# Interplay between kaon condensation and hyperons in highly dense matter
Takumi Muto
Department of Physics, Chiba Institute of Technology
2-1-1 Shibazono, Narashino, Chiba 275-0023, Japan
Email address: [email protected]
PACS: 05.30.Jp, 13.75.Jz, 26.60.+c, 95.35.+d
Keywords: kaon-baryon interaction, kaon condensation, hyperon, neutron stars, density isomer
## 1 Introduction
It has long been suggested that antikaon (\\(K^{-}\\)) condensation should be realized in high-density hadronic matter [1, 2, 3, 4, 5, 6, 7].1 It is characterized as a macroscopic appearance of strangeness in a strongly interacting kaon-baryon system, where chiral symmetry and its spontaneous breaking play a key role. If kaon condensation exists in neutron stars, it softens the hadronic equation of state (EOS), influencing the bulk structure of neutron stars such as mass-radius relations [5, 6, 7, 8]. Effects of the phase-equilibrium condition associated with the first-order phase transition on the inner structure of neutron stars have also been elucidated [9, 10, 11, 12, 13, 14]. With regard to the dynamical evolution of newly born neutron stars, delayed collapse of protoneutron stars accompanying a phase transition to the kaon-condensed phase has been discussed [15, 16, 17, 18]. The existence of kaon condensation is also important for the thermal evolution of neutron stars, since the neutrino emission processes are greatly enhanced in the presence of kaon condensates [19, 20, 21, 22].
Footnote 1: We consider antikaon (\\(K^{-}\\)) condensation, although we conventionally call it “kaon condensation”.
In the kaon-condensed phase in neutron stars, the net (negative) strangeness becomes abundant as a consequence of chemical equilibrium for the weak interaction processes, \\(n\\rightleftharpoons pK^{-}\\), \\(e^{-}\\rightleftharpoons K^{-}\\nu_{e}\\). At threshold, the onset condition for kaon condensation has been given by2
Footnote 2: Throughout this paper, the units \\(\\hbar=c=1\\) are used.
\\[\\omega(\\rho_{\\rm B})=\\mu\\, \\tag{1}\\]
where \\(\\omega(\\rho_{\\rm B})\\) is the lowest \\(K^{-}\\) energy obtained at the baryon number density \\(\\rho_{\\rm B}\\) from the zero point of the \\(K^{-}\\) inverse propagator, \\(D_{K}^{-1}(\\omega;\\rho_{\\rm B})=0\\), and \\(\\mu\\) is the charge chemical potential, which is equal to both the antikaon chemical potential \\(\\mu_{K}\\) and the electron chemical potential \\(\\mu_{e}\\) under the chemical equilibrium condition for the weak processes [23, 24]. This onset condition (1) is based on the assumption of a _continuous phase transition_ : Kaon condensation sets in with zero amplitude at a critical density, above which kaon condensates develop smoothly with increase in the baryon number density \\(\\rho_{\\rm B}\\). It has been shown that the onset condition (1) holds true even if hyperon (\\(Y\\)) particle-nucleon (\\(N\\)) hole excitation through the \\(p\\)-wave kaon-baryon interaction is taken into account [25, 26] in the ordinary neutron-star matter where only nucleons and leptons are in \\(\\beta\\) equilibrium.
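As a purely schematic illustration of Eq. (1), the sketch below locates the crossing of \\(\\omega(\\rho_{\\rm B})\\) and \\(\\mu(\\rho_{\\rm B})\\) by root-finding; the two parametrizations are invented monotone placeholders, not the functions computed in this paper:

```python
# Minimal sketch of the onset condition, Eq. (1): omega(rho_B) = mu(rho_B).
# Both parametrizations below are schematic placeholders (NOT this paper's
# results); in practice omega comes from D_K^{-1}(omega; rho_B) = 0 and mu
# from beta equilibrium of the noncondensed matter.
from scipy.optimize import brentq

m_K = 493.7     # free kaon mass [MeV]
rho0 = 0.16     # nuclear saturation density [fm^-3]

def omega(rho_B):   # hypothetical lowest K^- energy, decreasing with density
    u = rho_B / rho0
    return m_K * (1.0 - 0.35 * u / (1.0 + 0.25 * u))

def mu(rho_B):      # hypothetical charge chemical potential, rising with density
    return 120.0 * (rho_B / rho0) ** 0.6

rho_c = brentq(lambda r: omega(r) - mu(r), 0.1, 2.0)
print(f"onset density: rho_B^c ~ {rho_c:.2f} fm^-3 ({rho_c/rho0:.1f} rho_0)")
```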
Concerning another hadronic phase including strangeness, hyperons (\\(\\Lambda\\), \\(\\Sigma^{-}\\), \\(\\Xi^{-}\\), \\(\\cdots\\)) as well as nucleons and leptons have been expected to be mixed in the ground state of neutron-star matter [27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38]. We call the hyperon-mixed neutron-star matter _hyperonic matter_ throughout this paper. With regard to coexistence or competition of kaon condensation with hyperons in neutron stars, it has been pointed out that the onset density of the \\(s\\)-wave kaon condensation subject to the condition (1) in hyperonic matter is shifted to a higher density [7, 28, 29] : The electron carrying the negative charge is replaced by the negatively charged hyperon, so that the charge chemical potential \\(\\mu\\) (=\\(\\mu_{e}=(3\\pi^{2}\\rho_{e})^{1/3}\\)) is diminished as compared with the case of neutron-star matter without hyperons. As a result, the lowest \\(K^{-}\\) energy \\(\\omega(\\rho_{\\rm B})\\) meets the charge chemical potential \\(\\mu\\) at a higher density. Subsequently, several works on the onset density and the EOS for the kaon-condensed phase in hyperonic matter have been elaborated within the relativistic mean-field theory [39, 40] and the quark-meson coupling models [41, 42]. Recently, the in-medium kaon dynamics and mechanisms of kaon condensation stemming from the \\(s\\)- and \\(p\\)-wave kaon-baryon interactions in hyperonic matter have been investigated [43, 44].
It is emphasized here that most of the results on the onset mechanisms of kaon condensation in _hyperonic matter_ rely on the same assumption as in the case of the usual neutron-star matter where hyperons are not mixed, i.e., the assumption of the _continuous phase transition_ with the help of Eq. (1). In this paper, we reexamine the onset condition of kaon condensation realized from hyperonic matter. We consider the \\(s\\)-wave kaon condensation and incorporate the kaon-baryon interaction within the effective chiral Lagrangian. The nonrelativistic effective baryon-baryon interaction is taken into account, and the parameters are determined so as to reproduce the nuclear saturation properties and the baryon potential depths deduced from recent hypernuclear experiments [45]. We demonstrate that the assumption of a continuous phase transition cannot always be applied to the case where the _negatively charged hyperons_ (\\(\\Sigma^{-}\\)) are already present in the ground state of hyperonic matter, as a result of competition between the negatively charged kaons and hyperons. It will be shown that, at baryon densities slightly below the density where Eq. (1) is satisfied, there already exists another energy solution, for which kaons are condensed without mixing of the \\(\\Sigma^{-}\\) hyperons, in addition to the usual energy solution corresponding to the noncondensed state with the \\(\\Sigma^{-}\\) mixing. In particular, in the case of the stronger kaon-baryon attractive interaction, there is a discontinuous transition between these two states within a small density interval. Thus, from a theoretical viewpoint, one ought to be careful about the previous results concerning coexistence and/or competition of kaon condensates and \\(\\Sigma^{-}\\) hyperons, although quantitative effects resulting from the discontinuous phase transition are small.3
Footnote 3: Our discussion is concentrated on obtaining the energy solutions in the presence of hyperons under the constraints relevant to neutron stars. We don’t discuss here the prescription of the Gibbs condition for the phase equilibrium associated with a first-order phase transition [9, 10, 11, 12, 13, 14]. This issue will be reported elsewhere [46]. A first-order phase transition to the \\(K^{-}\\)–condensed phase has also been discussed in another context in Refs. [26, 44].
The interplay between \\(K^{-}\\) condensates and \\(\\Sigma^{-}\\) hyperons is also revealed in the EOS and in characteristic features of the fully developed kaon-condensed phase, such as the density dependence of the particle fractions. In the case of the stronger kaon-baryon attractive interaction, we will see that there appears a local energy minimum with respect to the baryon number density (a density isomer state) as a consequence of considerable softening of the EOS due to both kaon condensation and hyperon-mixing, together with recovery of the stiffness of the EOS at very high densities due to the increase in the repulsive interaction between baryons.
The paper is organized as follows. In Sec. 2, the formulation to obtain the effective energy density of the kaon-condensed phase in hyperonic matter is presented. In Sec. 3, numerical results for the composition of the noncondensed phase of hyperonic matter are given for the subsequent discussions. Section 4 is devoted to a discussion of the validity of the continuous phase transition. The results for the EOS of the kaon-condensed phase are given in Sec. 5. In Sec. 6, a summary and concluding remarks are given. In the Appendix, we remark that the two sets of parameters used in this paper for the baryon-baryon interaction models give different behaviors for the onset of \\(\\Lambda\\) and \\(\\Sigma^{-}\\) hyperons in ordinary neutron-star matter.
## 2 Formulation
### Outline of the kaon-condensed matter
To simplify and clarify the discussion of the interplay between kaon condensation and hyperons, we consider \\(s\\)-wave kaon condensation by setting the kaon momentum \\(|\\mathbf{k}|=0\\), and we take into account only the proton (\\(p\\)), \\(\\Lambda\\), neutron (\\(n\\)), and \\(\\Sigma^{-}\\) among the octet baryons, together with ultrarelativistic electrons, for the kaon-condensed hyperonic matter in neutron stars.
Within chiral symmetry, the classical kaon field as an order parameter of the \\(s\\)-wave kaon condensation is chosen to be a spatially uniform type:
\\[\\langle K^{-}\\rangle=\\frac{f}{\\sqrt{2}}\\theta e^{-i\\mu_{K}t}\\, \\tag{2}\\]
where \\(\\theta\\) is the chiral angle as an amplitude of condensation, and \\(f(\\sim f_{\\pi}\\)=93 MeV) is the meson decay constant.
We impose the charge neutrality condition and baryon number conservation, and construct the effective Hamiltonian density by introducing the charge chemical potential \\(\\mu\\) and the baryon number chemical potential \\(\\nu\\), respectively, as the Lagrange multipliers corresponding to these two conditions. The resulting effective energy density is then written in the form
\\[{\\cal E}_{\\rm eff}={\\cal E}+\\mu(\\rho_{p}-\\rho_{\\Sigma^{-}}-\\rho_{K^{-}}-\\rho_{e})+\\nu(\\rho_{p}+\\rho_{\\Lambda}+\\rho_{n}+\\rho_{\\Sigma^{-}})\\, \\tag{3}\\]
where \\({\\cal E}\\) is the total energy density of the kaon-condensed phase, and \\(\\rho_{i}\\) (\\(i\\)= \\(p\\), \\(\\Lambda\\), \\(n\\), \\(\\Sigma^{-}\\), \\(K^{-}\\), \\(e^{-}\\)) are the number densities of the particles \\(i\\). It is to be noted that the number density of the kaon condensates \\(\\rho_{K^{-}}\\) consists of the free kaon part and the kaon-baryon interaction part of the vector type. [See Eq. (18).] From the extremum conditions for \\({\\cal E}_{\\rm eff}\\) with respect to variation of \\(\\rho_{i}\\), one obtains the following relations,
\\[\\mu_{K} = \\mu_{e}=\\mu_{n}-\\mu_{p}=\\mu_{\\Sigma^{-}}-\\mu_{n}=\\mu\\, \\tag{4a}\\] \\[\\mu_{\\Lambda} = \\mu_{n}=-\\nu\\, \\tag{4b}\\]
where \\(\\mu_{i}\\) (\\(i\\)= \\(p\\), \\(\\Lambda\\), \\(n\\), \\(\\Sigma^{-}\\), \\(K^{-}\\), \\(e^{-}\\)) are the chemical potentials, which are given by \\(\\mu_{i}=\\partial{\\cal E}/\\partial\\rho_{i}\\). Equations (4a) and (4b) imply that the system is in chemical equilibrium for the weak interaction processes, \\(n\\rightleftharpoons pK^{-}\\), \\(n\\rightleftharpoons pe^{-}(\\bar{\\nu}_{e})\\), \\(ne^{-}\\rightleftharpoons\\Sigma^{-}(\\nu_{e})\\), and \\(n\\rightleftharpoons\\Lambda(\\nu_{e}\\bar{\\nu}_{e})\\).
### Kaon-baryon interaction
Our treatment of the kaon-baryon interaction is based on chiral symmetry, and we start with the effective chiral SU(3)\\({}_{L}\\times\\) SU(3)\\({}_{R}\\) Lagrangian [1].4 The relevant Lagrangian density, leading to the total energy density \\({\\cal E}\\), consists of the following parts:
Footnote 4: Except for setting \\(|\\mathbf{k}|\\)=0, the basic formulation presented here is the same as that in Ref. [43], where both \\(s\\)-wave and \\(p\\)-wave kaon-baryon interactions are incorporated.
\\[{\\cal L} = \\frac{1}{4}f^{2}\\,{\\rm Tr}\\,\\partial^{\\mu}\\Sigma^{\\dagger}\\partial_{\\mu}\\Sigma+\\frac{1}{2}f^{2}\\Lambda_{\\chi{\\rm SB}}({\\rm Tr}\\,M(\\Sigma-1)+{\\rm h.c.})\\] \\[+ {\\rm Tr}\\,\\overline{\\Psi}(i\\partial\\!\\!\\!/-M_{\\rm B})\\Psi+{\\rm Tr}\\,\\overline{\\Psi}i\\gamma^{\\mu}[V_{\\mu},\\Psi] \\tag{5}\\] \\[+ a_{1}{\\rm Tr}\\,\\overline{\\Psi}(\\xi M^{\\dagger}\\xi+{\\rm h.c.})\\Psi+a_{2}{\\rm Tr}\\,\\overline{\\Psi}\\Psi(\\xi M^{\\dagger}\\xi+{\\rm h.c.})+a_{3}({\\rm Tr}\\,M\\Sigma+{\\rm h.c.})\\,{\\rm Tr}\\,\\overline{\\Psi}\\Psi\\,\\]
where the first and second terms on the r.h.s. of Eq. (5) are the kinetic and mass terms of mesons, respectively. \\(\\Sigma\\) is the nonlinear meson field defined by \\(\\Sigma\\equiv e^{2i\\Pi/f}\\), where \\(\\Pi\\equiv\\sum_{a=1\\sim 8}\\pi_{a}T_{a}\\) with \\(\\pi_{a}\\) being the octet meson fields and \\(T_{a}\\) being the SU(3) generators. Since only charged kaon condensation is considered, \\(\\Pi\\) is simply given as
\\[\\Pi=\\frac{1}{\\sqrt{2}}\\left(\\begin{array}{ccc}0&0&K^{+}\\\\ 0&0&0\\\\ K^{-}&0&0\\end{array}\\right). \\tag{6}\\]
In the second term of Eq. (5), \\(\\Lambda_{\\chi\\rm SB}\\) is the chiral symmetry breaking scale, \\(\\sim 1\\) GeV, and \\(M\\) is the mass matrix, given by \\(M\\equiv{\\rm diag}(m_{u},m_{d},m_{s})\\) with the quark masses \\(m_{i}\\). The third term in Eq. (5) denotes the free baryon part, where \\(\\Psi\\) is the octet baryon field including only the \\(p\\), \\(\\Lambda\\), \\(n\\), \\(\\Sigma^{-}\\), and \\(M_{\\rm B}\\) is the baryon mass generated as a consequence of spontaneous chiral symmetry breaking. The fourth term in Eq. (5) gives the \\(s\\)-wave kaon-baryon interaction of the vector type, corresponding to the Tomozawa-Weinberg term, with \\(V_{\\mu}\\) being the mesonic vector current defined by \\(V_{\\mu}\\equiv 1/2(\\xi^{\\dagger}\\partial_{\\mu}\\xi+\\xi\\partial_{\\mu}\\xi^{\\dagger})\\) with \\(\\xi\\equiv\\Sigma^{1/2}\\). The last three terms in Eq. (5) give the \\(s\\)-wave meson-baryon interaction of the scalar type, which explicitly breaks chiral symmetry.5
Footnote 5: The same types of the scalar and vector interactions are derived in the quark-meson coupling model [47].
The quark masses \\(m_{i}\\) are chosen to be \\(m_{u}\\) = 6 MeV, \\(m_{d}\\) = 12 MeV, and \\(m_{s}\\) = 240 MeV. Together with these values, the parameters \\(a_{1}\\) and \\(a_{2}\\) are fixed to be \\(a_{1}\\) = \\(-\\)0.28, \\(a_{2}\\) = 0.56 so as to reproduce the empirical octet baryon mass splittings [1]. The parameter \\(a_{3}\\) is related to the kaon-nucleon (\\(KN\\)) sigma terms simulating the \\(s\\)-wave \\(KN\\) attraction of the scalar type through the expressions, \\(\\Sigma_{Kp}=-(a_{1}+a_{2}+2a_{3})(m_{u}+m_{s})\\), \\(\\Sigma_{Kn}=-(a_{2}+2a_{3})(m_{u}+m_{s})\\), evaluated at the on-shell Cheng-Dashen point for the effective chiral Lagrangian (5). Recent lattice calculations suggest the value of the \\(KN\\) sigma term \\(\\Sigma_{KN}\\)=(300\\(-\\)400) MeV [48]. We take the value of \\(a_{3}=-0.9\\), leading to \\(\\Sigma_{Kn}\\)=305 MeV, as a standard value. For comparison, we also take another value \\(a_{3}=-0.7\\), which leads to \\(\\Sigma_{Kn}\\)=207 MeV. The \\(K^{-}\\) optical potential in symmetric nuclear matter, \\(V_{\\rm opt}(\\rho_{\\rm B})\\), is estimated as a scale of the \\(K^{-}\\)-nucleon attractive interaction. It is defined by
\\[V_{\\rm opt}(\\rho_{\\rm B})=\\Pi_{K^{-}}(\\omega(\\rho_{\\rm B}),\\rho_{\\rm B})/2 \\omega(\\rho_{\\rm B})\\, \\tag{7}\\]
where \\(\\Pi_{K^{-}}\\left(\\omega(\\rho_{\\rm B}),\\rho_{\\rm B}\\right)\\) is the \\(K^{-}\\) self-energy at given \\(\\rho_{\\rm B}\\) with \\(\\rho_{p}=\\rho_{n}=\\rho_{\\rm B}/2\\). For \\(a_{3}=-0.9\\) (\\(a_{3}=-0.7\\)), \\(V_{\\rm opt}(\\rho_{0})\\) is estimated to be \\(-\\)115 MeV (\\(-\\)95 MeV) at the nuclear saturation density \\(\\rho_{0}\\) (=0.16 fm\\({}^{-3}\\)).
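These numbers can be cross-checked with a short numerical sketch. It assumes the leading-order \\(s\\)-wave self-energy implied by the \\(\\theta\\to 0\\) limit of the classical field equation [cf. Eq. (16) below], i.e. \\(D_{K}^{-1}(\\omega)=\\omega^{2}-m_{K}^{2}+[\\omega(\\rho_{p}+\\rho_{n}/2)+\\rho_{p}\\Sigma_{Kp}+\\rho_{n}\\Sigma_{Kn}]/f^{2}\\), without the range terms and \\(\\Lambda(1405)\\) contributions discussed below; it reproduces \\(\\Sigma_{Kn}\\simeq 305\\) (207) MeV and \\(V_{\\rm opt}(\\rho_{0})\\simeq -115\\) (\\(-95\\)) MeV for \\(a_{3}=-0.9\\) (\\(-0.7\\)):

```python
# Sketch: Sigma_KN sigma terms and the K^- optical potential of Eq. (7), with
# Pi_K = -[omega*(rho_p + rho_n/2) + rho_p*Sigma_Kp + rho_n*Sigma_Kn]/f^2
# (leading-order s-wave form; range terms and Lambda(1405) omitted, see below).
import numpy as np

hbarc = 197.327            # MeV fm
f, m_K = 93.0, 493.7       # meson decay constant, free kaon mass [MeV]
m_u, m_s = 6.0, 240.0      # quark masses [MeV]
a1, a2 = -0.28, 0.56
rho0 = 0.16 * hbarc**3     # saturation density in natural units [MeV^3]

for a3 in (-0.9, -0.7):
    Sigma_Kp = -(a1 + a2 + 2.0*a3) * (m_u + m_s)
    Sigma_Kn = -(a2 + 2.0*a3) * (m_u + m_s)
    rho_p = rho_n = rho0 / 2.0                       # symmetric nuclear matter
    b = (rho_p + 0.5*rho_n) / f**2                   # Tomozawa-Weinberg (vector) term [MeV]
    c = (rho_p*Sigma_Kp + rho_n*Sigma_Kn) / f**2     # sigma-term (scalar) attraction [MeV^2]
    # lowest K^- energy: zero of D_K^{-1}(omega) = omega^2 + b*omega - (m_K^2 - c)
    omega = 0.5 * (-b + np.sqrt(b**2 + 4.0*(m_K**2 - c)))
    V_opt = -(omega*b + c) / (2.0*omega)             # Eq. (7)
    print(f"a3 = {a3:+.1f}: Sigma_Kn = {Sigma_Kn:.0f} MeV, "
          f"omega(rho_0) = {omega:.0f} MeV, V_opt(rho_0) = {V_opt:.0f} MeV")
```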
In order to be consistent with the on-shell \\(s\\)-wave \\(K\\) (\\(\\bar{K}\\))-\\(N\\) scattering lengths, we have to take into account the range terms proportional to \\(\\omega^{2}\\) coming from the higher-order terms in chiral expansion and a pole contribution from the \\(\\Lambda(1405)\\)[4, 5]. Nevertheless, these contributions to the energy density become negligible in high-density matter. Therefore, we omit these correction terms throughout this paper and consider the simplified expression for the energy density of the kaon-condensed phase.
### Effective energy density
The total effective energy density \\({\\cal E}_{\\rm eff}\\) is separated into baryon, meson, and lepton parts as \\({\\cal E}_{\\rm eff}={\\cal E}_{\\rm eff}^{\\rm B}+{\\cal E}_{\\rm eff}^{\\rm M}+{\\cal E}_{\\rm eff}^{\\rm e}\\). The kaon-baryon interaction is incorporated in the baryon part \\({\\cal E}_{\\rm eff}^{\\rm B}\\), while the meson part \\({\\cal E}_{\\rm eff}^{\\rm M}\\) consists of the free classical kaons only; both parts are derived from the effective chiral Lagrangian (5). After the nonrelativistic reduction of the baryon part of the effective Hamiltonian by way of the Foldy-Wouthuysen-Tani transformation, and within the mean-field approximation, one obtains
\\[{\\cal E}_{\\rm eff}^{\\rm B}=\\sum_{i=p,\\Lambda,n,\\Sigma^{-}}\\sum_{ \\stackrel{{|{\\bf p}|\\leq|{\\bf p}_{F}(i)|}}{{s=\\pm 1/2}}}E_{{\\rm eff},s}^{(i)}({\\bf p })\\, \\tag{8}\\]
where \\({\\bf p}_{F}(i)\\) are the Fermi momenta, and the subscript '\\(s\\)' stands for the spin states for the baryon. The effective single-particle energies \\(E_{{\\rm eff},s}^{(i)}({\\bf p})\\) for the baryons \\(i\\) are represented by
\\[E_{{\\rm eff},s}^{(p)}({\\bf p}) = {\\bf p}^{2}/2M_{N}-(\\mu+\\Sigma_{Kp})(1-\\cos\\theta)+\\mu+\\nu\\, \\tag{9a}\\] \\[E_{{\\rm eff},s}^{(\\Lambda)}({\\bf p}) = {\\bf p}^{2}/2M_{N}-\\Sigma_{K\\Lambda}(1-\\cos\\theta)+\\delta M_{\\Lambda N}+\\nu\\,\\] (9b) \\[E_{{\\rm eff},s}^{(n)}({\\bf p}) = {\\bf p}^{2}/2M_{N}-\\Big{(}\\frac{1}{2}\\mu+\\Sigma_{Kn}\\Big{)}(1-\\cos\\theta)+\\nu\\,\\] (9c) \\[E_{{\\rm eff},s}^{(\\Sigma^{-})}({\\bf p}) = {\\bf p}^{2}/2M_{N}-\\Big{(}-\\frac{1}{2}\\mu+\\Sigma_{K\\Sigma^{-}}\\Big{)}(1-\\cos\\theta)+\\delta M_{\\Sigma^{-}N}-\\mu+\\nu\\, \\tag{9d}\\]
where \\(M_{N}\\) is the nucleon mass, \\(\\delta M_{\\Lambda N}\\) (= 176 MeV) is the \\(\\Lambda\\)-\\(N\\) mass difference and \\(\\delta M_{\\Sigma^{-}N}\\) (= 258 MeV) the \\(\\Sigma^{-}\\)-\\(N\\) mass difference. The \"kaon-hyperon sigma terms\" are defined by \\(\\Sigma_{K\\Lambda}\\equiv-\\left(\\frac{5}{6}a_{1}+\\frac{5}{6}a_{2}+2a_{3}\\right) (m_{u}+m_{s})\\) and \\(\\Sigma_{K\\Sigma^{-}}\\equiv-(a_{2}+2a_{3})(m_{u}+m_{s})\\) (=\\(\\Sigma_{Kn}\\)). It is to be noted that each term in Eqs. (9) contains both the kaon-baryon attraction of the scalar type simulated by the \"sigma term\" and the kaon-baryon interaction of the vector type proportional to \\(\\mu\\) the coefficient of which is given by the V-spin charge of each baryon.
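For orientation, the kaon-hyperon sigma terms defined above can be evaluated with the same parameters as in Sec. 2.2; the values below follow directly from the definitions (they are not quoted in the text):

```python
# Sketch: evaluate the "kaon-hyperon sigma terms" from their definitions.
m_u, m_s = 6.0, 240.0        # quark masses [MeV]
a1, a2 = -0.28, 0.56
for a3 in (-0.9, -0.7):
    Sigma_KL = -((5.0/6.0)*a1 + (5.0/6.0)*a2 + 2.0*a3) * (m_u + m_s)  # Sigma_{K Lambda}
    Sigma_KS = -(a2 + 2.0*a3) * (m_u + m_s)                           # Sigma_{K Sigma^-} = Sigma_{Kn}
    print(f"a3 = {a3:+.1f}: Sigma_KLambda = {Sigma_KL:.0f} MeV, Sigma_KSigma = {Sigma_KS:.0f} MeV")
```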
The meson contribution to the effective energy density, \\({\\cal E}_{\\rm eff}^{\\rm M}\\), is given by the substitution of the classical kaon field (2) into the meson part of the effective Hamiltonian :
\\[{\\cal E}_{\\rm eff}^{\\rm M}=-\\frac{1}{2}f^{2}\\mu^{2}\\sin^{2}\\theta+f^{2}m_{K}^ {2}(1-\\cos\\theta)\\, \\tag{10}\\]
where \\(m_{K}\\equiv[\\Lambda_{\\chi{\\rm SB}}(m_{u}+m_{s})]^{1/2}\\) is identified with the free kaon mass and is replaced by its experimental value, 493.7 MeV. The lepton contribution to the effective energy density is given as
\\[{\\cal E}_{\\rm eff}^{\\rm e}=\\frac{\\mu^{4}}{4\\pi^{2}}-\\mu\\frac{\\mu^{3}}{3\\pi^{2 }}=-\\frac{\\mu^{4}}{12\\pi^{2}} \\tag{11}\\]
with \\(\\rho_{e}=\\mu^{3}/(3\\pi^{2})\\) for the ultrarelativistic electrons.
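For concreteness, Eqs. (10) and (11) translate into the following small helper functions (a sketch in natural units, \\(\\hbar=c=1\\); the \\(\\theta\\) and \\(\\mu\\) values at the end are illustrative only):

```python
# Sketch: classical-kaon and electron contributions, Eqs. (10) and (11).
# Results are in MeV^4; divide by (hbar*c)^3 to convert to MeV fm^-3.
import numpy as np

hbarc, f, m_K = 197.327, 93.0, 493.7   # MeV fm, MeV, MeV

def E_meson(theta, mu):
    """Eq. (10): free classical-kaon energy density [MeV^4]."""
    return -0.5 * f**2 * mu**2 * np.sin(theta)**2 + f**2 * m_K**2 * (1.0 - np.cos(theta))

def E_electron(mu):
    """Eq. (11): ultrarelativistic-electron contribution [MeV^4]."""
    return -mu**4 / (12.0 * np.pi**2)

theta, mu = 0.7, 150.0                 # illustrative values only
print(E_meson(theta, mu) / hbarc**3, "MeV fm^-3")
print(E_electron(mu) / hbarc**3, "MeV fm^-3")
```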
### Baryon potentials
We introduce a potential energy density \\({\\cal E}_{\\rm pot}\\) as a local effective baryon-baryon interaction, which is assumed to be given by functions of the number densities of the relevant baryons [36]. In order to take into account the baryon potential effects on both the whole energy of the system and the baryon single-particle energies consistently, we take the following prescription: The baryon potential \\(V_{i}\\) (\\(i=p,\\Lambda\\), \\(n\\), \\(\\Sigma^{-}\\)) is defined as
\\[V_{i}=\\partial{\\cal E}_{\\rm pot}/\\partial\\rho_{i} \\tag{12}\\]
with \\(\\rho_{i}\\) being the number density of baryon \\(i\\), and it is added to each effective single particle energy, \\(E^{(i)}_{{\\rm eff},s}({\\bf p})\\to E^{\\prime(i)}_{{\\rm eff},s}({\\bf p})=E^{(i)}_{{\\rm eff},s}({\\bf p})+V_{i}\\). The potential energy density \\({\\cal E}_{\\rm pot}\\) is added to the total effective energy density \\({\\cal E}_{\\rm eff}\\), and the term \\(\\sum_{i=p,\\Lambda,n,\\Sigma^{-}}\\rho_{i}V_{i}\\) is subtracted to avoid double counting of the baryon interaction energies in the sum over the effective single particle energies \\({E^{\\prime}}^{(i)}_{{\\rm eff},s}({\\bf p})\\). Accordingly, the baryon part of the effective energy density is modified as
\\[{\\cal E}^{\\prime\\rm B}_{\\rm eff} = \\sum_{i=p,\\Lambda,n,\\Sigma^{-}}\\sum_{\\begin{subarray}{c}|{\\bf p}|\\leq|{\\bf p}_{F}(i)|\\\\ s=\\pm 1/2\\end{subarray}}E^{\\prime(i)}_{{\\rm eff},s}({\\bf p})+{\\cal E}_{\\rm pot}-\\sum_{i=p,\\Lambda,n,\\Sigma^{-}}\\rho_{i}V_{i} \\tag{13}\\] \\[= \\frac{3}{5}\\frac{(3\\pi^{2})^{2/3}}{2M_{N}}(\\rho_{p}^{5/3}+\\rho_{\\Lambda}^{5/3}+\\rho_{n}^{5/3}+\\rho_{\\Sigma^{-}}^{5/3})+(\\rho_{\\Lambda}\\delta M_{\\Lambda p}+\\rho_{\\Sigma^{-}}\\delta M_{\\Sigma^{-}n})+{\\cal E}_{\\rm pot}\\] \\[- \\Bigg{\\{}\\rho_{p}(\\mu+\\Sigma_{Kp})+\\rho_{\\Lambda}\\Sigma_{K\\Lambda}+\\rho_{n}\\Big{(}\\frac{1}{2}\\mu+\\Sigma_{Kn}\\Big{)}+\\rho_{\\Sigma^{-}}\\Big{(}-\\frac{1}{2}\\mu+\\Sigma_{K\\Sigma^{-}}\\Big{)}\\Bigg{\\}}(1-\\cos\\theta)\\] \\[+ \\mu(\\rho_{p}-\\rho_{\\Sigma^{-}})+\\nu\\rho_{\\rm B}\\.\\]
The total effective energy density \\({\\cal E}^{\\prime}_{\\rm eff}\\) is obtained as the sum of the baryon, meson, and lepton parts whose explicit forms are given by Eqs. (13), (10), and (11), respectively. For later convenience, we also show the total energy density \\({\\cal E}^{\\prime}\\) including the potential contribution for baryons:
\\[{\\cal E}^{\\prime} = \\frac{3}{5}\\frac{(3\\pi^{2})^{2/3}}{2M_{N}}\\Big{(}\\rho_{p}^{5/3}+ \\rho_{\\Lambda}^{5/3}+\\rho_{n}^{5/3}+\\rho_{\\Sigma^{-}}^{5/3}\\Big{)} \\tag{14}\\] \\[+ (\\rho_{\\Lambda}\\delta M_{\\Lambda p}+\\rho_{\\Sigma^{-}}\\delta M_{ \\Sigma^{-}n})+{\\cal E}_{\\rm pot}\\] \\[- (\\rho_{p}\\Sigma_{Kp}+\\rho_{\\Lambda}\\Sigma_{K\\Lambda}+\\rho_{n} \\Sigma_{Kn}+\\rho_{\\Sigma^{-}}\\Sigma_{K\\Sigma^{-}})\\left(1-\\cos\\theta\\right)\\] \\[+ \\frac{1}{2}f^{2}\\mu^{2}\\sin^{2}\\theta+f^{2}m_{K}^{2}(1-\\cos\\theta )+\\mu^{4}/(4\\pi^{2})\\,\\]
where the first term on the right hand side denotes the baryon kinetic energy, the second term comes from the mass difference between the hyperons and nucleons, the third term the baryon potential energy, the fourth term the \\(s\\)-wave kaon-baryon scalar interaction brought about by the kaon-baryon sigma terms, the fifth and sixth terms the free parts of the condensed kaon energy (kinetic energy and free mass), and the last term stands for the lepton kinetic energy.
For the hyperonic matter composed of \\(p\\), \\(\\Lambda\\), \\(n\\), and \\(\\Sigma^{-}\\), the potential energy density \\({\\cal E}_{\\rm pot}\\) is given by
\\[{\\cal E}_{\\rm pot} = \\frac{1}{2}\\Big{[}a_{NN}(\\rho_{p}+\\rho_{n})^{2}+b_{NN}(\\rho_{p}-\\rho_{n})^{2}+c_{NN}(\\rho_{p}+\\rho_{n})^{\\delta+1}\\Big{]} \\tag{15}\\] \\[+ a_{\\Lambda N}(\\rho_{p}+\\rho_{n})\\rho_{\\Lambda}+c_{\\Lambda N}\\Bigg{[}\\frac{(\\rho_{p}+\\rho_{n})^{\\gamma+1}}{\\rho_{p}+\\rho_{n}+\\rho_{\\Lambda}}\\rho_{\\Lambda}+\\frac{\\rho_{\\Lambda}^{\\gamma+1}}{\\rho_{p}+\\rho_{n}+\\rho_{\\Lambda}}(\\rho_{p}+\\rho_{n})\\Bigg{]}+\\frac{1}{2}\\big{(}a_{YY}\\rho_{\\Lambda}^{2}+c_{YY}\\rho_{\\Lambda}^{\\gamma+1}\\big{)}\\] \\[+ a_{\\Sigma N}(\\rho_{p}+\\rho_{n})\\rho_{\\Sigma^{-}}+b_{\\Sigma N}(\\rho_{n}-\\rho_{p})\\rho_{\\Sigma^{-}}+c_{\\Sigma N}\\Bigg{[}\\frac{(\\rho_{p}+\\rho_{n})^{\\gamma+1}}{\\rho_{p}+\\rho_{n}+\\rho_{\\Sigma^{-}}}\\rho_{\\Sigma^{-}}+\\frac{\\rho_{\\Sigma^{-}}^{\\gamma+1}}{\\rho_{p}+\\rho_{n}+\\rho_{\\Sigma^{-}}}(\\rho_{p}+\\rho_{n})\\Bigg{]}\\] \\[+ a_{YY}\\rho_{\\Sigma^{-}}\\rho_{\\Lambda}+c_{YY}\\Bigg{[}\\frac{\\rho_{\\Sigma^{-}}^{\\gamma+1}}{\\rho_{\\Sigma^{-}}+\\rho_{\\Lambda}}\\rho_{\\Lambda}+\\frac{\\rho_{\\Lambda}^{\\gamma+1}}{\\rho_{\\Sigma^{-}}+\\rho_{\\Lambda}}\\rho_{\\Sigma^{-}}\\Bigg{]}+\\frac{1}{2}\\Big{[}(a_{YY}+b_{\\Sigma\\Sigma})\\rho_{\\Sigma^{-}}^{2}+c_{YY}\\rho_{\\Sigma^{-}}^{\\gamma+1}\\Big{]}\\.\\]

The parameters in the potential energy density (15) are determined as follows: (i) The parameters \\(a_{NN}\\) and \\(c_{NN}\\) in the \\(NN\\) part are fixed so as to reproduce the standard nuclear saturation density \\(\\rho_{0}\\)=0.16 fm\\({}^{-3}\\) and the binding energy \\(-\\)16 MeV in symmetric nuclear matter. With the parameters \\(a_{NN}\\), \\(c_{NN}\\), and \\(\\delta\\), the incompressibility \\(K\\) in symmetric nuclear matter is obtained. The parameter \\(b_{NN}\\) for the isospin-dependent term in the \\(NN\\) part is chosen to reproduce the empirical value of the symmetry energy, \\(\\sim\\) 30 MeV, at \\(\\rho_{\\rm B}=\\rho_{0}\\). (ii) For the \\(YN\\) parts, \\(a_{\\Lambda N}\\) and \\(c_{\\Lambda N}\\) are basically taken to be the same as those in Ref. [36], where the single \\(\\Lambda\\) orbitals in ordinary hypernuclei are reasonably fitted. The depth of the \\(\\Lambda\\) potential in nuclear matter is then given as \\(V_{\\Lambda}(\\rho_{p}=\\rho_{n}=\\rho_{0}/2)=a_{\\Lambda N}\\rho_{0}+c_{\\Lambda N}\\rho_{0}^{\\gamma}\\)=\\(-\\)27 MeV [49]. The depth of the \\(\\Sigma^{-}\\) potential \\(V_{\\Sigma^{-}}\\) in nuclear matter is taken to be repulsive, following recent theoretical calculations [50, 51] and the phenomenological analyses of the (\\(K^{-}\\), \\(\\pi^{\\pm}\\)) reactions at BNL [52, 53], the (\\(\\pi^{-}\\), \\(K^{+}\\)) reactions at KEK [54, 55, 56], and the \\(\\Sigma^{-}\\) atom data [57]: \\(V_{\\Sigma^{-}}(\\rho_{p}=\\rho_{n}=\\rho_{0}/2)=a_{\\Sigma N}\\rho_{0}+c_{\\Sigma N}\\rho_{0}^{\\gamma}\\)=23.5 MeV and \\(b_{\\Sigma N}\\rho_{0}\\)=40.2 MeV. This choice of the parameters corresponds to the values in Ref. [53] based on the Nijmegen model F. (iii) Since the experimental information on the \\(YY\\) interactions is insufficient, we take the same parameters for the \\(YY\\) part as those in Ref. [36].
Taking into account the conditions (i) \\(\\sim\\) (iii), we adopt the following two parameter sets throughout this paper: (A) \\(\\delta\\)=\\(\\gamma\\)=5/3. In this case, one obtains \\(K\\)=306 MeV, which is larger than the standard empirical value 210\\(\\pm\\)30 MeV [58]. (B) \\(\\delta\\)=4/3 and \\(\\gamma\\)=2.0. From the choice \\(\\delta\\)=4/3, one obtains \\(K\\)=236 MeV which lies within the empirical value. The choice \\(\\gamma\\)=2.0 leads to the stiffer EOS for hyperonic matter at high densities compared with the case (A). Numerical values of the parameter sets (A) and (B) are listed in Table 1. Here we abbreviate the EOS for hyperonic matter with the use of (A) and (B) as H-EOS (A) and H-EOS (B), respectively.
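As a numerical cross-check of step (i), the sketch below fixes \\(a_{NN}\\), \\(c_{NN}\\) from the saturation conditions and \\(b_{NN}\\) from the symmetry energy (\\(\\simeq 30\\) MeV), assuming \\(M_{N}=939\\) MeV and the nonrelativistic kinetic energy of Eq. (14); it reproduces the quoted incompressibilities, \\(K\\simeq 306\\) MeV for \\(\\delta=5/3\\) and \\(K\\simeq 236\\) MeV for \\(\\delta=4/3\\) (Table 1 itself is not reproduced here, so the printed parameters are only a consistency check of the stated constraints):

```python
# Sketch of step (i): fix a_NN, c_NN from saturation (E/A = -16 MeV, P = 0 at
# rho_0 = 0.16 fm^-3), b_NN from S(rho_0) ~ 30 MeV, then evaluate K.
import numpy as np

hbarc, M_N, rho0 = 197.327, 939.0, 0.16          # MeV fm, MeV, fm^-3
kF = (1.5 * np.pi**2 * rho0)**(1.0/3.0) * hbarc  # Fermi momentum, symmetric matter [MeV]
T0 = 0.6 * kF**2 / (2.0 * M_N)                   # kinetic energy per nucleon [MeV]

for delta in (5.0/3.0, 4.0/3.0):
    # E/A = T0 u^(2/3) + (x u + y u^delta)/2 with u = rho/rho0,
    # x = a_NN rho0 and y = c_NN rho0^delta.
    # Conditions: E/A(u=1) = -16 MeV and d(E/A)/du|_(u=1) = 0.
    A = np.array([[0.5, 0.5], [0.5, 0.5*delta]])
    rhs = np.array([-16.0 - T0, -(2.0/3.0)*T0])
    x, y = np.linalg.solve(A, rhs)
    a_NN, c_NN = x / rho0, y / rho0**delta
    # K = 9 rho0^2 d^2(E/A)/drho^2 = 9 [ -(2/9) T0 + (1/2) delta (delta-1) y ]
    K = 9.0 * (-(2.0/9.0)*T0 + 0.5*delta*(delta - 1.0)*y)
    # symmetry energy: S(rho0) = (5/9) T0 + b_NN rho0 / 2  ->  fix b_NN from S = 30 MeV
    b_NN = 2.0 * (30.0 - (5.0/9.0)*T0) / rho0
    print(f"delta={delta:.3f}: a_NN={a_NN:7.1f} MeV fm^3, c_NN={c_NN:7.1f}, "
          f"b_NN={b_NN:6.1f} MeV fm^3, K={K:5.0f} MeV")
```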
### Physical constraints
The energy density and physical quantities in the ground state are obtained variationally by the extremization of the total effective energy density \\({\\cal E}^{\\prime}_{\\rm eff}\\) with respect to \\(\\theta\\), \\(\\mu\\), and each number density of the baryon \\(i\\) at a given density \\(\\rho_{\\rm B}\\). From \\(\\partial{\\cal E}^{\\prime}_{\\rm eff}/\\partial\\theta=0\\), one obtains the classical field equation for \\(\\theta\\),
\\[\\sin\\theta\\Bigg{[}\\mu^{2}\\cos\\theta-m_{K}^{2}+\\frac{\\mu}{f^{2}}\\Big{(}\\rho_{p}+ \\frac{1}{2}\\rho_{n}-\\frac{1}{2}\\rho_{\\Sigma^{-}}\\Big{)}+\\frac{1}{f^{2}}\\sum_{i= p,\\Lambda,n,\\Sigma^{-}}\\rho_{i}\\Sigma_{Ki}\\Bigg{]}=0. \\tag{16}\\]
From \\(\\partial{\\cal E}^{\\prime}_{\\rm eff}/\\partial\\mu=0\\), one obtains the charge neutrality condition,
\\[\\rho_{p}-\\rho_{\\Sigma^{-}}-\\rho_{K^{-}}-\\rho_{e}=0\\, \\tag{17}\\]
where the number density of the kaon condensates \\(\\rho_{K^{-}}\\) is given as
\\[\\rho_{K^{-}}=\\mu f^{2}\\sin^{2}\\theta+\\left(\\rho_{p}+\\frac{1}{2}\\rho_{n}-\\frac {1}{2}\\rho_{\\Sigma^{-}}\\right)(1-\\cos\\theta). \\tag{18}\\]
From \\(\\partial{\\cal E}^{\\prime}_{\\rm eff}/\\partial\\nu=\\rho_{\\rm B}\\), one obtains the baryon number conservation,
\\[\\sum_{i=p,\\Lambda,n,\\Sigma^{-}}\\rho_{i}=\\rho_{\\rm B}. \\tag{19}\\]
The chemical equilibrium conditions for the weak interaction processes (4), \\(n\\rightleftharpoons pe^{-}(\\bar{\\nu}_{e})\\), \\(n\\rightleftharpoons\\Lambda(\\nu_{e}\\bar{\\nu}_{e})\\), \\(ne^{-}\\rightleftharpoons\\Sigma^{-}(\\nu_{e})\\), are rewritten as:
\\[\\mu_{n} = \\mu_{p}+\\mu\\, \\tag{20a}\\] \\[\\mu_{\\Lambda} = \\mu_{n}\\,\\] (20b) \\[\\mu_{\\Sigma^{-}} = \\mu_{n}+\\mu\\, \\tag{20c}\\]
where the chemical potentials for the baryons are given by \\(\\mu_{i}=\\partial{\\cal E}^{\\prime}/\\partial\\rho_{i}\\) with the help of Eqs. (14), (16) and (18) :
\\[\\mu_{n} = \\frac{(3\\pi^{2}\\rho_{n})^{2/3}}{2M_{N}}-\\Big{(}\\frac{1}{2}\\mu+ \\Sigma_{Kn}\\Big{)}(1-\\cos\\theta)+V_{n}\\, \\tag{21a}\\] \\[\\mu_{p} = \\frac{(3\\pi^{2}\\rho_{p})^{2/3}}{2M_{N}}-(\\mu+\\Sigma_{Kp})(1-\\cos \\theta)+V_{p}\\,\\] (21b) \\[\\mu_{\\Lambda} = \\frac{(3\\pi^{2}\\rho_{\\Lambda})^{2/3}}{2M_{N}}-\\Sigma_{K\\Lambda}( 1-\\cos\\theta)+\\delta M_{\\Lambda N}+V_{\\Lambda}\\,\\] (21c) \\[\\mu_{\\Sigma^{-}} = \\frac{(3\\pi^{2}\\rho_{\\Sigma^{-}})^{2/3}}{2M_{N}}-\\Big{(}-\\frac{1} {2}\\mu+\\Sigma_{K\\Sigma^{-}}\\Big{)}(1-\\cos\\theta)+\\delta M_{\\Sigma^{-}N}+V_{ \\Sigma^{-}}. \\tag{21d}\\]
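To illustrate how these constraints are used in practice, the sketch below solves \\(\\beta\\) equilibrium for the simplest subcase only: noncondensed (\\(\\theta=0\\)) \\(n\\), \\(p\\), \\(e^{-}\\) matter with the \\(NN\\) part of Eq. (15), using the parameter-set (A) value of \\(b_{NN}\\) obtained in the fit sketched above (only the isospin-dependent term survives in \\(V_{n}-V_{p}\\)). Hyperons and kaons are switched off, so this is not the full calculation behind Figs. 1 and 2:

```python
# Sketch: beta equilibrium mu_n = mu_p + mu_e [Eq. (20a)] for noncondensed
# (theta = 0) n, p, e matter, with charge neutrality rho_e = rho_p [Eq. (17)]
# and mu_n, mu_p from Eqs. (21a), (21b). Only the isospin term of the NN part
# of E_pot [Eq. (15)] enters V_n - V_p = -2 b_NN (rho_p - rho_n).
import numpy as np
from scipy.optimize import brentq

hbarc, M_N, rho0 = 197.327, 939.0, 0.16   # MeV fm, MeV, fm^-3
b_NN = 221.5                              # MeV fm^3, from the fit sketched above (set (A))

def mu_kin(rho):   # (3 pi^2 rho)^{2/3} / (2 M_N), rho in fm^-3 -> MeV
    return (3.0*np.pi**2*rho)**(2.0/3.0) * hbarc**2 / (2.0*M_N)

def beta_eq(rho_p, rho_B):
    rho_n = rho_B - rho_p
    mu_e = (3.0*np.pi**2*rho_p)**(1.0/3.0) * hbarc    # ultrarelativistic electrons
    return mu_kin(rho_n) - mu_kin(rho_p) - 2.0*b_NN*(rho_p - rho_n) - mu_e

for rho_B in (rho0, 2.0*rho0, 3.0*rho0):
    rho_p = brentq(beta_eq, 1e-6, 0.5*rho_B, args=(rho_B,))
    print(f"rho_B = {rho_B:.2f} fm^-3 : proton fraction x_p = {rho_p/rho_B:.3f}")
```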
## 3 Composition of matter in the noncondensed phase
The critical density satisfying Eq. (1) depends sensitively on the density dependence of \\(\\mu\\), which is in turn affected by the matter composition through the relation \\(\\mu=\\mu_{e}=(3\\pi^{2}\\rho_{e})^{1/3}\\). Therefore, before going into detail about the onset density of kaon condensation, we address the behavior of the particle fractions in the noncondensed hyperonic matter.
In Figs. 1 and 2, particle fractions \\(\\rho_{i}/\\rho_{\\rm B}\\) (\\(i=p,\\Lambda,n,\\Sigma^{-}\\), \\(e^{-}\\)) in the noncondensed hyperonic matter are shown as functions of baryon number density \\(\\rho_{\\rm B}\\) for H-EOS (A) and (B), respectively.
In both figures, the dashed lines stand for the ratio of the total negative strangeness number density \\(\\rho_{\\rm strange}\\)(=\\(\\rho_{\\Lambda}+\\rho_{\\Sigma^{-}}\\)) to the baryon number density \\(\\rho_{\\rm B}\\).
In the case of H-EOS (A), the \\(\\Lambda\\) hyperon starts to be mixed at \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{c}(\\Lambda)=0.340\\) fm\\({}^{-3}\\) (= 2.13 \\(\\rho_{0}\\)) and the \\(\\Sigma^{-}\\) hyperon at a higher density, \\(\\rho_{\\rm B}\\)= \\(\\rho_{\\rm B}^{c}(\\Sigma^{-})\\) = 0.525 fm\\({}^{-3}\\) (= 3.28 \\(\\rho_{0}\\)) (Fig. 1). In the case of H-EOS (B), both hyperons start to be mixed at higher densities than in the case of H-EOS (A), i.e., at \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{c}(\\Lambda)\\sim 0.44\\) fm\\({}^{-3}\\) (= 2.69 \\(\\rho_{0}\\)) for the \\(\\Lambda\\) and \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{c}(\\Sigma^{-})\\sim 0.59\\) fm\\({}^{-3}\\) (= 3.69 \\(\\rho_{0}\\)) for the \\(\\Sigma^{-}\\), respectively (Fig. 2). In fact, the condition for the \\(\\Lambda\\)-mixing in the (\\(n\\), \\(p\\), \\(e^{-}\\)) matter, for instance, is written by the use of Eqs. (21a) and (21c) as
\\[\\delta M_{\\Lambda N}+V_{\\Lambda}\\leq\\frac{(3\\pi^{2}\\rho_{n})^{2/3}}{2M_{N}}+V_ {n}\\, \\tag{22}\\]
where \\(V_{\\Lambda}=a_{\\Lambda N}\\rho_{\\rm B}+c_{\\Lambda N}\\rho_{\\rm B}^{\\gamma}\\) and \\(V_{n}=a_{NN}\\rho_{\\rm B}-b_{NN}(\\rho_{p}-\\rho_{n})+\\frac{1}{2}c_{NN}(\\delta+1)\\rho_{\\rm B}^{\\delta}\\). In the case of H-EOS (B), the index \\(\\delta\\) (=4/3) is smaller than that for H-EOS (A) (=5/3), which makes the repulsive contribution to \\(V_{n}\\) smaller than that for H-EOS (A). Furthermore, the index \\(\\gamma\\) (=2) is larger than that for H-EOS (A) (=5/3), which makes the repulsive contribution to \\(V_{\\Lambda}\\) larger than that for H-EOS (A). Both effects push up the threshold density for the condition (22) as compared with the case of H-EOS (A).
In general, a smaller value of the index \\(\\delta\\), simulating the higher-order terms of the repulsive nucleon-nucleon interactions, gives smaller potential energy contributions for the nucleons. A larger value of the index \\(\\gamma\\), simulating the higher-order repulsive terms of the hyperon-nucleon and hyperon-hyperon interactions, gives larger potential energy contributions for the hyperons. As a result, the beta equilibrium conditions for the hyperons, \\(n\\rightleftharpoons\\Lambda\\) (\\(\\nu_{e}\\bar{\\nu}_{e}\\)), \\(ne^{-}\\rightleftharpoons\\Sigma^{-}\\) (\\(\\nu_{e}\\)), are satisfied at higher densities for H-EOS (B) than in the case of H-EOS (A).
It should be noted that, in the case of H-EOS (B), the \\(\\Lambda\\) and \\(\\Sigma^{-}\\) start to appear in the ground state of neutron-star matter in such a way that the mixing ratios, \\(\\rho_{\\Lambda}/\\rho_{\\rm B}\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\), jump discontinuously from zero to finite values above certain densities. This behavior differs from the usual one, where the hyperon-mixing ratios increase continuously from zero as the density increases, as in the case of H-EOS (A). (See the Appendix.)
One can see a common behavior with regard to the density dependence of each particle fraction for H-EOS (A) and H-EOS (B) : As the hyperons dominate the matter composition with increase in \\(\\rho_{\\rm B}\\), the electron fraction decreases. In particular, the negative charge of the electron is taken over by that of the \\(\\Sigma^{-}\\) hyperon, so that the electron fraction decreases rapidly, while the \\(\\Sigma^{-}\\) fraction increases as the density increases. The proton fraction increases so as to compensate for the negative charge of the \\(\\Sigma^{-}\\). At high densities, the \\(\\Sigma^{-}\\) and proton fractions amount to (20\\(-\\)30) %, the \\(\\Lambda\\) fraction to (30\\(-\\)40) %, and the fraction of total negative strangeness to (50\\(-\\)60) %. As a result of the increase in the fractions of the proton, \\(\\Lambda\\), and \\(\\Sigma^{-}\\), the neutron fraction decreases rapidly with increase in \\(\\rho_{\\rm B}\\).
## 4 Validity of continuous phase transition
In ordinary neutron-star matter without hyperon-mixing, the onset density for kaon condensation is given by the condition, \\(\\omega=\\mu\\) [Eq. (1) ]. Here the lowest energy \\(\\omega\\) for \\(K^{-}\\) is obtained from the zero point of the inverse propagator for \\(K^{-}\\), \\(D_{K}^{-1}(\\omega;\\rho_{\\rm B})\\), which can be read from expansion of the total effective energy density \\({\\cal E}^{\\prime}_{\\rm eff}\\) with respect to the chiral angle \\(\\theta\\) around \\(\\theta=0\\):
\\[{\\cal E}^{\\prime}_{\\rm eff}(\\theta)={\\cal E}^{\\prime}_{\\rm eff}(0)-\\frac{f^{2 }}{2}D_{K}^{-1}(\\mu;\\rho_{\\rm B})\\theta^{2}+O(\\theta^{4}). \\tag{23}\\]
This onset condition, \\(D_{K}^{-1}(\\mu;\\rho_{\\rm B})=0\\), is equal to the nontrivial classical kaon-field equation (16) with \\(\\theta=0\\), and is based on the assumption of the continuous phase transition : the chiral angle \\(\\theta\\), for instance, increases continuously from zero as \\(\\rho_{\\rm B}\\) increases. In this section, we consider the validity of the assumption of the continuous phase transition to \\(K^{-}\\) condensation in hyperonic matter. Numerical results obtained with H-EOS (A) and H-EOS (B) for the noncondensed hyperonic matter EOS are presented in Secs. 4.1 and 4.2, respectively.
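The identification of \\(D_{K}^{-1}\\) from the \\(\\theta^{2}\\) coefficient in Eq. (23) can be verified symbolically; the sketch below expands the \\(\\theta\\)-dependent part of \\({\\cal E}^{\\prime}_{\\rm eff}\\) [Eqs. (10) and (13)] and checks that the coefficient equals \\(-(f^{2}/2)D_{K}^{-1}(\\mu)\\), with \\(D_{K}^{-1}\\) the bracket of Eq. (16) at \\(\\theta=0\\):

```python
# Sketch: expand the theta-dependent part of E'_eff [Eqs. (10), (13)] to
# O(theta^2) and compare with -(f^2/2) D_K^{-1}(mu) theta^2, Eq. (23).
import sympy as sp

th, mu, f, mK = sp.symbols('theta mu f m_K', positive=True)
rp, rL, rn, rS = sp.symbols('rho_p rho_Lambda rho_n rho_Sigma', positive=True)
Sp, SL, Sn, SS = sp.symbols('Sigma_Kp Sigma_KLambda Sigma_Kn Sigma_KSigma')

# kaon-baryon term of Eq. (13) plus the meson part, Eq. (10)
X = rp*(mu + Sp) + rL*SL + rn*(mu/2 + Sn) + rS*(-mu/2 + SS)
E_theta = (-X*(1 - sp.cos(th))
           - sp.Rational(1, 2)*f**2*mu**2*sp.sin(th)**2
           + f**2*mK**2*(1 - sp.cos(th)))

coeff = sp.series(E_theta, th, 0, 4).removeO().coeff(th, 2)
# bracket of the classical field equation (16) at theta = 0
D_K_inv = mu**2 - mK**2 + (mu*(rp + rn/2 - rS/2) + rp*Sp + rL*SL + rn*Sn + rS*SS)/f**2
print(sp.simplify(coeff + f**2/2 * D_K_inv))   # -> 0
```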
### Case of H-EOS (A)
In Fig. 3, we show the lowest energies of the \\(K^{-}\\) as functions of baryon number density \\(\\rho_{\\rm B}\\) for \\(\\Sigma_{Kn}=\\) 305 MeV (bold solid line) and \\(\\Sigma_{Kn}=\\) 207 MeV (thin solid line) in the case of H-EOS (A). The dependence of the charge chemical potential \\(\\mu\\) (=\\(\\mu_{K}=\\mu_{e}\\)) on \\(\\rho_{\\rm B}\\) is shown by the dotted line. The density at which the lowest \\(K^{-}\\) energy \\(\\omega\\) crosses \\(\\mu\\) is denoted as \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\). The charge chemical potential \\(\\mu\\) decreases with increase in density after the appearance of the negatively charged hyperon \\(\\Sigma^{-}\\), as seen in Fig. 3, so that the onset condition Eq. (1) is satisfied at a higher density than in the case of neutron-star matter without mixing of hyperons. From Fig. 3, one reads \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\)=0.6433 fm\\({}^{-3}\\) (=4.02\\(\\rho_{0}\\)) for \\(\\Sigma_{Kn}\\)=305 MeV and \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\)=0.9254 fm\\({}^{-3}\\) (=5.78\\(\\rho_{0}\\)) for \\(\\Sigma_{Kn}\\)=207 MeV.
Now we examine whether the state at \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{c(2)}(K^{-})\\) is the true ground state or not, by considering the dependence of the total energy of the system at \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{c(2)}(K^{-})\\) on the chiral angle \\(\\theta\\) and the \\(\\Sigma^{-}\\)-mixing ratio \\(\\rho_{\\Sigma^{-}}/\\rho_{B}\\). In Fig. 4, the contour plots of the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) in the (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{B}\\)) plane at \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{\\rm c(2)}(K^{-})\\) are depicted for \\(\\Sigma_{Kn}\\)=305 MeV [Fig. 4(a)] and \\(\\Sigma_{Kn}\\)=207 MeV [Fig. 4(b)] in the case of H-EOS (A). Note that \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) has been maximized with respect to \\(\\mu\\) and minimized with respect to the other remaining parameters \\(\\rho_{\\Lambda}/\\rho_{\\rm B}\\) and \\(\\rho_{p}/\\rho_{\\rm B}\\). The energy interval between the contours is taken to be 0.2 MeV for \\(\\Sigma_{Kn}\\)=305 MeV and 0.5 MeV for \\(\\Sigma_{Kn}\\)=207 MeV. For \\(\\Sigma_{Kn}\\)=305 MeV [Fig. 4(a)], one obtains a state satisfying the condition \\(\\omega\\)=\\(\\mu\\) at a point, (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{B}\\))=(0, 0.117), where \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\)=117.32 MeV (denoted as P). However, this point is not a minimum, but a saddle point in the (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{B}\\)) plane. A true minimum exists at a different point, (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{B}\\))=(0.70 rad, 0) in the plane (denoted as Q). This state Q stands for the fully developed \\(K^{-}\\)-condensed state with no \\(\\Sigma^{-}\\)-mixing.
In Fig. 5, the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) and the contributions to \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) from each term on the r.h.s. of Eq. (14) are shown as functions of the \\(\\Sigma^{-}\\)-mixing ratio \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\) at \\(\\rho_{\\rm B}\\)=\\(\\rho_{\\rm B}^{\\rm c(2)}(K^{-})\\) (\\(=0.6433\\) fm\\({}^{-3}\\)) for \\(\\Sigma_{Kn}\\)=305 MeV by the solid lines. At a given \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\), the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) is minimized with respect to \\(\\theta\\), \\(\\rho_{p}\\), \\(\\rho_{\\Lambda}\\), and maximized with respect to \\(\\mu\\). For comparison, those for the noncondensed state (\\(\\theta\\)=0) are shown by the dashed lines. The state P satisfying the condition \\(\\omega=\\mu\\) corresponds to the point at \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\)=0.117 on the bold solid line. The state Q denoting the absolute energy minimum with \\(\\theta\\)=0.70 rad corresponds to the point at \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\)=0 on the bold solid line. From comparison of each energy contribution at the states P and Q, one can see that the hyperon(\\(Y\\))-nucleon (\\(N\\)) mass difference mainly pushes up the total energy as the \\(\\Sigma^{-}\\)-mixing ratio increases. The mixing of the \\(\\Sigma^{-}\\) hyperon slightly reduces the total kinetic energy of baryons by lowering the Fermi momentum of each baryon, while slightly enlarging the baryon potential energy contribution; these two effects compensate each other. The lepton energy contribution is little changed by the \\(\\Sigma^{-}\\)-mixing. Note that the sum of the kaon-baryon scalar interaction energy and the free kaon energy, consisting of the kaon free mass and kinetic energy, is positive, and that it decreases as the \\(\\Sigma^{-}\\)-mixing ratio increases. However,
Figure 3: The lowest energies of \\(K^{-}\\) as functions of baryon number density \\(\\rho_{\\rm B}\\) for \\(\\Sigma_{Kn}\\) = 305 MeV (bold solid line) and \\(\\Sigma_{Kn}\\) = 207 MeV (thin solid line) in the case of H-EOS (A).
Figure 4: (a) Contour plot of the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) in the (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{B}\\)) plane at \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{c(2)}(K^{-})\\) for \\(\\Sigma_{Kn}\\)=305 MeV in the case of H-EOS (A). The energy interval is taken to be 0.2 MeV. (b) The same as in (a), but for \\(\\Sigma_{Kn}\\)=207 MeV. The energy interval is taken to be 0.5 MeV. See the text for details.
Figure 5: The contributions from each term on the r.h.s. of Eq. (14) to the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) as functions of the \\(\\Sigma^{-}\\)-mixing ratio \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\) at \\(\\rho_{\\rm B}\\)=\\(\\rho_{\\rm B}^{c(2)}(K^{-})\\) (=0.6433 fm\\({}^{-3}\\)) for \\(\\Sigma_{Kn}\\)=305 MeV (the solid lines). For comparison, those for the noncondensed state (\\(\\theta\\)=0) are shown by the dashed lines. The H-EOS (A) is used for the hyperonic matter EOS. See the text for details.
the decrease in the sum of the kaon-baryon scalar interaction energy and free kaon energy cannot compensate for the energy excess from the \\(Y\\)-\\(N\\) mass difference as the \\(\\Sigma^{-}\\)-mixing increases. It is to be noted that, for any value of the \\(\\Sigma^{-}\\)-mixing ratio, all the energy contributions except for the sum of the kaon-baryon scalar interaction energy and free kaon energy have lower energy in the kaon-condensed state (solid lines) than in the noncondensed state (dashed lines).
A more detailed numerical analysis shows the following behavior for the energy minima in the (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{B}\\)) plane in the vicinity of \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\) : At a certain density below \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\), a local minimum state Q' corresponding to the \\(K^{-}\\)-condensed state without the \\(\\Sigma^{-}\\)-mixing appears in addition to the absolute minimum state P' corresponding to the noncondensed state with the \\(\\Sigma^{-}\\)-mixing. [We denote this density as \\(\\rho_{\\rm B}^{*}(K^{-};{\\rm no}\\ \\Sigma^{-})\\).] As the density increases, the state Q' shifts to lower energy, and at a density denoted as \\(\\rho_{\\rm B}^{c(1)}(K^{-};{\\rm no}\\ \\Sigma^{-})\\), the energies of the two minima P' and Q' become equal. Above \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{c(1)}(K^{-};{\\rm no}\\ \\Sigma^{-})\\), the state Q' becomes the absolute minimum, having a lower energy than the state P'. In Table 2, we show the typical densities, \\(\\rho_{\\rm B}^{*}(K^{-};{\\rm no}\\ \\Sigma^{-})\\), \\(\\rho_{\\rm B}^{c(1)}(K^{-};{\\rm no}\\ \\Sigma^{-})\\), as well as \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\). In the case of H-EOS (A) and \\(\\Sigma_{Kn}=305\\) MeV, there is a _discontinuous_ transition from the noncondensed state of hyperonic matter with the \\(\\Sigma^{-}\\)-mixing (the state P') to the \\(K^{-}\\)-condensed state without the \\(\\Sigma^{-}\\)-mixing (the state Q') above \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{c(1)}(K^{-};{\\rm no}\\ \\Sigma^{-})\\). This transition density \\(\\rho_{\\rm B}^{c(1)}(K^{-};{\\rm no}\\ \\Sigma^{-})\\) is slightly lower than \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\).
Next we proceed to the case of H-EOS (A) and \\(\\Sigma_{Kn}\\)=207 MeV. As seen from Fig. 4(b), the state P is an absolute minimum. Therefore, the assumption of the continuous transition remains valid, and the onset density for kaon condensation is given by \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\). For this weaker kaon-baryon scalar attraction, the critical density \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\) is far beyond the onset density of the \\(\\Sigma^{-}\\), \\(\\rho_{\\rm B}^{c}(\\Sigma^{-})\\) (=0.52 fm\\({}^{-3}\\)), so that the concentration of the \\(\\Sigma^{-}\\) hyperon in matter is not affected much by the appearance of kaon condensates. However, it should be noted that there still exists a kaon-condensed local minimum Q at (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{B}\\))=(1.0 rad, 0) without the \\(\\Sigma^{-}\\)-mixing. Indeed, this local minimum (the state Q') exists from a density \\(\\rho_{\\rm B}^{*}(K^{-};{\\rm no}\\ \\Sigma^{-})\\) (=0.8280 fm\\({}^{-3}\\)) fairly lower than \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\) (=0.9254 fm\\({}^{-3}\\)). [See Table 2.]
| H-EOS | \\(\\Sigma_{Kn}\\) (MeV) | \\(\\rho_{\\rm B}^{*}(K^{-};{\\rm no}\\ \\Sigma^{-})\\) (fm\\({}^{-3}\\)) | \\(\\rho_{\\rm B}^{c(1)}(K^{-};{\\rm no}\\ \\Sigma^{-})\\) (fm\\({}^{-3}\\)) | \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\) (fm\\({}^{-3}\\)) | \\(\\rho_{\\rm B}^{*}(K^{-};\\Sigma^{-})\\) (fm\\({}^{-3}\\)) | \\(\\rho_{\\rm B}^{c(1)}(K^{-};\\Sigma^{-})\\) (fm\\({}^{-3}\\)) |
|---|---|---|---|---|---|---|
| (A) | 305 | 0.5782 | 0.6135 | (0.6433) | 1.011 | 1.039 |
| (A) | 207 | 0.8280 | \\(-\\) | 0.9254 | \\(-\\) | \\(-\\) |
| (B) | 305 | \\(-\\) | \\(-\\) | 0.5504 | 1.006 | 1.069 |
| (B) | 207 | 0.7084 | 0.9086 | (0.9189) | 0.9189 | 1.170 |

Table 2: The typical densities associated with the appearance of \\(K^{-}\\) condensates and \\(\\Sigma^{-}\\) hyperons, calculated with the EOS models for hyperonic matter, H-EOS (A) and H-EOS (B). The values in parentheses for \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\) indicate that they correspond not to the true energy minimum but to a local minimum or a saddle point in the (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\)) plane. See the text for details.
### Case of H-EOS (B)
In the case of H-EOS (B) for the hyperonic matter EOS, both the \\(\\Lambda\\) and \\(\\Sigma^{-}\\) start to be mixed at higher densities as compared with the case of H-EOS (A) [Sec. 3]. In Fig. 6, we show the lowest energies of the \\(K^{-}\\) as functions of baryon number density \\(\\rho_{\\rm B}\\) for \\(\\Sigma_{Kn}=305\\) MeV (bold solid line) and \\(\\Sigma_{Kn}=207\\) MeV (thin solid line).
In Fig. 7, the contour plots of the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) in the (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{B}\\)) plane at \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{c(2)}(K^{-})\\) are depicted for \\(\\Sigma_{Kn}\\)=305 MeV [Fig. 7(a)] and \\(\\Sigma_{Kn}\\)=207 MeV [Fig. 7(b)] in the case of H-EOS (B). From Fig. 6, the critical density \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\) is read as \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\)=0.5504 fm\\({}^{-3}\\) (=3.44\\(\\rho_{0}\\)) for \\(\\Sigma_{Kn}\\)=305 MeV and \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\)=0.9189 fm\\({}^{-3}\\) (=5.74\\(\\rho_{0}\\)) for \\(\\Sigma_{Kn}\\)=207 MeV.
For \\(\\Sigma_{Kn}\\)=305 MeV, the condition \\(\\omega=\\mu\\) [Eq. (1) ] is satisfied before mixing of the \\(\\Sigma^{-}\\) starts, i.e., \\(\\rho_{\\rm B}^{c(2)}(K^{-})<\\rho_{\\rm B}^{c}(\\Sigma^{-})\\). As seen from Fig. 7(a), the corresponding state P is the absolute minimum of the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) in the (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{B}\\)) plane at \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{c(2)}(K^{-})\\), and there is no local minimum with kaon condensates and without the \\(\\Sigma^{-}\\)-mixing. Therefore, the assumption of the continuous phase transition remains valid as long as the \\(\\Sigma^{-}\\) hyperons are not mixed in the ground state.
On the other hand, for \\(\\Sigma_{Kn}\\)=207 MeV, the state P satisfying the condition \\(\\omega=\\mu\\) is obtained as a local minimum at a point (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\))=(0, 0.200) in the presence of the \\(\\Sigma^{-}\\), and there is an absolute minimum with kaon condensates and without the \\(\\Sigma^{-}\\)-mixing (the state Q in Fig. 7(b)) at a point (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\))=(1.02 rad, 0). As compared with the case of H-EOS (A), the critical density \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\) is not so far from the onset density of the \\(\\Sigma^{-}\\), \\(\\rho_{\\rm B}^{c}(\\Sigma^{-})\\). [ \\(\\rho_{\\rm B}^{c(2)}(K^{-})-\\rho_{\\rm B}^{c}(\\Sigma^{-})=2.1\\)\\(\\rho_{0}\\) for H-EOS (B), while \\(\\rho_{\\rm B}^{c(2)}(K^{-})-\\rho_{\\rm B}^{c}(\\Sigma^{-})=2.5\\)\\(\\rho_{0}\\) for H-EOS (A). ] As a result, competition between the \\(\\Sigma^{-}\\) and \\(K^{-}\\) condensates is more remarkable in the case of H-EOS (B) and \\(\\Sigma_{Kn}=207\\) MeV than in the case of H-EOS (A) and \\(\\Sigma_{Kn}=207\\) MeV, making the state Q energetically more favorable than the state P. From Table 2, one can see the same behavior as in the case of
Figure 6: The lowest energies of \\(K^{-}\\) as functions of baryon number density \\(\\rho_{\\rm B}\\) for \\(\\Sigma_{Kn}=305\\) MeV (bold solid line) and \\(\\Sigma_{Kn}=207\\) MeV (thin solid line) in the case of H-EOS (B).
H-EOS (A) and \\(\\Sigma_{Kn}\\) = 305 MeV concerning the appearance of \\(K^{-}\\) condensates and \\(\\Sigma^{-}\\) hyperons : There is a _discontinuous_ transition from the noncondensed state of hyperonic matter with the \\(\\Sigma^{-}\\)-mixing to the \\(K^{-}\\)-condensed state without the \\(\\Sigma^{-}\\)-mixing above \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{c(1)}(K^{-};\\)no \\(\\Sigma^{-})\\), and this transition density \\(\\rho_{\\rm B}^{c(1)}(K^{-};\\)no \\(\\Sigma^{-})\\) is slightly lower than \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\).
## 5 Equation of State
### Two energy minima with and without the \\(\\Sigma^{-}\\)-mixing for the \\(K^{-}\\)-condensed phase
Here we discuss the EOS of the \\(K^{-}\\)-condensed phase in hyperonic matter. The total energies per baryon in the \\(K^{-}\\)-condensed phase, \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\), as functions of the baryon number density \\(\\rho_{\\rm B}\\) are shown in Fig. 8. Fig. 8 (a) is for H-EOS (A), and (b) is for H-EOS (B). The bold (thin) lines are for \\(\\Sigma_{Kn}\\) = 305 MeV (\\(\\Sigma_{Kn}\\) = 207 MeV). The solid lines stand for the total energies per baryon for the \\(K^{-}\\)-condensed state with the \\(\\Sigma^{-}\\)-mixing, while the dashed lines are for the \\(K^{-}\\)-condensed state without the \\(\\Sigma^{-}\\)-mixing. For comparison, the energy per baryon for the noncondensed hyperonic matter is shown by the dotted line. In each case of the model EOS for hyperonic matter and the \\(Kn\\) sigma term \\(\\Sigma_{Kn}\\), there are two solutions for the kaon-condensed phase, corresponding to two minima in the (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\)) plane in certain density intervals: One is the \\(K^{-}\\)-condensed state without the \\(\\Sigma^{-}\\)-mixing (dashed lines), called the state Q' in Sec. 4, and the other is the one with the \\(\\Sigma^{-}\\)-mixing (solid lines), which we call the state R'. The density at which the state R' appears as a local minimum is denoted as \\(\\rho_{\\rm B}^{*}(K^{-};\\Sigma^{-})\\). As an example, we show the contour plots of the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) in the (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\)) plane at \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{*}(K^{-};\\Sigma^{-})\\) for \\(\\Sigma_{Kn}\\)=305 MeV in the case of H-EOS (A) in Fig. 9(a) and H-EOS (B) in Fig. 9(b). In Fig. 10, we also show
Figure 7: (a) Contour plot of the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) in the (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{B}\\)) plane at \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{c(2)}(K^{-})\\) for \\(\\Sigma_{Kn}\\)=305 MeV in the case of H-EOS (B). The energy interval is taken to be 0.2 MeV. (b) The same as in (a), but for \\(\\Sigma_{Kn}\\)=207 MeV. The energy interval is taken to be 0.5 MeV. See the text for details.
the dependence of the baryon potentials \\(V_{\\Sigma^{-}}\\), \\(V_{n}\\), the neutron chemical potential \\(\\mu_{n}\\), and the difference of the \\(\\Sigma^{-}\\) and charge chemical potentials, \\(\\mu_{\\Sigma^{-}}-\\mu\\), on the \\(\\Sigma^{-}\\)-mixing ratio \\(\\rho_{\\Sigma^{-}}\\)/\\(\\rho_{\\rm B}\\) in the kaon-condensed phase at \\(\\rho_{\\rm B}=\\rho_{\\rm B}^{*}(K^{-};\\Sigma^{-})\\). Fig. 10(a) is for \\(\\Sigma_{Kn}\\)=305 MeV with H-EOS (A) and Fig. 10(b) is for \\(\\Sigma_{Kn}\\)=305 MeV with H-EOS (B). One can see that the chemical potential difference \\(\\mu_{\\Sigma^{-}}-\\mu\\) has a minimum at a finite value of \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\) (\\(=\\) 0.12 \\(-\\) 0.14) and that the neutron chemical potential \\(\\mu_{n}\\) decreases monotonically with increase in the \\(\\Sigma^{-}\\)-mixing ratio within the range \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\)\\(=\\) 0.0 \\(-\\) 0.3. As a result, the chemical equilibrium condition, \\(\\mu_{n}=\\mu_{\\Sigma^{-}}-\\mu\\), for the weak process \\(ne^{-}\\rightleftharpoons\\Sigma^{-}(\\nu_{e})\\) is met at a finite \\(\\Sigma^{-}\\)-mixing ratio (\\(=\\) 0.07 \\(-\\) 0.1), which corresponds to the appearance of the state R' in Fig. 9. The dependence of the chemical potentials \\(\\mu_{\\Sigma^{-}}-\\mu\\) and \\(\\mu_{n}\\) on the \\(\\Sigma^{-}\\)-mixing ratio is caused by that of the baryon potentials \\(V_{\\Sigma^{-}}\\) and \\(V_{n}\\), respectively, as seen from Fig. 10.
As the density increases, the energy difference between the state R' and the absolute minimum state Q' gets smaller, and at a certain density, denoted as \\(\\rho_{\\rm B}^{c(1)}(K^{-};\\Sigma^{-})\\), the energies of the states Q' and R' become equal. Above the density \\(\\rho_{\\rm B}^{c(1)}(K^{-};\\Sigma^{-})\\), the state R' develops as the absolute energy minimum. In Table 2, we show the numerical values of \\(\\rho_{\\rm B}^{*}(K^{-};\\Sigma^{-})\\) and \\(\\rho_{\\rm B}^{c(1)}(K^{-};\\Sigma^{-})\\) for each case of \\(\\Sigma_{Kn}\\) and the hyperonic matter EOS.
At a given density, the ground state is determined by the lowest energy state in Fig. 8. In all cases, the state R' becomes the ground state at higher densities, i.e., the \\(\\Sigma^{-}\\) is mixed in the fully developed kaon-condensed phase. Except for the case of \\(\\Sigma_{Kn}\\)=207 MeV with H-EOS (A), the transition from the Q' state (the dashed lines) to the R' state (the solid lines) is discontinuous. Even when the state R' is the ground state, the local minimum (the state Q') persists over a wide range of the baryon number density, in particular in the case of H-EOS (B) [see the dashed lines in Fig. 8 (b)].
In Fig. 11, we show the pressure in the \\(K^{-}\\)-condensed phase, obtained from \\(P\\equiv\\rho_{\\rm B}^{2}\\partial({\\cal E}^{\\prime}/\\rho_{\\rm B})/\\partial\\rho_{\\rm B}\\) (\\(=\\)\\(-{\\cal E}^{\\prime}_{\\rm eff}\\)), as a function of the energy density, \\(\\epsilon\\equiv{\\cal E}^{\\prime}+M_{N}\\rho_{\\rm B}\\), in MeV\\(\\cdot\\)fm\\({}^{-3}\\). Fig. 11 (a) is for H-EOS (A), and (b) is for H-EOS (B). The bold (thin) lines are for \\(\\Sigma_{Kn}\\)\\(=\\) 305 MeV (\\(\\Sigma_{Kn}\\)\\(=\\) 207 MeV). The solid lines stand for the pressure of the \\(K^{-}\\)-condensed state with the \\(\\Sigma^{-}\\)-mixing, while the dashed lines are for the \\(K^{-}\\)-condensed state without the \\(\\Sigma^{-}\\)-mixing. For comparison, the pressure of the noncondensed hyperonic matter is shown by the dotted line. Except for the case of \\(\\Sigma_{Kn}=207\\) MeV with H-EOS (A), there appears a gap in the pressure at the transition density \\(\\rho_{\\rm B}^{c(1)}(K^{-};\\Sigma^{-})\\) as a result of the discontinuous transition. It should be noted that the sound speed, \\((\\partial P/\\partial\\epsilon)^{1/2}\\), exceeds the speed of light \\(c\\) above a certain energy density, which is indicated by an arrow on each corresponding pressure curve. In such a high-density region, a relativistically covariant formulation is necessary for a quantitative discussion of the EOS.
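The two diagnostics used here, the negative-pressure interval and the superluminal region, can be extracted from any tabulated \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) by numerical differentiation; the sketch below does this for an invented toy energy curve with a density-isomer-like minimum (not the EOS of this paper):

```python
# Sketch: pressure P = rho_B^2 d(E'/rho_B)/drho_B and squared sound speed
# c_s^2 = dP/d(epsilon), epsilon = E' + M_N rho_B, from a tabulated E'/rho_B.
# The energy curve below is a toy placeholder, NOT this paper's EOS.
import numpy as np

M_N = 939.0                                # MeV
rho = np.linspace(0.3, 2.0, 400)           # fm^-3
u = rho / 0.16
E_per_B = 3.0*(u - 6.5)**2 + 70.0          # toy E'/rho_B with a local minimum [MeV]

P = rho**2 * np.gradient(E_per_B, rho)     # pressure [MeV fm^-3]
eps = rho * (E_per_B + M_N)                # energy density [MeV fm^-3]
cs2 = np.gradient(P, eps)                  # (c_s / c)^2

neg = rho[P < 0]
print(f"P < 0 for rho_B in [{neg.min():.2f}, {neg.max():.2f}] fm^-3 "
      f"(energy minimum at {rho[np.argmin(E_per_B)]:.2f} fm^-3)")
print(f"c_s exceeds c above rho_B ~ {rho[cs2 > 1.0].min():.2f} fm^-3")
```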
### Density isomer state in the case of the stronger \\(s\\)-wave kaon-baryon scalar attraction
In the kaon-condensed phase realized in hyperonic matter, the EOS becomes considerably soft. In particular, in the case of \\(\\Sigma_{Kn}=305\\) MeV for both H-EOS (A) and (B), there appears a local energy minimum (which we call the density isomer state) at a certain density \\(\\rho_{\\rm B,min}\\), and the pressure becomes negative in some density intervals below \\(\\rho_{\\rm B,min}\\), as seen in Figs. 8 and 11 (bold lines). For H-EOS (A) [H-EOS (B)], one reads \\(\\rho_{\\rm B,min}=1.22\\) fm\\({}^{-3}\\) (0.92 fm\\({}^{-3}\\)), and the minimum energy per baryon at \\(\\rho_{\\rm B,min}\\) is 76.9 MeV (106.1 MeV), which is smaller than the \\(\\Lambda\\)-\\(N\\) mass difference, \\(\\delta M_{\\Lambda N}\\)=176 MeV. Thus the density isomer state is stable against strong decay processes.
In order to clarify mechanisms for the significant softening of the EOS leading to the appearance of the local energy minimum and for subsequent recovering of the stiffness of the EOS at higher density region, we show the energy contributions to the total energy per baryon by the solid lines in the case of \\(\\Sigma_{Kn}=305\\) MeV for H-EOS (A) and H-EOS (B) in Figs. 12 (a) and (b), respectively. For comparison, those for the kaon-condensed phase realized in ordinary neutron-star matter, obtained after putting \\(\\rho_{\\Lambda}\\)=\\(\\rho_{\\Sigma^{-}}\\)=0, are shown by the dashed lines. The density region where the total energy per baryon decreases with density (i.e., the negative pressure region) is bounded by the vertical dotted lines in Figs. 12 (a) and (b). The dependence of the total energy on the baryon number density is mainly determined by the two contributions: (I) the contribution from the classical kaons as the sum of the \\(s\\)-wave scalar kaon-baryon interaction and the free parts of the condensed kaon energy [the fourth, fifth and sixth terms in Eq. (14)] and (II) the baryon potential energy \\({\\cal E}_{\\rm pot}/\\rho_{\\rm B}\\) [the third term in Eq. (14)]. The contribution (I) decreases with increase in density, while the contribution (II) increases as density increases. As one can see from comparison of the solid lines and the dashed lines in Fig. 12, the attractive effect from the contribution (I) is pronounced due to mixing of hyperons as compared with the case without hyperons, lowering the total energy at a given density. In addition, the repulsive effect from the contribution (II) is much weakened due to mixing of hyperons at a given density, since the repulsive interaction between nucleons is avoided by lowering the relative nucleon density through mixing of hyperons.6 As a result, the total energy is much reduced, leading to significant softening of the EOS for the kaon-condensed phase in hyperonic matter.
Footnote 6: This suppression mechanism of the repulsive interaction between nucleons is essentially the same as that for softening of the EOS in the noncondensed hyperonic matter as pointed out in Ref. [37, 38].
For the density region bounded by the dotted lines in Figs. 12 (a) and (b), the increase of the absolute value of the kaon-baryon attractive interaction with density is more remarkable than the increase of the potential energy with density, so that the total energy per baryon decreases with density in this density region, and there is an energy minimum at \(\rho_{\rm B,min}\). At higher densities above \(\rho_{\rm B,min}\), the decrease of the contribution (I) gets slightly moderate, while the repulsive interaction between baryons becomes strong and so the increase of the contribution (II) with density gets more marked. As a result, the total energy per baryon increases rapidly with density, and the EOS recovers the stiffness at high densities. Thus the stiffness of the EOS depends on the quantitative behavior of the repulsive interaction between baryons at high densities, which is ambiguous depending on the model interaction. In our framework, the H-EOS (B) brings about a stiffer EOS at high densities than the H-EOS (A), since the many-body hyperon-nucleon and hyperon-hyperon repulsive interaction terms, which control the stiffness of the EOS at high densities, contribute more significantly for H-EOS (B) [the index \(\gamma\)=2.0] than for H-EOS (A) [\(\gamma\)=5/3].

Figure 12: (a) The contributions to the total energy per baryon for the kaon-condensed phase in hyperonic matter as functions of the baryon number density \(\rho_{\rm B}\) for H-EOS (A) and \(\Sigma_{Kn}=305\) MeV (solid lines). For comparison, those for the kaon-condensed phase realized in ordinary neutron-star matter, obtained after putting \(\rho_{\Lambda}\)=\(\rho_{\Sigma^{-}}\)=0, are shown by the dashed lines. (b) The same as in (a), but for H-EOS (B). See the text for details.
It has been suggested in Ref. [59] that a density isomer state with kaon condensates in hyperonic matter implies the existence of self-bound objects, which can be bound essentially without gravitation on any scale from an atomic nucleus to a neutron star, just like a strangelet or a strange star [60, 61, 62, 63] or other exotic matter [64, 65, 66, 67, 68]. The density isomer state is located at a local energy minimum with respect to the baryon number density as a metastable state, but it decays only through multiple weak processes, so that it is regarded as substantially stable. Implications of such self-bound objects with kaon condensates for astrophysical phenomena and nuclear experiments will be discussed in detail in a subsequent paper, where both \(s\)-wave and \(p\)-wave kaon-baryon interactions are taken into account [46].
### Composition of matter in the kaon-condensed phase
The characteristic features of the kaon-condensed phase in hyperonic matter can be surveyed from the density dependence of the composition of matter. In Figs. 13 and 14, the particle fractions \(\rho_{i}/\rho_{\rm B}\) (\(i\)=\(p\), \(\Lambda\), \(n\), \(\Sigma^{-}\), \(K^{-}\), \(e^{-}\)) are shown as functions of the baryon number density \(\rho_{\rm B}\). Figures 13 (a) and (b) are for H-EOS (A) with \(\Sigma_{Kn}\) = 305 MeV and \(\Sigma_{Kn}\) = 207 MeV, respectively, and Figs. 14 (a) and (b) are for H-EOS (B) with \(\Sigma_{Kn}\) = 305 MeV and \(\Sigma_{Kn}\) = 207 MeV, respectively. The long dashed lines stand for the ratio of the total negative strangeness number density \(\rho_{\rm strange}\) to the baryon number density \(\rho_{\rm B}\) with
\\[\\rho_{\\rm strange}=\\rho_{K^{-}}+\\rho_{\\Lambda}+\\rho_{\\Sigma^{-}}. \\tag{24}\\]
The contribution from the classical kaon part, \\(\\rho_{K^{-}}/\\rho_{\\rm B}\\), is shown by the short dashed line in each figure. For each curve in Figs. 13 and 14, only the quantity corresponding to the lowest energy minimum state in the (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\)) plane is shown as a function of density, so that there are gaps in the quantities at the transition densities \\(\\rho_{\\rm B}^{c(1)}(K^{-};\\mbox{no}\\ \\Sigma^{-})\\) and \\(\\rho_{\\rm B}^{c(1)}(K^{-};\\ \\Sigma^{-})\\). One can see competitive effects between kaon condensates and \\(\\Sigma^{-}\\) hyperons: As the former develops around the density \\(\\rho_{\\rm B}^{c(1)}(K^{-};\\mbox{no}\\ \\Sigma^{-})\\), the latter is suppressed, while as the latter develops in the kaon-condensed phase around the density \\(\\rho_{\\rm B}^{c(1)}(K^{-};\\ \\Sigma^{-})\\), the former is suppressed.
Appearance of both kaon condensates and the \\(\\Sigma^{-}\\) leads to considerable suppression of the electron fraction, since the negative charge of the electron is replaced by that of kaon condensates and \\(\\Sigma^{-}\\) hyperons. Accordingly, the charge chemical potential \\(\\mu\\) decreases with increase in the baryon number density through the relation \\(\\rho_{e}=\\mu^{3}/(3\\pi^{2})\\). It becomes even negative above the density \\(\\rho_{\\rm B}\\) = 0.77 fm\\({}^{-3}\\) for \\(\\Sigma_{Kn}\\) = 305 MeV and \\(\\rho_{\\rm B}\\) = 1.06 fm\\({}^{-3}\\) for \\(\\Sigma_{Kn}\\) = 207 MeV in both the cases of H-EOS (A) and H-EOS (B). For \\(\\mu<0\\), the positrons (\\(e^{+}\\)) appear in place of the electrons.
At high densities, the protons, \(\Lambda\), and \(\Sigma^{-}\) hyperons are equally populated in the kaon-condensed phase, i.e., \(\rho_{p}/\rho_{\rm B}\), \(\rho_{\Lambda}/\rho_{\rm B}\), \(\rho_{\Sigma^{-}}/\rho_{\rm B}\) = 30\(-\)40 %, whereas the neutrons almost disappear. The total negative strangeness ratio, \(\rho_{\rm strange}/\rho_{\rm B}\), gets larger with increase in density: It reaches almost unity for \(\Sigma_{Kn}=305\) MeV and 0.8\(-\)0.9 for \(\Sigma_{Kn}=207\) MeV at high densities for both H-EOS (A) and H-EOS (B). Such a high strangeness fraction implies a close connection between the kaon-condensed phase in hyperonic matter and strange matter where \(u\), \(d\) and \(s\) quarks are almost equally populated in quark matter.

Figure 13: (a) Particle fractions \(\rho_{i}/\rho_{\rm B}\) in the \(K^{-}\)-condensed phase as functions of the baryon number density \(\rho_{\rm B}\) for H-EOS (A) and \(\Sigma_{Kn}\)=305 MeV. (b) The same as in (a), but for H-EOS (A) and \(\Sigma_{Kn}\)=207 MeV.

Figure 14: (a) Particle fractions \(\rho_{i}/\rho_{\rm B}\) in the \(K^{-}\)-condensed phase as functions of the baryon number density \(\rho_{\rm B}\) for H-EOS (B) and \(\Sigma_{Kn}\)=305 MeV. (b) The same as in (a), but for H-EOS (B) and \(\Sigma_{Kn}\)=207 MeV.
## 6 Summary and Concluding Remarks
We have studied the \\(s\\)-wave kaon condensation realized in hyperonic matter based on chiral symmetry for the kaon-baryon interactions and taking into account the parameterized effective interactions between baryons. We have concentrated on interrelations between kaon condensates and negatively charged hyperons (\\(\\Sigma^{-}\\)) and reexamined the validity of the assumption of the continuous phase transition from the noncondensed hyperonic matter to the \\(K^{-}\\)-condensed phase. We have also discussed the EOS and the characteristic features of the system for the fully developed kaon-condensed phase.
The validity of the continuous phase transition for kaon condensation in hyperonic matter is summarized as follows: In cases where the condition \(\omega=\mu\) [Eq. (1)] is satisfied at \(\rho_{\rm B}\)=\(\rho_{\rm B}^{c(2)}(K^{-})\) in the presence of the \(\Sigma^{-}\) hyperons, there exist, in general, two energy minima in the (\(\theta\), \(\rho_{\Sigma^{-}}/\rho_{B}\)) plane at some density intervals near the density \(\rho_{\rm B}^{c(2)}(K^{-})\). One is the noncondensed state with the \(\Sigma^{-}\)-mixing (P'), and the other is the \(K^{-}\)-condensed state without the \(\Sigma^{-}\)-mixing (Q'). If the density \(\rho_{\rm B}^{c(2)}(K^{-})\) is located near the onset density of the \(\Sigma^{-}\), \(\rho_{\rm B}^{c}(\Sigma^{-})\), the state P' is a local minimum or a saddle point, and the state Q' is the absolute minimum at \(\rho_{\rm B}=\rho_{\rm B}^{c(2)}(K^{-})\). In this case, the assumption of the continuous phase transition is not valid: Below \(\rho_{\rm B}^{c(2)}(K^{-})\), there exists a typical density \(\rho_{\rm B}^{c(1)}(K^{-};{\rm no}\ \Sigma^{-})\) at which the energies of the two minima become equal. Above the density \(\rho_{\rm B}^{c(1)}(K^{-};{\rm no}\ \Sigma^{-})\), there is a discontinuous transition from the state P' to the state Q'. On the other hand, if the density \(\rho_{\rm B}^{c(2)}(K^{-})\) is located far enough above the density \(\rho_{\rm B}^{c}(\Sigma^{-})\), the state P' is always an absolute minimum. In this case, the assumption of the continuous phase transition holds true, and the onset density of kaon condensation is given by \(\rho_{\rm B}^{c(2)}(K^{-})\), above which kaon condensates develop continuously with increase in the baryon number density.
In cases where the condition \\(\\omega=\\mu\\) is satisfied in the absence of the \\(\\Sigma^{-}\\) hyperon, there exists a unique minimum of the noncondensed state (P') at a point (0,0) in the (\\(\\theta\\), \\(\\rho_{\\Sigma^{-}}/\\rho_{B}\\)) plane, and the assumption of the continuous phase transition is kept valid. The onset density is given by \\(\\rho_{\\rm B}^{c(2)}(K^{-})\\).
The above consequences on the validity of the continuous phase transition are expected to be general and should also be applied to cases where the other negatively charged hyperons such as the cascade \\(\\Xi^{-}\\) are present in the noncondensed ground state and where both the \\(s\\)-wave and \\(p\\)-wave kaon-baryon interactions are taken into account [46].
In the fully developed phase with kaon condensates, there exist two energy minima with and without the \(\Sigma^{-}\)-mixing in the (\(\theta\), \(\rho_{\Sigma^{-}}/\rho_{\rm B}\)) plane at some density intervals. At higher densities, the ground state is transferred discontinuously from the kaon-condensed state without the \(\Sigma^{-}\)-mixing to that with the \(\Sigma^{-}\)-mixing, except for the case of H-EOS (A) with the weaker \(s\)-wave kaon-baryon attractive interaction. The EOS of the kaon-condensed phase becomes considerably soft, since both the kaon-baryon attractions and mixing of hyperons work to lower the energy of the system. At higher densities, the stiffness of the EOS is recovered due to the increase in the repulsive interaction between baryons. As a result, in the case of the stronger \(s\)-wave kaon-baryon attractive interaction (\(\Sigma_{Kn}\)=305 MeV), there appears a local energy minimum as a density isomer state, which suggests the existence of self-bound objects with kaon condensates on any scale from an atomic nucleus to a neutron star. Recently, deeply bound kaonic nuclear states have been proposed theoretically, and their experimental search has been discussed extensively [69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79]. In particular, the double and/or multiple kaon clusters advocated in the recent experimental proposal by way of invariant mass spectroscopy [71] may have a close connection with our results on kaon-condensed self-bound objects. These experimental searches for deeply bound kaonic nuclear states may provide us with important information on the existence of kaon condensation in high-density matter.
In this paper, both kinematics and interactions associated with baryons are treated nonrelativistically. For more quantitative consideration, one needs a relativistic framework. Specifically, the \\(s\\)-wave kaon-baryon scalar attraction, which is proportional to the scalar densities for baryons in the relativistic framework, is suppressed at high densities due to saturation of the scalar densities [5]. This effect is expected to make the EOS stiffer at high densities.
The kaon-condensed phase is important for understanding the high-density QCD phase diagram from the hadronic picture, which we have adopted over the relevant baryon densities. At high densities, however, quark degrees of freedom may appear explicitly. It has been shown in this paper that the kaon-condensed phase in hyperonic matter leads to a large (negative) strangeness fraction, \(\rho_{\rm strange}/\rho_{\rm B}\sim 1\). This result suggests that the kaon-condensed phase in the hadronic picture may be considered as a pathway to strange quark matter. In a quark picture, a variety of deconfined quark phases including color superconductivity have been elaborated [80]. In particular, kaonic modes may be condensed in the color-flavor locked phase [81, 82, 83, 84]. It is interesting to clarify the relationship between kaon condensation in the hadronic phase and that in the quark phase and a possible transition between the two phases.
## Acknowledgments
The author is grateful to T. Tatsumi, T. Takatsuka, T. Kunihiro, and M. Sugawara for valuable discussions. He also thanks the Yukawa Institute for Theoretical Physics at Kyoto University, where this work was completed during the YKIS 2006 on \"New Frontiers on QCD\". This work is supported in part by the Grant-in-Aid for Scientific Research Fund (C) of the Ministry of Education, Science, Sports, and Culture (No. 18540288), and by the funds provided by Chiba Institute of Technology.
## Appendix: Onset conditions of \(\Lambda\) and \(\Sigma^{-}\) hyperons in ordinary neutron-star matter
We recapitulate how hyperons \\(\\Lambda\\) and \\(\\Sigma^{-}\\) appear in the ordinary neutron-star matter composed of protons, neutrons and leptons within the baryon-baryon interaction models H-EOS (A) and (B). In particular, we compare onset mechanisms of the hyperon-mixing between H-EOS (A) and (B).
### \\(\\Lambda\\)-mixing
The condition of the \\(\\Lambda\\)-mixing in the noncondensed neutron-star matter is given by Eq. (20b) with \\(\\theta\\) = 0 : \\(\\mu_{\\Lambda}=\\mu_{n}\\) with
\\[\\mu_{\\Lambda} = \\frac{(3\\pi^{2}\\rho_{\\Lambda})^{2/3}}{2M_{N}}+\\delta M_{\\Lambda N }+V_{\\Lambda}\\, \\tag{25a}\\] \\[\\mu_{n} = \\frac{(3\\pi^{2}\\rho_{n})^{2/3}}{2M_{N}}+V_{n}\\, \\tag{25b}\\]
together with \\(\\mu_{n}=\\mu_{p}+\\mu\\) [Eq. (20a) with \\(\\theta\\) = 0] and the charge neutrality condition, \\(\\rho_{p}=\\rho_{e}\\).
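For a continuous onset, the threshold corresponds to the limit \(\rho_{\Lambda}\to 0\), where the \(\Lambda\) kinetic term in Eq. (25a) vanishes, so that the mixing condition \(\mu_{\Lambda}=\mu_{n}\) reduces to

\[\frac{(3\pi^{2}\rho_{n})^{2/3}}{2M_{N}}+V_{n}=\delta M_{\Lambda N}+V_{\Lambda}\Big|_{\rho_{\Lambda}=0}.\]

This is the form of the condition satisfied at \(\rho_{\Lambda}/\rho_{\rm B}=0\) for H-EOS (A) below; for H-EOS (B) the equality is first met at a nonzero \(\rho_{\Lambda}/\rho_{\rm B}\), which is the origin of the discontinuous onset of the \(\Lambda\)-mixing.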
In Fig. 15, the baryon potentials \\(V_{\\Lambda}\\), \\(V_{n}\\) (dashed lines), the chemical potentials \\(\\mu_{\\Lambda}\\), \\(\\mu_{n}\\) (solid lines), and the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) (short dashed line) are shown as functions of the \\(\\Lambda\\)-mixing ratio \\(\\rho_{\\Lambda}/\\rho_{\\rm B}\\) at the minimal baryon number density \\(\\rho_{\\rm B}^{*}(\\Lambda)\\) above which the \\(\\Lambda\\)-mixing condition, \\(\\mu_{\\Lambda}=\\mu_{n}\\), is satisfied. Fig. 15(a) is for H-EOS (A), and Fig. 15(b) is for H-EOS (B). For H-EOS (A), the condition \\(\\mu_{\\Lambda}\\)=\\(\\mu_{n}\\) is met at \\(\\rho_{\\Lambda}/\\rho_{\\rm B}\\)=0, where the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) is a minimum. Therefore, the \\(\\Lambda\\) hyperon starts to be mixed continuously in the ground state of \\((n,p,e^{-})\\) matter at this density \\(\\rho_{\\rm B}^{*}(\\Lambda)\\) (= 0.340 fm\\({}^{-3}\\)), which gives the onset density \\(\\rho_{\\rm B}^{\\rm c}(\\Lambda)\\). For H-EOS (B), however, the condition \\(\\mu_{\\Lambda}\\)=\\(\\mu_{n}\\) is met at a nonzero value of \\(\\rho_{\\Lambda}/\\rho_{\\rm B}\\) (=0.126), at which the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) is a local minimum, and the absolute energy minimum still lies at \\(\\rho_{\\Lambda}/\\rho_{\\rm B}\\) = 0 [Fig. 15(b)]. In this case, the \\(\\Lambda\\)-mixing starts at a slightly higher density (\\(\\sim\\) 0.44 fm\\({}^{-3}\\)) than \\(\\rho_{\\rm B}^{*}(\\Lambda)\\) (=0.421 fm\\({}^{-3}\\)) with a nonzero value of \\(\\rho_{\\Lambda}/\\rho_{\\rm B}\\).
In the cases of both H-EOS (A) and (B), the dependence of the chemical potentials \\(\\mu_{\\Lambda}\\) and \\(\\mu_{n}\\) on the \\(\\Lambda\\)-mixing ratio is closely correlated with that of the baryon potentials \\(V_{\\Lambda}\\) and \\(V_{n}\\), respectively, as seen in Figs. 15 (a) and (b). In fact, the difference of the onset mechanisms of the \\(\\Lambda\\)-mixing between the cases H-EOS (A) and (B) stems from the difference of the dependence of the baryon potentials on the \\(\\Lambda\\)-mixing ratio between H-EOS (A) and (B).
### \\(\\Sigma^{-}\\)-mixing
The condition of the \\(\\Sigma^{-}\\)-mixing in the noncondensed (\\(n\\), \\(p\\), \\(\\Lambda\\), \\(e^{-}\\)) matter is given by Eq. (20c) with \\(\\theta\\) = 0 : \\(\\mu_{\\Sigma^{-}}=\\mu_{n}+\\mu\\) with
\\[\\mu_{\\Sigma^{-}}=\\frac{(3\\pi^{2}\\rho_{\\Sigma^{-}})^{2/3}}{2M_{N}}+\\delta M_{ \\Sigma^{-}N}+V_{\\Sigma^{-}} \\tag{26}\\]
and (25b), together with \\(\\mu_{n}=\\mu_{p}+\\mu\\) [Eq. (20a) with \\(\\theta\\) = 0], \\(\\mu_{\\Lambda}=\\mu_{n}\\) [Eq. (20b) with \\(\\theta\\) = 0], and the charge neutrality condition, \\(\\rho_{p}=\\rho_{e}\\).
In Fig. 16, we depict the baryon potentials \(V_{\Sigma^{-}}\), \(V_{n}\) (dashed lines), the difference of the \(\Sigma^{-}\) chemical potential and the charge chemical potential, \(\mu_{\Sigma^{-}}-\mu\) (solid line), the neutron chemical potential \(\mu_{n}\) (solid line), and the total energy per baryon \({\cal E}^{\prime}/\rho_{\rm B}\) (short dashed line) as functions of the \(\Sigma^{-}\)-mixing ratio \(\rho_{\Sigma^{-}}/\rho_{\rm B}\) in the noncondensed (\(n\), \(p\), \(\Lambda\), \(e^{-}\)) matter at the minimal baryon number density \(\rho_{\rm B}^{*}(\Sigma^{-})\) above which the \(\Sigma^{-}\)-mixing condition, \(\mu_{\Sigma^{-}}=\mu_{n}+\mu\), is satisfied. (a) is for H-EOS (A), and (b) is for H-EOS (B). One finds results on the onset mechanism of the \(\Sigma^{-}\)-mixing similar to the case of the \(\Lambda\)-mixing: For H-EOS (A) [Fig. 16 (a)], the onset density \(\rho_{\rm B}^{\rm c}(\Sigma^{-})\) is given by \(\rho_{\rm B}^{\rm c}(\Sigma^{-})\)=\(\rho_{\rm B}^{*}(\Sigma^{-})\) (\(=0.525\) fm\({}^{-3}\)) with \(\rho_{\Sigma^{-}}/\rho_{\rm B}=0\), and the \(\Sigma^{-}\)-mixing ratio increases continuously from zero with increase in the baryon number density. For H-EOS (B) [Fig. 16 (b)], the onset density \(\rho_{\rm B}^{\rm c}(\Sigma^{-})\) (\(\sim 0.59\) fm\({}^{-3}\)) is slightly larger than the minimal density \(\rho_{\rm B}^{*}(\Sigma^{-})\) (=0.575 fm\({}^{-3}\)) satisfying the \(\Sigma^{-}\)-mixing conditions, and the \(\Sigma^{-}\)-mixing starts with a nonzero value of \(\rho_{\Sigma^{-}}/\rho_{\rm B}\). The dependence of the chemical potentials \(\mu_{\Sigma^{-}}\) and \(\mu_{n}\) on the \(\Sigma^{-}\)-mixing ratio is correlated with that of the baryon potentials \(V_{\Sigma^{-}}\) and \(V_{n}\), respectively, which leads to the difference of the onset mechanisms of the \(\Sigma^{-}\)-mixing between H-EOS (A) and (B).
Figure 15: Dependence of \\(V_{\\Lambda}\\), \\(V_{n}\\) (dashed lines), \\(\\mu_{\\Lambda}\\), \\(\\mu_{n}\\) (solid lines), and the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) (short dashed line) on the \\(\\Lambda\\)-mixing ratio \\(\\rho_{\\Lambda}/\\rho_{\\rm B}\\) in the noncondensed neutron-star matter at the minimal baryon number density \\(\\rho_{\\rm B}^{*}(\\Lambda)\\) above which the \\(\\Lambda\\)-mixing condition, \\(\\mu_{\\Lambda}\\) = \\(\\mu_{n}\\), is satisfied. (a) is for H-EOS (A), and (b) is for H-EOS (B). See the text for details.
## References
* [1] D. B. Kaplan, A. E. Nelson, Phys. Lett. **B 175**, 57 (1986).
* [2] T. Muto, R. Tamagaki, and T. Tatsumi, Prog. Theor. Phys. Suppl. **112**, 159 (1993). T. Muto, T. Takatsuka, R. Tamagaki, and T. Tatsumi, Prog. Theor. Phys. Suppl. **112**, 221 (1993).
* [3] T. Tatsumi, Prog. Theor. Phys. Suppl. **120**, 111 (1995).
* [4] C. -H. Lee, G. E. Brown, D. -P. Min and M. Rho, Nucl. Phys. **A 585**, 401 (1995).
* [5] H. Fujii, T. Maruyama, T. Muto, and T. Tatsumi, Nucl. Phys. **A597**, 645 (1996).
* [6] C. -H. Lee, Phys. Rep. **275**, 197 (1996).
* [7] M. Prakash, I. Bombaci, M. Prakash, P. J. Ellis, J. M. Lattimer, and R. Knorren, Phys. Rep. **280**, 1 (1997).
* [8] V. Thorsson, M. Prakash, and J. M. Lattimer, Nucl. Phys. **A 572**, 693 (1994) ; _ibid_**A 574**, 851 (1994) (E).
* [9] N. K. Glendenning and J. Schaffner-Bielich, Phys. Rev. **C 60**, 025803 (1999). N. K. Glendenning, Phys. Rep. **342**, 393 (2001).
* [10] H. Heiselberg, C.J. Pethick, and E. F. Staubo, Phys. Rev. Lett. **70**, 1355 (1993).
Figure 16: Dependence of the baryon potentials \\(V_{\\Sigma^{-}}\\), \\(V_{n}\\) (dashed lines), the difference of the \\(\\Sigma^{-}\\) chemical potential and charge chemical potential, \\(\\mu_{\\Sigma^{-}}-\\mu\\) (solid line), the neutron chemical potential \\(\\mu_{n}\\) (solid line), and the total energy per baryon \\({\\cal E}^{\\prime}/\\rho_{\\rm B}\\) (short dashed line) on the \\(\\Sigma^{-}\\)-mixing ratio \\(\\rho_{\\Sigma^{-}}/\\rho_{\\rm B}\\) in the noncondensed (\\(n\\), \\(p\\), \\(\\Lambda\\), \\(e^{-}\\)) matter at the minimal baryon number density \\(\\rho_{\\rm B}^{*}(\\Sigma^{-})\\) above which the \\(\\Sigma^{-}\\)-mixing condition, \\(\\mu_{\\Sigma^{-}}=\\mu_{n}+\\mu\\), is satisfied. (a) is for H-EOS (A), and (b) for H-EOS (B). See the text for details.
* [11] M. Christiansen, N. K. Glendenning, and J. Schaffner-Bielich, Phys. Rev. **C 62**, 025804 (2000).
* [12] T. Norsen and S. Reddy, Phys. Rev. **C 63**, 065804 (2001).
* [13] D. N. Voskresensky, M. Yasuhira, and T. Tatsumi, Nucl. Phys. **A 723**, 291 (2003).
* [14] T. Maruyama, T. Tatsumi, D. N. Voskresensky, T. Tanigawa, and S. Chiba, Nucl.Phys. **A 749**, 186c (2005). T. Maruyama, T. Tatsumi, D. N. Voskresensky, T. Tanigawa, T. Endo, and S. Chiba, Phys. Rev. **C 73**, 035802 (2006). T. Maruyama, T. Tatsumi, T. Endo, and S. Chiba, Recent Res. Devel. Physics **7**, 1 (2006).
* [15] G. E. Brown and H. A. Bethe, Astrophys. J. **423**, 659 (1994).
* [16] T. W. Baumgarte, S. L. Shapiro, and S. Teukolsky, Astrophys. J. **443**, 717 (1995); **458**, 680 (1996).
* [17] J. A. Pons et al., Phys. Rev. **C 62**, 035803 (2000); Astrophys. J. **553**, 382 (2001).
* [18] T. Tatsumi and M. Yasuhira, Nucl. Phys. **A 653**, 133 (1999). M. Yasuhira and T. Tatsumi, Nucl. Phys. **A 690**, 769 (2001).
* [19] G. E. Brown, K. Kubodera, D. Page, and P. Pizzecherro, Phys. Rev. **D 37**, 2042 (1988).
* [20] T. Tatsumi, Prog. Theor. Phys. **80**, 22 (1988).
* [21] D. Page and E. Baron, Astrophys. J. **254**, L17 (1990).
* [22] H. Fujii, T. Muto, T. Tatsumi, and R. Tamagaki, Nucl. Phys. **A571**, 758 (1994) ; Phys. Rev. **C50**, 3140 (1994).
* [23] T. Muto and T. Tatsumi, Phys. Lett. **B 283**, 165 (1992).
* [24] G. E. Brown, K. Kubodera, M. Rho, and V. Thorsson, Phys. Lett. **291**, 355 (1992).
* [25] T. Muto, Prog. Theor. Phys. **89**, 415 (1993).
* [26] E. E. Kolomeitsev, D. N. Voskresensky, and B. Kampfer, Nucl. Phys. **A 588**, 889 (1995).
* [27] N. K. Glendenning, Astrophys. J. **293**, 470 (1985) ; N. K. Glendenning and S. A. Moszkowski, Phys. Rev. Lett. **67**, 2414 (1991).
* [28] P. J. Ellis, R. Knorren, and M. Prakash, Phys. Lett. **B349**, 11 (1995). R. Knorren, M. Prakash, and P. J. Ellis, Phys. Rev. **C52**, 3470 (1995).
* [29] J. Schaffner and I. N. Mishustin, Phys. Rev. **C53**, 1416 (1996).
* [30] S. Pal, M. Hanauske, I. Zakout, H. Stocker, and W. Greiner, Phys. Rev. **C 60**, 015802 (1999).
* [31] M. Hanauske, D. Zschiesche, S. Pal, S. Schramm, H. Stocker, and W. Greiner, Astrophys. J. **537**, 958 (2000).
* [32] P. K. Sahu, Phys. Rev. **C 62**, 045801 (2000).
* [33] H. Huber, F. Weber, M. K. Weigel, and Ch. Schaab, Int. J. Mod. Phys. **E 7**, 301 (1998).
* [34] M. Baldo, G. F. Burgio, and H. -J. Schulze, Phys. Rev. **C 58**, 3688 (1998) ; _ibid_**C 61**, 055801 (2000).
* [35] I. Vidana, A. Polls, A. Ramos, M. Hjorth-Jensen, and V. G. J. Stoks, Phys. Rev. **C 61**, 025802 (2000) ; I. Vidana, A. Polls, A. Ramos, L. Engvik, and M. Hjorth-Jensen, Phys. Rev. **C 62**, 035801 (2000).
* [36] S. Balberg and A. Gal, Nucl. Phys. **A625**, 435 (1997).
* [37] S. Nishizaki, Y. Yamamoto, and T. Takatsuka, Prog. Theor. Phys. **108**, 703 (2002).
* [38] T. Takatsuka, Prog. Theor. Phys. _Supplement_**156**, 84 (2004).
* [39] S. Pal, D. Bandyopadhyay, and W. Greiner, Nucl. Phys. **A 674**, 553 (2000).
* [40] S. Banik and D. Bandyopadhyay, Phys. Rev. **C 63**, 035802 (2001) ; _ibid_**C 64**, 055805 (2001).
* [41] D. P. Menezes, P. K. Panda, and C. Providencia, Phys. Rev. **C 72**, 035802 (2005).
* [42] C. H. Hyun, C. Y. Ryu, and S. W. Hong, Soryushiron Kenkyu (Japanese Bulletin of elementary physics) **114**, B66 (2006).
* [43] T. Muto, Nucl. Phys. **A 691**, 447c (2001); **A 697**, 225 (2002).
* [44] E. E. Kolomeitsev and D. N. Voskresensky, Phys. Rev. **C 68**, 015803 (2003).
* [45] A. Gal, Prog. Theor. Phys. _Supplement_**156**, 1 (2004), and references therein.
* [46] T. Muto, unpublished.
* [47] K. Tsushima, K. Saito, A. W. Thomas, and S. V. Wright, Phys. Lett. **B 429**, 239 (1998); _ibid_. **436**, 453 (1998) (E).
* [48] S.J. Dong, J.-F. Lagae, and K.F. Liu, Phys. Rev. **D 54**, 5496 (1996).
* [49] D. J. Millener, C. B. Dover, and A. Gal, Phys. Rev. **C 38**, 2700 (1988).
* [50] M. Kohno, Y. Fujiwara, T. Fujita, C. Nakamoto, and Y. Suzuki, Nucl. Phys. **A 674**, 229 (2000).
* [51] Y. Fujiwara, M. Kohno, C. Nakamoto, and Y. Suzuki, Phys. Rev. **C 64**, 054001 (2001).
* [52] S. Bart et al., Phys. Rev. Lett. **83**, 5238 (1999).
* [53] J. Dabrowski, Phys. Rev. **C 60**, 025205 (1999).
* [54] H. Noumi et al., Phys. Rev. Lett. **89**, 072301 (2002); Phys. Rev. Lett. **90**, 049902 (2003).
* [55] J. Dabrowski and J. Rozynek, Acta Phys. Polon. **B 35**, 2303 (2004).
* [56] T. Harada and Y. Hirabayashi, Nucl. Phys. **A 759**, 143 (2005).
* [57] J. Mares, E. Friedman, A. Gal, and B. K. Jennings, Nucl. Phys. **A 594**, 311 (1995).
* [58] J. P. Blaizot, Phys. Rep. **64**, 171 (1980).
* [59] T. Muto, Nucl. Phys. **A 754**, 350c (2005). T. Muto, AIP Conference Proceedings **847**, 439 (2006) (_Proc. of the International Symposium on Origin of Matter and Evolution of Galaxies 2005_, Ed. S. Kubono, Tokyo, Japan.).
* [60] A. R. Bodmer, Phys. Rev. **D4**, 1601 (1971).
* [61] S. A. Chin and A. K. Kerman, Phys. Rev. Lett. **43**, 1292 (1979).
* [62] E. Witten, Phys. Rev. **D30**, 272 (1984).
* [63] E. Farhi and R. L. Jaffe, Phys. Rev. **D30**, 2379 (1984).
* [64] T. D. Lee and G. C. Wick, Phys. Rev. **D 9**, 2291 (1974).
* [65] J. B. Hartle et al., Astrophys. J. **199**, 471 (1975).
* [66] A. B. Migdal et al., Phys. Lett. **B 65**, 423 (1976).
* [67] B. W. Lynn, A. E. Nelson, and N. Tetradis, Nucl. Phys. **B 345**, 186 (1990).
* [68] J. Schaffner-Bielich, M. Hanauske, H. Stocker, and W. Greiner, Phys. Rev. Lett. **89**, 171101 (2002).
* [69] Y. Akaishi and T. Yamazaki, Phys. Rev. **C 65**, 044005 (2002); T. Yamazaki and Y. Akaishi, Phys. Lett. **B 535**, 70 (2002).
* [70] A. Dote, H. Horiuchi, Y. Akaishi, and T. Yamazaki, Phys. Lett. **B 590**, 51 (2004) ; Phys. Rev. **C 70**, 044313 (2004).
* [71] T. Yamazaki, A. Dote, and Y. Akaishi, Phys. Lett. **B 587**, 167 (2004).
* [72] T. Kishimoto, Phys. Rev. Lett. **83**, 4701 (1999).
* [73] M. Iwasaki et al., Nucl. Instrum. Methods Phys. Res. **A 473**, 286 (2001).
* [74] T. Kishimoto et al., Prog. Theor. Phys. _Supplement_**149**, 264 (2003); Nucl. Phys. **A 754**, 383c (2005).
* [75] M. Iwasaki et al., nucl-ex/0310018. T. Suzuki et al., Nucl. Phys. **A 754**, 375c (2005); Phys. Lett. **B 597**, 263 (2004).
* [76] M. Agnello et al., Phys. Rev. Lett. **94**, 212303 (2005).
* [77] J. Mares, E. Friedman, and A. Gal, Phys. Lett. **B 606**, 295 (2005) ; Nucl. Phys. **A 770**, 84 (2006).
* [78] J. Yamagata, H. Nagahiro, Y. Okumura, and S. Hirenzaki, Prog. Theor. Phys. **114**, 301 (2005) ; _ibid_**114**, 905 (2005) (E). J. Yamagata, H. Nagahiro, and S. Hirenzaki, Phys. Rev. **C 74**, 014604 (2006).
* [79] E. Oset and H. Toki, Phys. Rev. **C 74**, 015207 (2006). V. K. Magas, E. Oset, A. Ramos, and H. Toki, Phys. Rev. **C 74**, 025206 (2006).
* [80] For a review, eds. A. Nakamura, T. Hatsuda, A. Hosaka, and T. Kunihiro, Prog. Theor. Phys. _Supplement_**153** (2004).
* [81] P. F. Bedaque and T. Schafer, Nucl. Phys. **A 697**, 802 (2002).
* [82] D. B. Kaplan and S. Reddy, Phys. Rev. **D 65**, 054042 (2002).
* [83] M. Buballa, Phys. Lett. **B609**, 57 (2005).
* [84] M. M. Forbes, Phys. Rev. **D 72**, 094032 (2005).
gr-qc/yymmnnn
**Black hole and holographic dark energy**
Yun Soo Myung*
Footnote *: e-mail address: [email protected]
_Institute of Mathematical Science and School of Computer Aided Science_
_Inje University, Gimhae 621-749, Korea_
We discuss the connection between black hole and holographic dark energy. We examine the issue of the equation of state (EOS) for holographic energy density as a candidate for the dark energy carefully. This is closely related to the EOS for black hole, because the holographic dark energy comes from the black hole energy density. In order to derive the EOS of a black hole, we may use its dual (quantum) systems. Finally, a regular black hole without the singularity is introduced to describe an accelerating universe inside the cosmological horizon. Inspired by this, we show that the holographic energy density with the cosmological horizon as the IR cutoff leads to the dark energy-dominated universe with \(\omega_{\Lambda}=-1\).
## 1 Introduction
Observations of type Ia supernovae suggest that our universe is accelerating [1]. Considering the \(\Lambda\)CDM model [2, 3], the dark energy and cold dark matter contribute \(\Omega_{\Lambda}^{\rm ob}\simeq 0.74\) and \(\Omega_{\rm CDM}^{\rm ob}\simeq 0.22\) to the critical density of the present universe. Recently the combination of WMAP3 and Supernova Legacy Survey data shows a significant constraint on the EOS for the dark energy, \(w_{\rm ob}=-0.97^{+0.07}_{-0.09}\) in a flat universe1 [5].
Footnote 1: Another combination of data shows \\(w_{\\rm ob}=-1.04\\pm 0.06\\)[4].
Although there exist a number of dark energy models [6], the two promising candidates are the cosmological constant and the quintessence scenario. The EOS for the latter is determined dynamically by the scalar or tachyon. In the study of dark energy [7], the first issue is whether the dark energy is a cosmological constant with \(\omega_{\Lambda}=-1\). If the dark energy is shown not to be a cosmological constant, the next is whether the phantom-like state of \(\omega_{\Lambda}<-1\) is allowed. However, most theoretical models that may lead to \(\omega_{\Lambda}<-1\) confront serious problems including violation of causality. The last issue is whether \(\omega_{\Lambda}\) is changing (dynamical) as the universe evolves.
On the other hand, there exists another model of the dark energy arisen from the holographic principle. The authors in [8] showed that in quantum field theory, the ultraviolet (UV) cutoff \\(\\Lambda\\) could be related to the infrared (IR) cutoff \\(L\\) due to the limit set by forming a black hole. If \\(\\rho_{\\Lambda}=\\Lambda^{4}\\) is the vacuum energy density caused by the UV cutoff, the total energy for a system of size \\(L\\) should not exceed the mass of the system-size black hole:
\\[E_{\\Lambda}\\leq E_{BH}\\longrightarrow L^{3}\\rho_{\\Lambda}\\leq M_{\\rm p}^{2}L. \\tag{1}\\]
If the largest cutoff \\(L\\) is chosen to be the one saturating this inequality, the holographic energy density is given by the energy density of a system-size black hole as
\\[\\rho_{\\Lambda}=\\frac{3c^{2}M_{\\rm p}^{2}}{8\\pi L^{2}}\\simeq\\rho_{\\rm BH},\\ \\ \\rho_{\\rm BH}=\\frac{3M_{\\rm p}^{2}}{8\\pi L^{2}} \\tag{2}\\]
with a constant \(c\). Here we regard \(\rho_{\Lambda}\) as the dynamical cosmological constant like the quintessence density of \(\rho_{\rm Q}=\dot{\phi}^{2}/2+V(\phi)\) [7]. At the Planck scale of \(L=M_{\rm p}^{-1}\), it is just the vacuum energy density \(\rho_{\rm V}=M_{\rm p}^{2}\Lambda_{\rm eff}/8\pi\) of the universe at \(\Lambda_{\rm eff}\sim M_{\rm p}^{2}\): \(\rho_{\Lambda}\sim\rho_{\rm p}\sim M_{\rm p}^{4}\), while for larger \(L\) it decreases with the inverse-area law. The total energy density dilutes as \(L^{-3}\) due to the evolution of the universe, whereas its upper limit set by gravity (black hole) decreases as \(L^{-2}\). Even though it may explain the present data, this approach with \(L=H_{0}^{-1}\) fails to recover the EOS for a dark energy-dominated universe. This is because there exists missing information about the pressure \(p_{\Lambda}\) of holographic dark energy.
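The numerical factor in Eq. (2) can be made explicit. A minimal check, equating the system energy to that of a system-size black hole with the Schwarzschild condition \(E_{\rm BH}=M_{\rm p}^{2}L/2\) (used again in the Discussions) and \(V=4\pi L^{3}/3\):

\[\rho_{\rm BH}=\frac{E_{\rm BH}}{V}=\frac{M_{\rm p}^{2}L/2}{4\pi L^{3}/3}=\frac{3M_{\rm p}^{2}}{8\pi L^{2}},\]

so that the saturated holographic energy density \(\rho_{\Lambda}=c^{2}\rho_{\rm BH}\) reproduces Eq. (2).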
It is not easy to determine the EOS for a system including gravity with the UV and IR cutoffs. If one considers \\(L=H_{0}^{-1}\\) together with the cold dark matter, the EOS may take the form of \\(w_{\\Lambda}=0\\)[10], which is just that of the cold dark matter. However, introducing an interaction between holographic dark energy and cold dark matter may lead to an accelerating universe [11]. Interestingly, the future event horizon2 was introduced to obtain an accelerating universe [13, 14, 15, 16, 17].
Footnote 2: As a concrete example, we introduce the definition of the future event horizon \(R_{\rm FH}=a(t)\int_{t}^{\infty}\frac{dt^{\prime}}{a(t^{\prime})}\) with the Friedmann-Robertson-Walker metric \(ds_{\rm FRW}^{2}=-dt^{2}+a^{2}(t)(d\tilde{r}^{2}+\tilde{r}^{2}d\tilde{\Omega}_{2}^{2})\). Assuming the power-law behavior of \(a(t)=a_{0}t^{\frac{2}{3(1+\omega_{\Lambda})}}\) [12], one finds \(R_{\rm FH}=-\frac{3(1+\omega_{\Lambda})}{1+3\omega_{\Lambda}}t\) for \(-1<\omega_{\Lambda}<-1/3\). In the case of \(a(t)=a_{0}e^{Ht}\), one has \(R_{\rm FH}=1/H\) with \(\omega_{\Lambda}=-1\). This indicates that de Sitter space can also be derived from the future event horizon.
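The result quoted in footnote 2 follows from a one-line integral. For \(a(t)=a_{0}t^{n}\) with \(n=\frac{2}{3(1+\omega_{\Lambda})}>1\), i.e., \(-1<\omega_{\Lambda}<-1/3\),

\[R_{\rm FH}=a(t)\int_{t}^{\infty}\frac{dt^{\prime}}{a(t^{\prime})}=t^{n}\,\frac{t^{1-n}}{n-1}=\frac{t}{n-1}=-\frac{3(1+\omega_{\Lambda})}{1+3\omega_{\Lambda}}\,t,\]

while for \(n\leq 1\) (\(\omega_{\Lambda}\geq-1/3\)) the integral diverges and no future event horizon exists.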
At this stage, we emphasize that the energy density \\(\\rho_{\\rm BH}\\) of the black hole is used to derive the holographic dark energy. On the other hand, we do not use the pressure \\(p_{\\rm BH}\\) of the black hole to find the correct EOS of holographic dark energy. Hence an important issue is to find the pressure of the black hole.
In this Letter, we discuss a few ways of obtaining the EOS of the black hole from its dual (quantum) systems. Further, we introduce a regular black hole to obtain the dark energy from a singularity-free black hole. Finally, we show that the holographic energy density \(\rho_{\Lambda}\) with the cosmological horizon leads to the dark energy-dominated universe with \(\omega_{\Lambda}=-1\).
## 2 EOS for black hole from dual (quantum) systems
We start with the first law of thermodynamics
\\[dE=TdS-pdV. \\tag{3}\\]
On the other hand, the corresponding form of a non-rotating black hole is given by
\\[dE=TdS. \\tag{4}\\]
The most conservative interpretation of \(pdV=0\) is that the pressure of a black hole vanishes, \(p=0\). This is consistent with the integral form of \(E=2TS\) (Euler relation). If one really chooses \(p_{\rm BH}=0\), the black hole plays the role of the cold dark matter with
\\[w_{\\rm BH}=0. \\tag{5}\\]
It seems that the above is consistent with the EOS \(w_{\Lambda}=0\) for the holographic dark energy when choosing the Hubble horizon \(L=H_{0}^{-1}\) [10].
As a non-zero pressure black hole, we may consider the AdS black hole. In this case, we use the AdS-CFT correspondence to realize the holographic principle [18]. In fact, we have the dual holographic model of the boundary CFT without gravity. Hence we define the energy density and pressure on the boundary by using the AdS-CFT correspondence. The EOS of CFT is given by
\\[w_{\\rm CFT}=\\frac{1}{3} \\tag{6}\\]
which shows that the CFT looks like radiation-like matter at high temperature [19]. It is suggested that the AdS black hole may have the same EOS as that of the CFT at high temperature. This means that we could obtain the EOS of the black hole at high temperature from its dual CFT through the AdS-CFT correspondence.
However, for the Schwarzschild black hole, the corresponding holographic model has not yet been found [20]. This may be so because the Schwarzschild black hole is too simple to split the energy into the black hole energy and the Casimir energy, in contrast to the AdS black hole [21]. Recently, there has been progress in this direction. The authors of [22] showed that the energy-entropy duality transforms a strongly interacting gravitational system (Schwarzschild black hole) into a weakly interacting quantum system (quantum gas). The duality transformation between the black hole \((E,\ S,\ T)\) and the dual quantum system \((E^{\prime},\ S^{\prime},\ T^{\prime})\) is proposed as
\\[S^{\\prime}\\to E=M,\\ E^{\\prime}\\to S=A/4,\\ T^{\\prime}\\rightarrow\\frac{1}{T}=8\\pi M \\tag{7}\\]
with \\(A=4\\pi M^{2}\\). This may provide a hint for the quantum-corrected EOS of the Schwarzschild black hole. In this case, they used an extensive thermodynamic relation
\\[E=TS-pV \\tag{8}\\]
which holds if the pressure is non-zero. A choice of the negative pressure \\(p_{\\rm QG}=-TS/V\\) leads to
\\[E=2TS, \\tag{9}\\]
which is just the case of the black hole3. However, this pressure term does not enter into the first law of Eq. (4). This is because they require a constraint of \(pdV=0\) to derive the underlying quantum model. As the temperature is associated with the black hole thermodynamics, the pressure of \(p_{\rm QG}=-TS/V\) is related to the quantum nature of the corresponding holographic model. Here we find the EOS for the quantum gas
Footnote 3: In the case of the holographic model, the pressure is given by \(p_{\rm QG}=-\rho_{\rm QG}/2\), where \(\rho_{\rm QG}\) is the energy density of the dual quantum gas.
\\[w_{\\rm QG}=-\\frac{1}{2}, \\tag{10}\\]
which indicates a kind of dark energy. If one chooses \(w_{\rm QG}\) as the EOS of the Schwarzschild black hole, this could describe an accelerating universe of \(w_{\rm QG}<-1/3\). However, \(\omega_{\rm QG}=-0.5\) is not close to the observational data \(\omega_{\rm ob}=-0.97^{+0.07}_{-0.09}\).
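The step from the negative pressure to Eq. (10) is immediate once the energy density of the dual quantum gas is written out:

\[\rho_{\rm QG}=\frac{E}{V}=\frac{2TS}{V},\qquad p_{\rm QG}=-\frac{TS}{V}\quad\Longrightarrow\quad w_{\rm QG}=\frac{p_{\rm QG}}{\rho_{\rm QG}}=-\frac{1}{2}.\]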
## 3 \\(\\Lambda\\) black hole
We discuss another issue, that of the singularity, in connection with the holographic energy density [24]. The holographic dark energy states that the universe is filled with the maximal amount of dark energy so that our universe has become a black hole. However, intuitive evidence that this argument may be wrong is that there is no definite indication that we are approaching a black hole singularity anytime soon. In deriving the holographic energy density in Eq. (2), we did not take into account the singularity inside the event horizon seriously.
In order to avoid the singularity, one may introduce a regular black hole called the de Sitter-Schwarzschild (\(\Lambda\)) black hole [25]. Using a self-gravitating droplet of anisotropic fluid of mass density \(\rho_{\rm m}=\rho_{\rm V}e^{-r^{3}/r_{\rm CH}^{2}r_{\rm EH}}\) with \(r_{\rm CH}=\sqrt{3/\Lambda_{\rm eff}}=1/H\) and \(r_{\rm EH}=2m/M_{\rm p}^{2}\), the conservation of the energy-momentum tensor \(T^{\mu}_{\ \ \nu}={\rm diag}[\rho_{\rm m},-p_{\rm r},-p_{\perp},-p_{\perp}]\) leads to
\\[p_{\\rm r}=-\\rho_{\\rm m},\\ \\ p_{\\perp}=-\\rho_{\\rm m}-r\\frac{\\partial_{r}\\rho_{ \\rm m}}{2} \\tag{11}\\]
with the radial pressure \(p_{\rm r}\) and the tangential pressure \(p_{\perp}\). The Arnowitt-Deser-Misner mass is defined by \(m=4\pi\int_{0}^{\infty}\rho_{\rm m}r^{2}dr\). If \(p_{\perp}=0\), one finds the zero gravity surface where the gravitational repulsion balances the gravitational attraction. Here one finds the solution4 that includes de Sitter space near \(r=0\) and asymptotically Schwarzschild spacetime at \(r=\infty\).
Footnote 4: For \(r\to 0\), one has the de Sitter metric \(ds_{\rm dS}^{2}=-(1-H^{2}r^{2})d\tau^{2}+(1-H^{2}r^{2})^{-1}dr^{2}+r^{2}d\Omega_{2}^{2}\) with \(T_{\mu\nu}\simeq\rho_{\rm m}g_{\mu\nu}\) (\(\rho_{\rm m}=\rho_{\rm V}=M_{\rm p}^{2}\Lambda_{\rm eff}/8\pi\)), while for \(r\rightarrow\infty\), one finds the Schwarzschild metric \(ds_{\rm S}^{2}=-(1-r_{\rm EH}/r)d\tau^{2}+(1-r_{\rm EH}/r)^{-1}dr^{2}+r^{2}d\Omega_{2}^{2}\) with \(T_{\mu\nu}\simeq 0\). Hence for \(m>m_{\rm c}\), one has two horizons: outer event horizon \(r_{\rm EH}\) and inner cosmological horizon \(r_{\rm CH}\). Actually, the \(\Lambda\) black hole looks like the Reissner-Nordstrom black hole with the singularity replaced by de Sitter space.
As is shown in Fig. 1, the matter source \(\rho_{\rm m}\) smoothly connects the de Sitter vacuum at the origin with the Minkowski vacuum at infinity. For \(m\geq m_{\rm c}\simeq 0.3M_{\rm p}\sqrt{\rho_{\rm p}/\rho_{\rm V}}\), de Sitter-Schwarzschild geometry describes a vacuum nonsingular black hole with \(r_{\rm EH}>r_{\rm CH}\), while for \(m<m_{\rm c}\), it describes the G-lump, which is a vacuum self-gravitating object without a horizon. At \(m=m_{\rm c}\), we have the extremal black hole with \(r_{\rm EH}=r_{\rm CH}\). Here de Sitter space replaces the singularity. In this case, we have the EOS
\\[w_{\\rm dS}=-1, \\tag{12}\\]
inside the regular black hole. Therefore we attempt to specify its EOS for the holographic dark energy. If the radius of cosmological horizon \\(r_{\\rm CH}\\) is taken to be the IR cutoff, one may consider the interior de Sitter region to be a model of dark energy. Interestingly, the extremal case represents the limiting case when the Schwarzschild radius of the system, whose size is the IR cutoff, is equal to the IR cutoff itself (\\(r_{\\rm EH}=L=r_{\\rm CH}\\)). However, two problems arise in this case. Any infinitesimal step towards a non-saturated holographic dark energy would cause a sudden jump in the EOS: from \\(-1\\) to \\(0\\), so the EOS cannot be clearly determined. Furthermore, the IR cutoff cannot be clearly determined because we have the interior de Sitter space and thus, the Hubble distance and the event horizon are degenerate. We note that the holographic energy density \\(\\rho_{\\Lambda}\\) with \\(L=r_{\\rm CH}\\) is static because \\(r_{\\rm EH}\\) is static. Thus, the holographic dark energy approach is trivial for the \\(r_{\\rm EH}=r_{\\rm CH}\\) case of \\(\\Lambda\\) black hole.
In order to find a non-trivial case, we use the connection between the static de Sitter space \((\tau,r)\) and the dynamic Friedmann-Robertson-Walker spacetime \((t,\tilde{r})\)
\[\tau=t-\frac{1}{2H}\ln[1-H^{2}r^{2}],\ \ r=\frac{\tilde{r}}{\sqrt{e^{-2H\tau}+H^{2}\tilde{r}^{2}}}. \tag{13}\]
According to the Penrose diagram in Ref.[26], their asymptotic behaviors are closely related to each other. In de Sitter space, one has the future cosmological horizon \\(r_{\\rm CH}=1/H\\) at \\(\\tau=\\infty\\) only, while in the Friedmann-Robertson-Walker space, one has the future event horizon \\(R_{\\rm FH}=1/H\\) for \\(-\\infty\\leq t\\leq\\infty\\). In case of \\(\\tau=\\infty\\), a dynamical feature of \\(\\rho_{\\Lambda}\\) is recovered and thus we have \\(\\omega_{\\Lambda}=-1\\). In this sense, the EOS of \\(\\omega_{\\rm dS}=-1\\) is considered to be the input and at most, a consistency condition. Hence Eq.(12) is not considered as a derived result.
Inspired by this, we propose that the singularity-free condition for the holographic dark energy \(\rho_{\Lambda}\) may determine the equation of state. As was pointed out in footnote 2, we obtain the de Sitter solution \(L=r_{\rm CH}\) from the future event horizon \(R_{\rm FH}\). Here we choose the present universe-size cosmological horizon as the IR cutoff [27, 28], which contrasts with the case of the Hubble horizon \(L=1/H_{0}\) [10]. For \(L=1/H_{0}\), we could not determine its EOS clearly, while for \(L=r_{\rm CH}=1/H\), we could determine its EOS to be \(w_{\Lambda}=-1\).
## 4 Discussions
We are interested in the equation of state for the black hole, because the holographic dark energy came from the energy density of the black hole. Here we wish to discuss the connection between the black hole and holographic dark energy. Cohen et al. [8] mentioned that if one introduces the holographic principle, one could include the gravity effects into the quantum field theory naturally. This is because general relativity (black hole) is the prime example of a holographic theory, whereas quantum field theories are not holographic in their present form. The first step to realize the holographic principle is given by the holographic entropy bound, which states that the entropy of the system should be less than or equal to the entropy of the system-size black hole: \(S_{\Lambda}=L^{3}\Lambda^{3}\leq S_{\rm BH}=\pi M_{\rm p}^{2}L^{2}\). As was clarified by Cohen et al., this bound includes many states with \(L_{\rm S}\sim L^{5/3}M_{\rm p}^{2/3}>L\). Considering the energy \(E_{\Lambda}=L^{3}\Lambda^{4}\) of the system together with \(\Lambda\sim(M_{\rm p}^{2}/L)^{1/3}\) (the saturation of the holographic entropy bound), it implies \(L_{\rm S}\sim E_{\Lambda}>L\) in the Planck units. This shows a contradiction: a larger black hole can be formed from the system by gravitational collapse. Hence, one requires that no state in the Hilbert space have energy so large that the Schwarzschild radius \(L_{\rm S}\sim E_{\Lambda}\) exceeds \(L\). Then, a relation between the size \(L\) of the system, providing the IR cutoff, and the UV cutoff \(\Lambda\) is required to be Eq. (1) (\(L_{\rm S}\sim E_{\Lambda}<L\) in the Planck units), which provides the constraint on \(L\) that excludes all states lying within \(L_{\rm S}\). In physical terms, it corresponds to the assumption that the effective field theory describes all states of the system excluding those for which it has already collapsed to a black hole. In other words, this relation can be rewritten as \(E_{\Lambda}\leq E_{\rm BH}\), called the Bekenstein energy bound. This means that the energy of the system should be less than or equal to the energy of the system-size black hole. Actually, both the holographic entropy bound and the Bekenstein energy bound are based on the black hole.
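The scaling argument can be made explicit (dropping factors of order unity and working in Planck units, \(M_{\rm p}=1\)). Saturating the holographic entropy bound fixes the UV cutoff, and the corresponding Schwarzschild radius then outgrows the box for \(L>1\):

\[L^{3}\Lambda^{3}\sim M_{\rm p}^{2}L^{2}\;\Rightarrow\;\Lambda\sim\left(\frac{M_{\rm p}^{2}}{L}\right)^{1/3},\qquad E_{\Lambda}=L^{3}\Lambda^{4}\sim L^{5/3}M_{\rm p}^{8/3},\qquad L_{\rm S}\sim\frac{E_{\Lambda}}{M_{\rm p}^{2}}\sim L^{5/3}M_{\rm p}^{2/3}>L.\]

Demanding instead \(E_{\Lambda}\leq E_{\rm BH}\sim M_{\rm p}^{2}L\) lowers the cutoff to \(\Lambda\lesssim(M_{\rm p}^{2}/L^{2})^{1/4}\), which is the content of Eq. (1).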
If one takes the saturation of the energy bound in Eq. (2) (the limiting case) as the holographic dark energy density, its EOS depends on the IR cutoff and/or the interaction with cold dark matter.
Let us calculate the average energy density \(\rho\) of a homogeneous spherical system that saturates the holographic entropy bound. For this purpose, we introduce Bekenstein's entropy bound \(S\leq 2\pi EL\), which is another entropy bound. If the system saturates Bekenstein's entropy bound and the holographic entropy bound (\(S=2\pi EL=S_{\rm BH}\)), then it satisfies the Schwarzschild condition of \(E=M_{\rm p}^{2}L/2=E_{\rm BH}\), which states that its maximal mass is half of its radius in the Planck units. The energy density \(\rho\) is given by the black hole energy density \(\rho=E/V=3M_{\rm p}^{2}/8\pi L^{2}=\rho_{BH}\), which is identical to the holographic energy density \(\rho_{\Lambda}\) with \(c^{2}=1\) shown in Eq. (2). This shows the close connection between the black hole and holographic dark energy.
As was pointed out in [14], the pressure of holographic dark energy is determined by the conservation of the energy-momentum tensor as \(p_{\Lambda}=-\frac{1}{3}\frac{d\rho_{\Lambda}}{d\ln a}-\rho_{\Lambda}\), which provides the EOS of \(\omega_{\Lambda}=\frac{p_{\Lambda}}{\rho_{\Lambda}}=-1+\frac{2}{3H}\frac{\dot{L}}{L}\). Hence, if one does not choose an appropriate form of \(L\), one cannot find its EOS. For example, if one chooses the Hubble horizon \(L=1/H\), it does not give the correct EOS [10], but it leads to the second Friedmann equation of \(\dot{H}=-\frac{3}{2}H^{2}(1+\omega_{\Lambda})\). On the other hand, choosing \(L=R_{\rm PH/FH}\) leads to \(\omega_{\Lambda}=-1/3(1\mp 2\sqrt{\Omega_{\Lambda}}/c)\). Despite the success of obtaining the EOS for \(L=R_{\rm PH/FH}\), this may not give us a promising solution to the dark energy problem because choosing the future event horizon just means an accelerating universe. That is, in order for the holographic dark energy to explain the accelerating universe, we first must assume that the universe is accelerating. This is not really what we want to obtain: a realistic dark energy model will be determined from cosmological dynamics with an appropriate EOS. In addition, \(\rho_{\Lambda}\) violates causality because the current expansion rate depends on the future expansion rate of the universe. Thus we may believe that taking the future event horizon as the IR cutoff is just a trick to get an accelerating universe in the holographic dark energy approach.
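For completeness, a minimal sketch of how this follows from \(\rho_{\Lambda}\propto L^{-2}\) and the conservation law \(\dot{\rho}_{\Lambda}+3H(\rho_{\Lambda}+p_{\Lambda})=0\):

\[\dot{\rho}_{\Lambda}=-2\frac{\dot{L}}{L}\rho_{\Lambda}\;\Rightarrow\;p_{\Lambda}=-\rho_{\Lambda}-\frac{\dot{\rho}_{\Lambda}}{3H}\;\Rightarrow\;\omega_{\Lambda}=-1+\frac{2}{3H}\frac{\dot{L}}{L}.\]

For \(L=1/H\) one has \(\dot{L}/L=-\dot{H}/H\), which gives back \(\dot{H}=-\frac{3}{2}H^{2}(1+\omega_{\Lambda})\); for \(L=R_{\rm FH}\) one has \(\dot{L}=HL-1\) and, from the Friedmann equation, \(L=c/(H\sqrt{\Omega_{\Lambda}})\), which yields \(\omega_{\Lambda}=-\frac{1}{3}(1+2\sqrt{\Omega_{\Lambda}}/c)\).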
This reliance stems from our ignorance of the black hole pressure, since mainly the energy density of the black hole is used to describe the holographic dark energy. Hence we described how to obtain the EOS of black holes from their dual systems as a first step to understand the nature of holographic dark energy, although a description of the pressure of the holographic dark energy is still lacking. In this approach, however, the limiting condition for the saturated holographic energy density in Eq. (2) is not recovered from the EOS of the black hole obtained from its dual systems.
Finally we consider the issue of the singularity together with the holographic dark energy. In this direction, we introduce the regular (\\(\\Lambda\\)) black hole with two horizons which includes de Sitter space near \\(r=0\\) and asymptotically Schwarzschild spacetime at \\(r=\\infty\\). We find that the singularity could be removed by choosing an appropriate mass distribution and de Sitter space appears inside the black hole. However, we recover the dynamical behavior of holographic energy density \\(\\rho_{\\Lambda}\\) with \\(L=1/H\\) at \\(\\tau=\\infty\\) because the static coordinates are used for calculation.
In conclusion, we show that the holographic dark energy without the singularity leads to the de Sitter acceleration with \(\omega_{\Lambda}=-1\).
## Acknowledgment
The author thanks Q. C. Huang for helpful discussions. This work was in part supported by the Korea Research Foundation (KRF-2006-311-C00249) funded by the Korea Government (MOEHRD) and the SRC Program of the KOSEF through the Center for Quantum Spacetime (CQUeST) of Sogang University with grant number R11-2005-021.
## References
* [1] A. G. Riess _et al._, Astron. J. **116** (1998) 1009 [astro-ph/9805201]; S. J. Perlmutter _et al._, Astrophys. J. **517** (1999) 565 [astro-ph/9812133]; A. G. Riess _et al._, Astrophys. J. **607** (2004) 665 [astro-ph/0402512]; P. Astier _et al._, Astron. Astrophys. **447** (2006) 31 [arXiv:astro-ph/0510447].
* [2] M. Tegmark _et al._ [SDSS Collaboration], Phys. Rev. D **69** (2004) 103501 [arXiv:astro-ph/0310723]; K. Abazajian _et al._ [SDSS Collaboration], Astron. J. **128** (2004) 502 [arXiv:astro-ph/0403325]; K. Abazajian _et al._ [SDSS Collaboration], Astron. J. **129** (2005) 1755 [arXiv:astro-ph/0410239].
* [3] H. V. Peiris _et al._, Astrophys. J. Suppl. **148** (2003) 213 [astro-ph/0302225]; C. L. Bennett _et al._, Astrophys. J. Suppl. **148** (2003) 1[astro-ph/0302207]; D. N. Spergel _et al._, Astrophys. J. Suppl. **148** (2003) 175[astro-ph/0302209].
* [4] U. Seljak, A. Slosar and P. McDonald, arXiv:astro-ph/0604335.
* [5] D. N. Spergel _et al._, arXiv:astro-ph/0603449.
* [6] E. J. Copeland, M. Sami and S. Tsujikawa, arXiv:hep-th/0603057.
* [7] A. Upadhye, M. Ishak, and P. J. Steinhardt, Phys. Rev. D **72** (2005) 063501[arXiv:astro-ph/0411803].
* [8] A. Cohen, D. Kaplan, and A. Nelson, Phys. Rev. Lett. **82** (1999) 4971[arXiv:hep-th/9803132].
* [9] P. Horava and D. Minic, Phys. Rev. Lett. **85** (2000) 1610[arXiv:hep-th/0001145]; S. Thomas, Phys. Rev. Lett. **89** (2002) 081301.
* [10] S. D. Hsu, Phys. Lett. B **594** (2004) 13[arXiv:hep-th/0403052].
* [11] R. Horvat, Phys. Rev. D **70** (2004) 087301 [arXiv:astro-ph/0404204].
* [12] T. Chiba, R. Takahashi and N. Sugiyama, Class. Quant. Grav. **22** (2005) 3745 [arXiv:astro-ph/0501661].
* [13] M. Li, Phys. Lett. B **603** (2004) 1[arXiv:hep-th/0403127].
* [14] Q. G. Huang and M. Li, JCAP **0408** (2004) 013 [arXiv:astro-ph/0404229].
* [15] Q-C. Huang and Y. Gong, JCAP **0408** (2004) 006[arXiv:astro-ph/0403590]; Y. Gong, Phys. Rev. D **70** (2004) 064029[arXiv:hep-th/0404030]; B. Wang, E. Abdalla and Ru-Keng Su, arXiv:hep-th/0404057; K. Enqvist and M. S. Sloth, Phys. Rev. Lett. **93** (2004) 221302 [arXiv:hep-th/0406019]; H. Kim, H. W. Lee, and Y. S. Myung, arXiv:hep-th/0501118; Phys. Lett. B **628** (2005) 11[arXiv:gr-qc/0507010]; X. Zhang, Int. J. Mod. Phys. D **14** (2005) 1597[arXiv:astro-ph/0504586]. X. Zhang and F. Q. Wu, Phys. Rev. D **72** (2005) 043524[arXiv:astro-ph/0506310].
* [16] Y. S. Myung, Phys. Lett. B **610** (2005) 18[arXiv:hep-th/0412224]; Mod. Phys. Lett. A **27** (2005) 2035[arXiv:hep-th/0501023]; A. J. M. Medved, arXiv:hep-th/0501100.
* [17] Y. S. Myung, Phys. Lett. B **626** (2005) 1[arXiv:hep-th/0502128]. B. Wang, Y. Gong, and E. Abdalla, Phys. Lett. B **624** (2005) 141[arXiv:hep-th/0506069]; H. Kim, H. W. Lee and Y. S. Myung, Phys. Lett. B **632** (2006) 605 [arXiv:gr-qc/0509040]; M. S. Berger and H. Shojaei, Phys. Rev. D **73** (2006) 083528 [arXiv:gr-qc/0601086]; M. S. Berger and H. Shojaei, arXiv:astro-ph/0606408; W. Zimdahl and D. Pavon, arXiv:astro-ph/0606555; M. R. Setare, Phys. Lett. B **642** (2006) 1 [arXiv:hep-th/0609069]. H. M. Sadjadi and M. Honardoost, arXiv:gr-qc/0609076; M. R. Setare, Phys. Lett. B **644** (2007) 99 [arXiv:hep-th/0610190]; M. R. Setare, J. Zhang and X. Zhang, arXiv:gr-qc/0611084; K. H. Kim, H. W. Lee and Y. S. Myung, arXiv:gr-qc/0612112.
* [18] E. Witten, Adv. Theor. Math. Phys. **2** (1998) 505[arXiv:hep-th/9803131].
* [19] E. Verlinde, arXiv:hep-th/0008140.
* [20] D. Klemm, A. C. Petkou, G. Siopsis and D. Zanon, Nucl. Phys. B **620** (2002) 519[arXiv:hep-th/0104141].
* [21] Y. S. Myung, Phys. Lett. B **636** (2006) 324[arXiv:gr-qc/0511104].
* [22] C. Balazs and I. Szapudi, arXiv:hep-th/0605190.
* [23] S. Sarkar and T. Padmanabhan, arXiv:gr-qc/0607042.
* [24] H. C. Kao, W. L. Lee and F. L. Lin, Phys. Rev. D **71** (2005) 123518[arXiv:astro-ph/0501487].
* [25] I. Dymnikova, Int. J. Mod. Phys. D **12** (2003) 1015[arXiv:gr-qc/0304110].
* [26] A. V. Frolov and L. Kofman, JCAP **0305** (2003) 009 [arXiv:hep-th/0212327]; Y. S. Myung, arXiv:hep-th/0307045.
* [27] T. Padmanabhan, Class. Quant. Grav. **22** (2005) L107 [arXiv:hep-th/0406060]; T. Padmanabhan, Curr. Sci. **88** (2005) 1057 [arXiv:astro-ph/0411044].
* [28] P. O. Mazur and E. Mottola, arXiv:gr-qc/0405111. | We discuss the connection between black hole and holographic dark energy. We examine the issue of the equation of state (EOS) for holographic energy density as a candidate for the dark energy carefully. This is closely related to the EOS for black hole, because the holographic dark energy comes from the black hole energy density. In order to derive the EOS of a black hole, we may use its dual (quantum) systems. Finally, a regular black hole without the singularity is introduced to describe an accelerating universe inside the cosmological horizon. Inspired by this, we show that the holographic energy density with the cosmological horizon as the IR cutoff leads to the dark energy-dominated universe with \\(\\omega_{\\Lambda}=-1\\). | Summarize the following text. |
The sensitivity and stability of the ocean's thermohaline circulation to finite amplitude perturbations
Mu Mu
LASG, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, China
Liang SUN
Department of Mechanics Engineering, University of Science and Technology of China, Hefei 230027, China;
LASG, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, China
Henk A. Dijkstra
Institute for Marine and Atmospheric Research Utrecht, Utrecht University, the Netherlands; Department of Atmospheric Science, Colorado State University, Fort Collins, CO, USA
**Key words:** thermohaline circulation, conditional optimal nonlinear perturbation, nonlinear stability, sensitivity
## 1 Introduction
A recurrent theme in fundamental research on climate variability is the sensitivity of the ocean's thermohaline circulation. When state-of-the-art climate models are used to calculate projections of future climate states as a response to different emission scenarios of greenhouse gases, a substantial spread in the model results is found. One of the reasons of this spread is the diverse behavior of the thermohaline circulation (McAvaney, 2001).
The sensitivity of the thermohaline circulation is caused by several feedbacks induced by the physical processes that determine the evolution of the thermohaline flow. One of these feedbacks is the salt-advection feedback which is caused by the fact that salt is transported by the thermohaline flow, but in turn influences the density difference which drives this flow. The salt-advection feedback can be conceptually understood in a two-box model (Stommel, 1961) where it is shown to cause multiple equilibria and hysteresis behavior.
In many models of the global ocean circulation, it appears that several equilibrium states may exist under similar forcing conditions. When the present equilibrium state, with about 16 Sv Atlantic overturning, is subjected to a quasi-steady freshwater input in the North Atlantic, eventually the circulation may collapse. In this collapsed state, there is deepwater formation in the Southern Ocean instead of in the North Atlantic and the formation of North Atlantic Deep Water (NADW) has ceased (Stocker et al., 1992; Rahmstorf, 1995; Manabe and Stouffer, 1999). As this multiple equilibrium regime seems to be present in many ocean models, it is important to determine whether transitions between the different states can occur due to finite amplitude perturbations.
In a variant of the Stommel-model for which the temperature relaxation is fast, Cessi (1994) studied the transition behavior between the different equilibria.
In this case, there are only three equilibrium states, of which one is unstable. In the deterministic case, she finds that a finite amplitude perturbation of the freshwater flux can shift the system into an alternate state and that the minimum amplitude depends on the duration of the disturbance. Regardless of the duration, however, the amplitude of the disturbance has to exceed a certain value for a transition to occur. Under stochastic white-noise forcing, there are occasional transitions from one equilibrium to another as well as fluctuations around each state.
In Timmermann and Lohmann (2000), the effect of multiplicative noise (through fast fluctuations in the meridional thermal temperature gradient) on the variability in a box model similar to that in Cessi (1994) has been studied. It was found that the stability properties of the thermohaline circulation depend on the noise level. Red noise can introduce new equilibria that do not have a deterministic counterpart.
Another line of studies uses box models that show intrinsic variability because of the existence of an oscillatory mode in the eigenspectrum of the linear operator. Griffies and Tziperman (1995) show that noise is able to excite an otherwise damped eigenmode of the thermohaline circulation. Tziperman and Ioannou (2002) study the non-normal growth of perturbations on the thermally driven state and identify two physical mechanisms associated with the transient amplification of these perturbations.
Stochastic noise can have a significant effect on the mean states of the thermohaline circulation and their stability (Hasselmann, 1976; Palmer, 1995; Velez-Belchi et al., 2001; Tziperman and Ioannou, 2002). Some of these mechanisms are intrinsically linear, such as the effects of non-normal growth considered in Tziperman and Ioannou (2002). Others are essentially nonlinear mechanisms, such as those causing the noise-induced transitions reported in Timmermann and Lohmann (2000).
To study linear amplification mechanisms, the linear singular vector (LSV) method is often used, with main applications to predictability studies (Xue and Zebiak, 1997a,b; Thompson, 1998). Knutti and Stocker (2002), for example, found that the sensitivity of the ocean circulation to perturbations severely limits the predictability of the future thermohaline circulation when approaching the bifurcation point. The LSV approach, however, cannot provide critical boundaries on finite amplitude stability of the thermohaline ocean circulation.
In a system which potentially has multiple equilibria and internal oscillatory modes, its response to a finite amplitude perturbation on a particular steady state is a difficult nonlinear problem. In this paper, we determine the nonlinear stability boundaries of linearly stable thermohaline flow states within a simple box model of the thermohaline circulation. To compute these boundaries, we use the concept of the Conditional Nonlinear Optimal Perturbation (CNOP) and study optimal _nonlinear_ growth over a certain given time \\(\\tau\\). We extend results on linear optimal growth properties of perturbations on both thermally and salinity dominated thermohaline flows to the nonlinear case. We find that there is an asymmetric nonlinear response of these flows with respect to the sign of the finite amplitude freshwater perturbation and describe a physical mechanism that explains this asymmetry.
## 2 Model and methodology
### \(a\). Model
To illustrate the approach, the theory is applied to a 2-box model of the thermohaline circulation (Stommel, 1961). This model consists of an equatorial box and a polar box which contain well-mixed water of different temperatures and salinities due to an equatorial-to-pole gradient in atmospheric surface forcing. Flow between the boxes is assumed proportional to the density difference between the boxes and, with a linear equation of state, related to the temperature and salinity differences in the boxes.
When the balances of heat and salt are nondimensionalized, the governing dimensionless equations (we use the notation in chapter 3 of Dijkstra (2000)) can be written as
\\[\\frac{dT}{dt} = \\eta_{1}-T(1+\\mid T-S\\mid) \\tag{1a}\\] \\[\\frac{dS}{dt} = \\eta_{2}-S(\\eta_{3}+\\mid T-S\\mid) \\tag{1b}\\]
where \\(T=T_{e}-T_{p},\\ S=S_{e}-S_{p}\\) are the dimensionless temperature and salinity difference between the equatorial and polar box and \\(\\Psi=T-S\\) is the dimensionless flow rate. Three parameters appear in the equations (1): the parameter \\(\\eta_{1}\\) measures the strength of the thermal forcing, \\(\\eta_{2}\\) that of the freshwater forcing and \\(\\eta_{3}\\) is the ratio of the relaxation times of temperature and salinity to the surface forcing. Steady states of the equations (1) are indicated with a temperature of \\(\\bar{T}\\), a salinity of \\(\\bar{S}\\) and a flow rate \\(\\bar{\\Psi}=\\bar{T}-\\bar{S}\\). A steady state is called thermally-dominated when \\(\\bar{\\Psi}>0\\), i.e. a negative equatorial-to-pole temperature gradient exists dominating the density. A steady state is called salinity-dominated when \\(\\bar{\\Psi}<0\\), i.e. a negative equatorial-to-pole salinity gradient exists dominating the density.
It is well known that the equations (1) have multiple steady states for certain parameter values. Here, we fix \(\eta_{1}=3.0\), \(\eta_{3}=0.2\) and use \(\eta_{2}\) as the control parameter. The bifurcation diagram for these parameter values is shown in Fig. 1 as a plot of \(\bar{\Psi}\) versus \(\eta_{2}\). Solid curves indicate linearly stable steady states, whereas the states on the dashed curve are unstable. There are thermally-driven (hereafter TH) stable steady states (\(\bar{\Psi}>0\)) and salinity-driven (hereafter SA, i.e., the circulation is salinity-dominated) stable steady states (\(\bar{\Psi}<0\)). The saddle-node bifurcation points occur at \(\eta_{2}=0.600\) and \(\eta_{2}=1.052\), and bound the interval in \(\eta_{2}\) where multiple equilibria occur. Suppose that this bifurcation diagram represents both the present overturning state (on the stable branch with \(\bar{\Psi}>0\)) and the collapsed state (on the stable branch with \(\bar{\Psi}<0\)). To study the nonlinear transition behavior of the thermohaline flows from the TH state to the SA state and vice versa, we consider the evolution of finite amplitude perturbations on the stable states.
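Since the steady states of (1) satisfy a pair of algebraic equations, the branches of Fig. 1 can be traced by root finding. The following is a minimal Python sketch; the initial guesses are assumptions, chosen near the states "A" and "B" quoted in section 3 plus one intended for the middle (unstable) branch:

```python
import numpy as np
from scipy.optimize import fsolve

eta1, eta3 = 3.0, 0.2                 # fixed as in Fig. 1

def f(x, eta2):                       # right-hand side of (1) at a steady state
    T, S = x
    q = abs(T - S)                    # |T - S| = |Psi|
    return [eta1 - T * (1.0 + q), eta2 - S * (eta3 + q)]

guesses = [(1.9, 1.3), (2.7, 2.8), (2.2, 1.9)]   # near TH, SA and middle branches
for eta2 in np.linspace(0.60, 1.05, 10):
    psis = set()
    for g in guesses:
        sol, _, ok, _ = fsolve(f, g, args=(eta2,), full_output=True)
        if ok == 1:
            psis.add(round(sol[0] - sol[1], 3))   # flow rate Psi = T - S
    print(f"eta2 = {eta2:.3f}:  Psi = {sorted(psis)}")
```

For \(\eta_{2}=1.02\) this recovers, among others, the TH state \((\bar{T},\bar{S})=(1.875,1.275)\) used below.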
The nonlinear equation governing the evolution of perturbations can be derived from equation (1). If the steady state \\((\\bar{T},\\bar{S})\\) is given and \\(T^{\\prime}=T-\\bar{T}\\), \\(S^{\\prime}=S-\\bar{S}\\) are the perturbations of temperature and salinity, then it is found that
\\[\\frac{dT^{\\prime}}{dt} = -(2|\\bar{\\Psi}|+1)T^{\\prime}+sign(\\bar{\\Psi})[\\bar{T}\\,S^{\\prime }-\\bar{S}T^{\\prime}-T^{\\prime}(T^{\\prime}-S^{\\prime})] \\tag{2a}\\] \\[\\frac{dS^{\\prime}}{dt} = -(2|\\bar{\\Psi}|+\\eta_{3})S^{\\prime}+sign(\\bar{\\Psi})[\\bar{T}\\,S^ {\\prime}-\\bar{S}T^{\\prime}-S^{\\prime}(T^{\\prime}-S^{\\prime})] \\tag{2b}\\]
where \\(sign(\\bar{\\Psi})\\) is sign of steady flow rate \\(\\bar{\\Psi}\\). If the perturbations are sufficiently small, such that the nonlinear part of the equations (2) can be neglected, we find the tangent linear equation governing the evolution of small perturbations as
\\[\\frac{dT^{\\prime}}{dt} = -(2|\\bar{\\Psi}|+1)T^{\\prime}+sign(\\bar{\\Psi})(\\bar{T}\\,S^{\\prime }-\\bar{S}T^{\\prime}) \\tag{3a}\\] \\[\\frac{dS^{\\prime}}{dt} = -(2|\\bar{\\Psi}|+\\eta_{3})S^{\\prime}+sign(\\bar{\\Psi})(\\bar{T}\\,S^{ \\prime}-\\bar{S}T^{\\prime}) \\tag{3b}\\]
### \\(b\\). Conditional Nonlinear Optimal Perturbation
To study nonlinear mechanisms of amplification, Berloff and Meacham (1996) modified the LSV technique and Mu (2000) proposed the concept of nonlinear singular vectors (NSVs) and nonlinear singular values (NSVAs). These concepts were successfully applied by Mu and Wang (2001) and Durbiano (2001) to study finite amplitude stability of flows in two-dimensional quasi-geostrophic and shallow-water models, respectively. In Mu and Duan (2003), the concept of the conditional nonlinear optimal perturbation (CNOP) was introduced and applied to study the "spring predictability barrier" in the El Nino-Southern Oscillation (ENSO), using a simple equatorial ocean-atmosphere model. The "spring predictability barrier" refers to the dramatic decline of the prediction skill of most ENSO models during the Northern Hemisphere (NH) springtime. The CNOP can also be employed to estimate the prediction errors of an El Nino or a La Nina event (Mu et al., 2003).
As readers may not be familiar with this concept, we give a brief introduction to the CNOP. Consider the nonlinear evolution of initial perturbations governed by (2). In general, assume that the equations governing the evolution of perturbations can be written as:
\\[\\left\\{\\begin{array}{l}\\frac{\\partial\\mathbf{x}}{\\partial t}+F( \\mathbf{x};\\bar{\\mathbf{x}})=0,\\\\ \\mathbf{x}|_{t=0}=\\mathbf{x}_{0},\\end{array}\\right.\\quad \\quad\\mbox{in}\\quad\\Omega\\times[0,t_{e}] \\tag{4}\\]
where \\(t\\) is time, \\(\\mathbf{x}(t)=(x_{1}(t),x_{2}(t), ,x_{n}(t))\\) is the perturbation state vector and \\(F\\) is a nonlinear differentiable operator. Furthermore, \\(\\mathbf{x}_{0}\\) is the initial perturbation, \\(\\bar{\\mathbf{x}}\\) is the basic state, \\((\\mathbf{x},t)\\in\\Omega\\times[0,t_{e}]\\) with \\(\\Omega\\) a domain in \\(R^{n}\\), and \\(t_{e}<+\\infty\\).
Suppose the initial value problem (4) is well-posed and the nonlinear propagator \\(M\\) is defined as the evolution operator of (4) which determines a trajectory from the initial time \\(t=0\\) to time \\(t_{e}\\). Hence, for fixed \\(t_{e}>0\\), the solution \\(\\mathbf{x}(t_{e})=M(\\mathbf{x}_{0};\\bar{\\mathbf{x}} )(t_{e})\\) is well-defined, i.e.
\\[\\mathbf{x}(t_{e})=M(\\mathbf{x}_{0};\\bar{\\mathbf{x} })(t_{e}) \\tag{5}\\]
So \\(\\mathbf{x}(t_{e})\\) describes the evolution of the initial perturbation \\(\\mathbf{x}_{0}\\).
For a chosen norm \\(\\|\\cdot\\|\\) measuring \\(x\\), the perturbation \\(\\mathbf{x}_{0\\delta}\\) is called the Conditional Nonlinear Optimal Perturbation (CNOP) with constraint condition \\(\\|\\mathbf{x}_{0}\\|\\leq\\delta\\), if and only if
\\[J(\\mathbf{x}_{0\\delta})=\\max_{\\|\\mathbf{x}_{0}\\|\\leq\\delta}J( \\mathbf{x}_{0}) \\tag{6}\\]
where
\\[J(\\mathbf{x}_{0})=\\|M(\\mathbf{x}_{0};\\bar{\\mathbf{x} })(t_{e})\\| \\tag{7}\\]
The CNOP is the initial perturbation whose nonlinear evolution attains the maximal value of the functional \(J\) at time \(t_{e}\) under the constraint conditions; in this sense we call it "optimal". The CNOP can be regarded as the most (nonlinearly) unstable initial perturbation superposed on the basic state. With the same constraint conditions, the larger the nonlinear evolution of the CNOP is, the more unstable the basic state is. In general, it is difficult to obtain an analytical expression for the CNOP. Instead we look for the numerical solution, by solving a constrained nonlinear optimization problem.
To calculate the CNOP the norm
\\[\\|\\mathbf{x}_{0}\\|=\\sqrt{(T^{\\prime}_{0})^{2}+(S^{\\prime}_{0})^{2}} \\tag{8}\\]
is used. Using a fourth-order Runge-Kutta scheme with a time step \\(dt=0.001\\), perturbation solutions \\((T^{\\prime},S^{\\prime})\\) are obtained numerically by integrating the model (2) up to a time \\(t_{e}\\) and the magnitude of the perturbation is calculated. Next, the Sequential Quadratic Programming (SQP) method is applied to obtain the CNOP numerically; the SQP method is briefly described in the Appendix.
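A minimal sketch of this procedure is given below. It assumes the TH state "A" of section 3a and uses scipy's SLSQP routine as an off-the-shelf stand-in for the SQP solver outlined in the Appendix; the starting guess near \(\theta\approx 2\) is an assumption made to steer the solver toward the global maximum (in practice several starting points would be tried):

```python
import numpy as np
from scipy.optimize import minimize

Tbar, Sbar, eta3 = 1.875, 1.275, 0.2               # TH state "A" (assumed)
delta, te, dt = 0.2, 2.5, 0.001
sgn, aP = np.sign(Tbar - Sbar), abs(Tbar - Sbar)

def rhs(x):                                         # perturbation equations (2)
    Tp, Sp = x
    common = sgn * (Tbar * Sp - Sbar * Tp)
    return np.array([-(2*aP + 1.0) * Tp + common - sgn * Tp * (Tp - Sp),
                     -(2*aP + eta3) * Sp + common - sgn * Sp * (Tp - Sp)])

def J(x0):                                          # objective (7) in the norm (8)
    x = np.asarray(x0, dtype=float)
    for _ in range(round(te / dt)):                 # fourth-order Runge-Kutta
        k1 = rhs(x); k2 = rhs(x + 0.5*dt*k1)
        k3 = rhs(x + 0.5*dt*k2); k4 = rhs(x + dt*k3)
        x = x + (dt/6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return np.linalg.norm(x)

res = minimize(lambda x0: -J(x0),                   # maximize J <=> minimize -J
               x0=[delta*np.cos(2.0), delta*np.sin(2.0)],
               method='SLSQP',
               constraints={'type': 'ineq',
                            'fun': lambda x0: delta**2 - x0 @ x0})
theta = np.arctan2(res.x[1], res.x[0]) % (2*np.pi)
print(f"CNOP at theta = {theta:.3f}, J = {J(res.x):.4f}")  # cf. theta_3 = 1.979 below
```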
To compare the CNOPs with the LSVs, the latter are also computed using the theory of linear singular vector analysis (Chen et al., 1997). We also use the norm (8) in this analysis and first solve (3) to obtain the linear evolution of initial perturbations. Subsequently, the singular value decomposition (SVD) is used to determine the linear singular vectors (LSVs) of the model.
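Because the tangent linear model (3) is autonomous for this two-variable problem, its propagator over \([0,t_{e}]\) is simply a matrix exponential, and the LSVs follow from one SVD. A sketch, with the same assumed TH state as above:

```python
import numpy as np
from scipy.linalg import expm, svd

Tbar, Sbar, eta3, te = 1.875, 1.275, 0.2, 2.5      # TH state "A" (assumed)
sgn, aP = np.sign(Tbar - Sbar), abs(Tbar - Sbar)

# Coefficient matrix of the autonomous tangent linear model (3):
A = np.array([[-(2*aP + 1.0) - sgn*Sbar,  sgn*Tbar],
              [-sgn*Sbar,                -(2*aP + eta3) + sgn*Tbar]])

M = expm(A * te)                    # propagator over [0, te]
U, s, Vt = svd(M)                   # LSVs are the right singular vectors
theta = np.arctan2(Vt[0, 1], Vt[0, 0]) % (2*np.pi)
# +v and -v are equivalent singular vectors, which is why the LSVs
# come in pairs theta and theta + pi.
print(theta, s[0])
```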
## 3 Stability and sensitivity analysis
In this section, we compute the CNOPs to study the sensitivity of the thermohaline circulation to finite amplitude freshwater perturbations in the two-box model. Two problems are studied: (i) the nonlinear development of the finite amplitude perturbations for fixed parameters in the model and (ii) the nonlinear stability of the steady states as parameters are changed. Both thermally-dominated steady states (\(\bar{\Psi}>0\)) and salinity-dominated ones (\(\bar{\Psi}<0\)) are investigated. The initial perturbation \(\mathbf{x}_{0}\) is written as \(\mathbf{x}_{0}=(T^{\prime}_{0},S^{\prime}_{0})=(\delta\cos\theta, \delta\sin\theta)\), where \(\delta\) is the magnitude of the initial perturbation and \(\theta\) the angle of the initial vector with the x-axis.
### Finite amplitude evolution of the TH state
For the thermally-driven stable steady state, we consider the state \\(\\bar{T}=1.875,\\bar{S}=1.275,\\bar{\\Psi}=0.6\\) (shown as point \"A\" in Fig. 1) with the fixed parameters \\(\\eta_{1}=3.0\\), \\(\\eta_{2}=1.02\\), \\(\\eta_{3}=0.2\\). We choose \\(t_{e}=2.5\\) and use \\(\\delta=0.3\\) as a maximum norm (in the norm (8)) of the perturbations. The time \\(t_{e}\\) is about half the time the solution takes to equilibrate to steady state from a particular initial perturbation. The amplitude \\(\\delta=0.3\\) is about \\(10\\%\\) of the typical amplitude of the steady state of temperature and salinity (\\(\\bar{T},\\bar{S}\\)). For \\(\\theta\\) in the range \\(\\pi/4<\\theta<5\\pi/4\\), the initial perturbation flow has \\(\\Psi^{\\prime}(0)<0\\). As this is typically caused by a freshwater flux perturbation in the polar box, we refer to the perturbation as being of freshwater type. For other angles \\(\\theta\\in[0,2\\pi]\\), the initial perturbation flow has \\(\\Psi^{\\prime}(0)>0\\), which is typically caused by a salt flux perturbation in the polar box and we refer to it as being of salinity type.
Using equations (2) and (3), both CNOPs and LSVs are computed versus the constraint condition \(\delta\), respectively. The numerical results, plotted in Fig. 2, indicate that the CNOPs are located on the circle \(\|\mathbf{x}_{0}\|=\delta\), which is the boundary of the ball \(\|\mathbf{x}_{0}\|\leq\delta\). The directions of the LSVs, which are independent of \(\delta\), have constant values of \(\theta_{1}=1.948\) (dashed line) and \(\theta_{2}=5.089\) (not shown). The value of \(\theta\) for the CNOPs (solid curve) increases monotonically over the \(\delta\) interval 0.01 to 0.3. The difference between CNOPs and LSVs is relatively small when \(\delta\) is small.
Integrating the model (2) with the CNOPs and LSVs as initial conditions, respectively, we obtain their evolutions at time \(t_{e}\), which are denoted as "CNOP-N" and "LSV-N"; these are shown in Fig. 3. For comparison, the linear evolution of the LSVs is also obtained by integrating the model (3) with the LSV as an initial condition; this is denoted as "LSV-L" in Fig. 3. It is clear that the evolution of the CNOPs is nonlinear in \(\delta\), while "LSV-L" only increases linearly. The curve of "LSV-N" lies between "CNOP-N" and "LSV-L", but the difference between "LSV-N" and "CNOP-N" is hardly distinguishable over the whole \(\delta\) interval. Though this difference is not significant for this TH state, it is significant in the corresponding investigation for the SA state below; this is hard to anticipate without explicit calculation.
Note that since our numerical results demonstrate that the CNOPs are all located on the boundary \(\|\mathbf{x}_{0}\|=\delta\), we are able to show the sensitivity of the thermohaline circulation to finite amplitude perturbations of a specific fixed amplitude \(\delta=0.2\). In Fig. 4 the value of \(J\) at \(t_{e}=2.5\) is shown for the linear and the nonlinear evolutions of the initial perturbations obtained by (2) and (3). For the linear case (dashed line in Fig. 4), there are two optimal linear initial perturbations \(\theta_{1}=1.948\) and \(\theta_{2}=5.089\) with the same value of \(J\), \(J(\delta,\theta_{1})=J(\delta,\theta_{2})=0.16484\), which are the LSVs. Note that \(\theta_{2}-\theta_{1}=\pi\), which means that in the linear case perturbations with \(\Psi^{\prime}>0\) and \(\Psi^{\prime}<0\) behave similarly (and hence symmetrically with respect to the sign of \(\Psi^{\prime}\)). For the nonlinear model (solid curve in Fig. 4), there is one global optimal nonlinear initial perturbation with \(\theta_{3}=1.979\), which is the CNOP, and one local optimal nonlinear initial perturbation at \(\theta_{4}=5.058\), with values of \(J(\delta,\theta_{3})=0.22413\) and \(J(\delta,\theta_{4})=0.13052\), respectively. The results in Fig. 4 for \(\delta=0.2\) coincide with the results in Fig. 2.
There is another difference between the linear and nonlinear evolution of the perturbations. When the initial perturbations are freshwater (\\(\\Psi^{\\prime}<0\\)), the nonlinear evolution leads to a larger amplitude than the linear evolution. When the initial perturbations are saline (\\(\\Psi^{\\prime}>0\\)), the nonlinear evolution leads to a smaller amplitude than the linear evolution. For example, the initial perturbations with \\(\\theta_{1}\\) and \\(\\theta_{3}\\) are such that \\(\\Psi^{\\prime}<0\\), while the initial perturbations with \\(\\theta_{2}\\) and \\(\\theta_{4}\\) have \\(\\Psi^{\\prime}>0\\).
The values of \\(J/\\delta\\) obtained by integrating (2) with the CNOPs as initial condition are shown for different \\(\\delta\\) in Fig. 5a. The corresponding evolution of \\(\\Psi\\) is plotted in Fig. 5b. To relate the result in Fig. 5a to previous ones, consider the value of \\(J/\\delta\\) at \\(t=2.5\\) on the curve of \\(\\delta=0.2\\). In Fig. 4, the maximum of \\(J\\) is \\(0.224\\) and hence \\(J/\\delta=0.224/0.2=1.12\\). It follows from the Figs. 5 that for the CNOP with \\(\\delta=0.01\\), the flow rate \\(\\Psi\\) recovers to the steady state \\(\\bar{\\Psi}=0.6\\) rapidly. For the CNOP with a larger initial amplitude (\\(\\delta=0.1,0.2\\)), it takes much longer for the thermohaline circulation to recover to steady state. This is different from a linear analysis where the evolution is the same for all optimal initial perturbations (Tziperman and Ioannou, 2002).
In summary, the results for the TH state show that fresh perturbations, with \(\Psi^{\prime}<0\), are more amplified through nonlinear mechanisms than saline perturbations, with \(\Psi^{\prime}>0\). This is consistent with the notion that perturbations which move the system towards a bifurcation point will be more amplified through nonlinear mechanisms than perturbations that move the system away from a bifurcation point.
### Finite amplitude stability of the SA state
We consider the salinity-dominated SA state (\(\bar{T}=2.674,\bar{S}=2.796,\bar{\Psi}=-0.122\)) for a slightly smaller value of \(\eta_{2}\) than for the thermally-dominated TH state in the previous section (\(\eta_{1}=3.0\), \(\eta_{2}=0.9\), \(\eta_{3}=0.2\)). It is indicated as point "B" in Fig. 1. Again for the time \(t_{e}=2.5\), using the corresponding equations (2) and (3), both CNOPs and LSVs are computed versus the constraint condition \(\delta\), respectively, and the results are plotted in Fig. 6.
Similar to the results in Fig. 2, the directions of the LSVs, which are independent of \(\delta\), have constant values of \(\theta_{1}=2.796\) and \(\theta_{2}=5.938\) (dashed line). The line for \(\theta_{1}\) is similar to that for \(\theta_{2}\) and is not drawn in Fig. 6. The direction of the CNOPs (solid curve) increases monotonically with \(\delta\) varying from 0.01 to 0.125. Then, \(\theta\) drops down to 2.857 at \(\delta=0.125\) and increases slightly with \(\delta\) in the interval \(0.125<\delta<0.17\). After that, \(\theta\) jumps up to 5.195 at \(\delta=0.17\) and increases slightly with \(\delta\) in \(0.17<\delta<0.22\).
Using equations (2) and (3), the evolutions of both the CNOPs and LSVs of the SA state are shown in Fig. 7. All three kinds of evolution are approximately the same when the initial perturbations are relatively small. But for larger \(\delta\) the difference between "LSV-N" and "CNOP-N" is remarkably larger than that of the TH state (Fig. 3). The values of \(J\) of "CNOP-N" are always larger than those of both "LSV-L" and "LSV-N" for \(\delta>0.17\).
The values of \\(J\\) at a time \\(t_{e}=2.5\\) for all \\(\\theta\\) show (Fig. 8a) two optimal linear initial perturbations \\(\\theta_{1}=2.796\\) and \\(\\theta_{2}=5.938\\) with \\(J(\\delta,\\theta_{1})=J(\\delta,\\theta_{2})=0.0526\\). Again, because \\(\\theta_{2}-\\theta_{1}=\\pi\\) there is the symmetry in response with respect to the sign of \\(\\Psi^{\\prime}\\). For nonlinear evolutions, there is one globally optimal initial perturbation at \\(\\theta_{3}=5.246\\) (the CNOP) and two locally optimal initial perturbations at \\(\\theta_{4}=0.251\\) and \\(\\theta_{5}=2.890\\), with J-values of \\(J(\\delta,\\theta_{3})=0.0963\\)\\(J(\\delta,\\theta_{4})=0.0432\\) and \\(J(\\delta,\\theta_{5})=0.0503\\), respectively. The initial perturbations with \\(\\theta_{1}\\) and \\(\\theta_{5}\\) are of freshwater type (\\(\\Psi^{\\prime}<0\\)), while the initial perturbations of \\(\\theta_{2}\\), \\(\\theta_{3}\\) and \\(\\theta_{4}\\) are of salinity type (\\(\\Psi^{\\prime}>0\\)).
To understand the difference between the maxima located at \\(\\theta_{3}\\), \\(\\theta_{4}\\) and \\(\\theta_{5}\\), a contour graph of \\(J(\\theta,\\delta)\\) is drawn in Fig. 8b. It is clear from Fig. 8b that there are three groups of local maxima which are indicated by the dashed lines. When \\(\\delta<0.125\\) there are only two local maxima and the CNOP is located in the regime \\(5\\pi/4<\\theta<2\\pi\\). When \\(0.125<\\delta<0.17\\), the CNOP jumps from the interval \\(5\\pi/4<\\theta<2\\pi\\) to the interval \\(3\\pi/4<\\theta<5\\pi/4\\). This coincides with the jumping behavior of \\(\\theta\\) in Fig. 6. When \\(\\delta>0.17\\), there are three local maxima and the CNOP jumps from the interval \\(3\\pi/4<\\theta<5\\pi/4\\) to a new interval \\(5<\\theta<6\\). Both jumps are also shown in Fig. 6 and \\(J\\) has a remarkable increase after the second jump (Fig. 7).
Also for the SA state, the value of \\(J/\\delta\\) along trajectories obtained by integrating equations (2) using the CNOPs as initial conditions are shown for different \\(\\delta\\) in Fig. 9a. The corresponding evolution of \\(\\Psi\\) is plotted in Fig. 9b. For \\(t=2.5\\) and \\(\\delta=0.2\\), the value of \\(J/\\delta=0.0963/0.2=0.48\\), where \\(0.0963\\) is the maximum in Fig. 8a. The flow rate \\(\\Psi\\) recovers to the steady state (whose value is \\(-0.122\\)) shortly after being disturbed with a small amplitude CNOP (\\(\\delta=0.01\\)). It takes much longer for the thermohaline circulation to recover to the steady state after being disturbed with a larger amplitude (\\(\\delta=0.1,0.2\\)), respectively. The larger the CNOP is, the larger the transient effect. In contrast to the TH state, there is now an oscillatory attraction to the SA state, already described in Stommel (1961).
In both the salinity-dominated SA state and the thermally-dominated TH state, the CNOP always moves the system towards the bifurcation point. The SA state (TH state) has an asymmetry in the nonlinear amplification of disturbances, with a larger amplification for those with \(\Psi^{\prime}>0\) (\(\Psi^{\prime}<0\)).
### Sensitivity along the bifurcation diagram
Even if a TH or SA state is linearly stable, it can become nonlinearly unstable due to finite amplitude perturbations. The methodology of CNOP provides a means to assess the nonlinear stability thresholds of the thermohaline flows; here this is shown for the two-box model (2). To this end, we compute the CNOPs under different \(\delta\) constraints for linearly stable TH and SA states along the bifurcation diagram in Fig. 1.
Along the TH branch, we vary \(\eta_{2}\) from 1.043 to 1.046 with step 0.001 and thereby approach the saddle-node bifurcation (Fig. 1). For each value of \(\eta_{2}\), the CNOPs are obtained under the constraint that the magnitude of initial perturbations is less than 0.2 (\(\delta=0.2\)) and again the time \(t_{e}=2.5\). The trajectories of the CNOPs are calculated by integrating the model (2) (Fig. 10a). Next, the corresponding flow rates \(\Psi\) are drawn in Fig. 10b. Both figures indicate that the CNOPs damp after a while for the steady states labelled \(A_{1}\), \(A_{2}\) and \(A_{3}\). While these three states are consequently nonlinearly stable, the CNOP for steady state \(A_{4}\) (\(\eta_{2}=1.046\)) increases in time, which implies that this steady state is nonlinearly unstable (although it is linearly stable) to perturbations with \(\delta=0.2\).
From the above results, it follows that for each value of \(\eta_{2}\), in the multiple equilibria regime of Fig. 1, a critical value of \(\delta\), say \(\delta_{c}\), must exist such that the TH state is nonlinearly unstable. \(\delta_{c}\) is defined as the smallest magnitude of a finite amplitude perturbation which induces a transition from the TH state to the SA state. The larger the value of \(\delta_{c}\), the more stable the steady state is. Using the CNOP method, the values of \(\delta_{c}\) can be computed and the results for the TH states (from \(\eta_{2}=0.95\) up to the saddle-node bifurcation at \(\eta_{2}=1.052\)) are shown in Fig. 11. The curve separates the plane into two parts. For the regime under the curve the steady state is nonlinearly stable and for the regime above the curve it is nonlinearly unstable. As the bifurcation point \(Q\) in Fig. 1 is approached, \(\delta_{c}\) decreases more and more quickly, reducing sharply to zero at the bifurcation point. This explains how the steady states lose their stability when the bifurcation point \(Q\) is reached.
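A rough sketch of how such a threshold search can be organized is given below: bisection on \(\delta\), with a coarse scan over directions \(\theta\) used as a cheap surrogate for embedding the full CNOP optimization inside the loop (the integration time, the number of probed directions and the bisection bracket are all assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

eta1, eta3, eta2 = 3.0, 0.2, 1.02        # a TH regime inside the bistable window
Tbar, Sbar = 1.875, 1.275                # the corresponding TH steady state

def model(t, x):                         # full model (1)
    T, S = x
    q = abs(T - S)
    return [eta1 - T*(1.0 + q), eta2 - S*(eta3 + q)]

def transition_occurs(delta, nth=32):    # coarse surrogate for the CNOP search
    for th in np.linspace(0.0, 2*np.pi, nth, endpoint=False):
        x0 = [Tbar + delta*np.cos(th), Sbar + delta*np.sin(th)]
        T, S = solve_ivp(model, (0.0, 100.0), x0, rtol=1e-8).y[:, -1]
        if T - S < 0.0:                  # settled onto the SA branch
            return True
    return False

lo, hi = 0.0, 0.6                        # assumed bisection bracket
for _ in range(12):
    mid = 0.5*(lo + hi)
    lo, hi = (lo, mid) if transition_occurs(mid) else (mid, hi)
print(f"delta_c ~ {hi:.3f}")
```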
The same calculations are performed for steady states on the SA branch, when \\(\\eta_{2}\\) varies from 0.75 to the value 0.60 at the saddle-node bifurcation. The CNOPs are obtained under the constraint that the magnitude of initial perturbations is less than 0.1 (\\(\\delta=0.1\\)) with time \\(t_{e}=2.5\\). The trajectories of the CNOPs at each steady state (of which \\(J\\) and \\(\\Psi\\) are plotted in Fig. 12) indicate that the CNOPs damp for the steady states labelled \\(B_{1}\\), \\(B_{2}\\) and \\(B_{3}\\). Although there is oscillatory behavior, these states are nonlinearly stable. However, the evolution of CNOP for steady state \\(B_{4}\\) (\\(\\eta_{2}=0.70\\)) increases, which implies that the SA state is nonlinearly unstable to this finite amplitude perturbation (although it is linearly stable).
The critical boundaries \(\delta_{c}\) for the SA states (Fig. 13) show that \(\delta_{c}\) decreases monotonically with \(\eta_{2}\) and becomes zero at the saddle-node bifurcation (\(\eta_{2}=0.6\)). Similar to Fig. 11, the curve separates the plane into two parts. For the regime under the curve the steady state is nonlinearly stable and for the regime above the curve it is nonlinearly unstable. When \(\eta_{2}\) decreases from 1.0 to 0.6, the SA steady states approach the bifurcation point \(P\) in Fig. 1, and the critical value \(\delta_{c}\) tends to zero. This explains how the steady states lose their stability at the bifurcation point \(P\).
## 4 Summary and Discussion
Within a simple two-box model, we have addressed the sensitivity and nonlinear stability of (linearly stable) steady states of the thermohaline circulation. One of the remarkable results obtained by the CNOP approach is the asymmetry in the nonlinear amplification of perturbations with \\(\\Psi^{\\prime}<0\\) (interpreted as a freshwater perturbation in the northern North Atlantic) and \\(\\Psi^{\\prime}>0\\) (interpreted as a salt perturbation in the northern North Atlantic).
When we use LSV analysis, there are two singular vectors \(x_{1}\) and \(-x_{1}\) that correspond to one singular value \(\sigma_{1}\) (see Fig. 4 and Fig. 8a). If \(x_{1}\) is a freshwater type perturbation (\(\Psi^{\prime}<0\)), \(-x_{1}\) must be a salinity type perturbation (\(\Psi^{\prime}>0\)). The conclusion from the linear analysis (using LSV) is that the thermohaline circulation is equally sensitive to either freshwater or salinity entering the northern North Atlantic. However, the nonlinear analysis (using CNOP) clearly reveals a difference in the response of the system to the two types of perturbations.
The asymmetry can be understood by considering the nonlinear evolution of perturbations in the two-box model. For the TH state, according to (2), the flow rate perturbation \\(\\Psi^{\\prime}=T^{\\prime}-S^{\\prime}\\) satisfies
\\[\\frac{d\\Psi^{\\prime}}{dt}=(2\\bar{S}-2\\bar{T}-1)T^{\\prime}+(2\\bar{T}-2\\bar{S}+ \\eta_{3})S^{\\prime}-\\Psi^{\\prime 2} \\tag{9}\\]
Integrating the above equation, we find
\\[\\Psi^{\\prime}(t)=\\Psi^{\\prime}(0)+\\int_{0}^{t}L(T^{\\prime},S^{\\prime})d\\tau- \\int_{0}^{t}\\Psi^{\\prime 2}d\\tau \\tag{10}\\]
where \\(L(T^{\\prime},S^{\\prime})\\) is the linear part of (9). It is well known that the two linear terms in (9) determine the linear stability of the steady state (Stommel, 1961).
For an initial perturbation \\(\\Psi^{\\prime}(0)<0\\) (freshwater type), the nonlinear term is always negative and the freshwater perturbation is amplified. This is a positive feedback and the stronger the freshwater perturbation, the stronger the nonlinear feedback destabilizing the TH steady state. A perturbation \\(\\Psi^{\\prime}(0)>0\\) (salinity type) is damped by the negative definite nonlinear term. This is a negative feedback and the stronger the salinity perturbation, the stronger the nonlinear feedback stabilizing the TH state. Nonlinear mechanisms hence make the TH steady state more stable to salinity perturbations.
This explains the results in Fig. 4. For the freshwater type initial perturbations (\\(\\Psi^{\\prime}<0\\)), the nonlinear evolution of the initial perturbation of (2) is larger than the linear evolution of the initial perturbation of (3). For the salinity type initial perturbations (\\(\\Psi^{\\prime}>0\\)), the nonlinear evolution is smaller than the linear evolution. In general, the nonlinear term yields positive (negative) feedback for negative (positive) \\(\\Psi^{\\prime}\\) in the case of thermally-dominated (TH) steady states.
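This sign dependence is easily verified numerically: evolving under (2) two perturbations of equal norm but opposite sign of \(\Psi^{\prime}(0)\) (TH state "A" assumed again) gives a visibly larger final amplitude for the freshwater-type disturbance:

```python
import numpy as np
from scipy.integrate import solve_ivp

Tbar, Sbar, eta3 = 1.875, 1.275, 0.2     # TH state "A", so sign(Psibar) = +1
aP = Tbar - Sbar

def pert(t, x):                          # perturbation equations (2), TH case
    Tp, Sp = x
    common = Tbar*Sp - Sbar*Tp
    return [-(2*aP + 1.0)*Tp + common - Tp*(Tp - Sp),
            -(2*aP + eta3)*Sp + common - Sp*(Tp - Sp)]

for x0, label in (([-0.1,  0.1], "freshwater, Psi'(0) = -0.2"),
                  ([ 0.1, -0.1], "salinity,   Psi'(0) = +0.2")):
    xf = solve_ivp(pert, (0.0, 2.5), x0, rtol=1e-8).y[:, -1]
    print(label, "-> |x(te)| =", round(float(np.linalg.norm(xf)), 4))
```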
On the other hand, when the basic steady flow is a SA state (\\(\\bar{\\Psi}<0\\)), then we have
\\[\\frac{d\\Psi^{\\prime}}{dt}=(2\\bar{T}-2\\bar{S}-1)T^{\\prime}+(2\\bar{S}-2\\bar{T}+ \\eta_{3})S^{\\prime}+\\Psi^{\\prime 2} \\tag{11}\\]
Similarly, we have,
\\[\\Psi^{\\prime}(t)=\\Psi^{\\prime}(0)+\\int_{0}^{t}L(T^{\\prime},S^{\\prime})d\\tau+ \\int_{0}^{t}\\Psi^{\\prime 2}d\\tau \\tag{12}\\]
Hence, due to nonlinear effects the SA steady state becomes more unstable (stable) to disturbances \\(\\Psi^{\\prime}>0\\) (\\(\\Psi^{\\prime}<0\\)) which explains the results in Fig. 8a. In general, the nonlinear term yields positive (negative) feedback for positive (negative) \\(\\Psi^{\\prime}\\) in the case of salinity-dominated (SA) steady states.
The physical mechanism behind the loss of stability of the TH state is often discussed in terms of the salt-advection feedback (Marotzke, 1995). A freshwater (salt) perturbation in the northern North Atlantic decreases (increases) the northward circulation and hence decreases (increases) the northward salt transport. The salt-advection feedback is independent of the sign of the perturbation of the flow rate \\(\\Psi^{\\prime}\\). In contrast, the nonlinear instability mechanism of the thermohaline circulation depends on the sign of the perturbation of the flow rate \\(\\Psi^{\\prime}\\) as discussed above.
The CNOP approach also allows us to determine the critical values of the finite amplitude perturbations (i.e. \\(\\delta_{c}\\)) at which the nonlinearly induced transitions can occur. The techniques are currently being generalized to be able to apply them to models of the thermohaline circulation with more degrees of freedom. The aim is to tackle these problems eventually in global ocean circulation models. When applied to the latter models, the approach may provide quantitative bounds on perturbations of the present thermohaline flow such that nonlinear instability can occur.
_Acknowledgments:_ This work was supported by the National Natural Scientific Foundation of China (Nos. 40233029 and 40221503), and KZCX2-208 of the Chinese Academy of Sciences. It was initiated during a visit of HD to Beijing in the Summer of 2002 which was partially supported from a PIONIER grant from the Netherlands Organization of Scientific Research (N.W.O.). We also appreciate the valuable suggestions by anonymous reviewers.
**Appendix: The SQP method**
The constrained nonlinear optimization problem considered in this paper, after discretization and proper transformation of the objective function \\(F\\), can be written in the form
\\[\\min_{x\\in R^{n}}F(x),\\]
subject to
\\[c_{i}(x)\\leq 0,\\quad\\mbox{for }i=1,2,3,\\cdots,n,\\]
where the \\(c_{i}\\) are constraint functions. It is assumed that first derivatives of the problem are known explicitly, i.e., at any point \\(x\\), it is possible to compute the gradient \\(\\bigtriangledown F(x)\\) of \\(F\\) and the Jacobian \\(J(x)=\\frac{\\partial(c_{1},c_{2},\\cdots,c_{n})}{\\partial(x_{1},x_{2},\\cdots,x_{n})}\\) of the constraint functions \\(c_{i}\\).
The SQP method is an iterative method which solves a quadratic programming (QP) subproblem at each iteration and it involves outer and inner iterations. The outer iterations generate a sequence of iterates \((x^{k},\lambda^{k})\) that converges to \((x^{*},\lambda^{*})\), where \(x^{*}\) and \(\lambda^{*}\) are respectively a constrained minimizer and the corresponding Lagrange multipliers. At each iterate, a QP subproblem is used to generate a search direction towards the next iterate \((x^{k+1},\lambda^{k+1})\). Solving such a subproblem is itself an iterative procedure, which is therefore regarded as an inner iteration of an SQP method. The following is an outline of the SQP algorithm used in this paper.
Step 1. Given a starting iterate \\((x^{0},\\lambda^{0})\\) and an initial approximate Hessian \\(H^{0}\\), set \\(k=0\\).
Step 2. Minor iteration. Solve for \(d^{k}\) via the following QP subproblem.
\[\min_{d}\left([\nabla F(x^{k})]^{\top}d^{k}+\frac{1}{2}(d^{k\top}H^{k}d^{k})\right),\]
subject to
\[c(x^{k})+[\nabla c(x^{k})]^{\top}d^{k}\leq 0,\]
where \\(d^{k}\\) is a direction of descent for the objective function.
Step 3. Check if \((x^{k},\lambda^{k})\) satisfies the convergence criterion; if not, set \(x^{k+1}=x^{k}+\alpha d^{k}\), where \(\alpha\leq 1\). The multiplier \(\lambda^{k+1}\) is likewise determined by \(d^{k}\), which is handled automatically in the solver (Barclay et al., 1997). Go to Step 4.
Step 4. Update the Hessian Lagrangian by using the BFGS quasi-Newton formula (Liu and Nocedal, 1989). Let \(s^{k}=x^{k+1}-x^{k}\) and \(y^{k}=\nabla L(x^{k+1},\lambda^{k+1})-\nabla L(x^{k},\lambda^{k})\), where \(\nabla L=\nabla F+\nabla c\,\lambda\). The new Hessian Lagrangian, \(H^{k+1}\), can be obtained by calculating
\[H^{k+1}=H^{k}-\frac{H^{k}s^{k}s^{k\top}H^{k\top}}{s^{k\top}H^{k}s^{k}}+\frac{y^{k}y^{k\top}}{y^{k\top}s^{k}}.\]
Then set \(k=k+1\) and go to Step 2.
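For reference, Step 4 is a one-line update in code (a sketch; \(H\) is assumed symmetric and the curvature condition \(y^{\top}s>0\) is assumed to hold, as a suitable line search is meant to ensure):

```python
import numpy as np

def bfgs_update(H, s, y):
    """One BFGS update of the approximate Hessian H, as in Step 4."""
    Hs = H @ s
    return H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / (y @ s)
```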
In the SQP algorithm, the definition of the QP Hessian Lagrangian \(H^{k}\) is crucial to the success of an SQP solver. In Gill et al. (1997), \(H^{k}\) is a limited-memory quasi-Newton approximation to \(G=\bigtriangledown^{2}L\), the Hessian of the modified Lagrangian. Another possibility is to define \(H^{k}\) as a positive-definite matrix related to a finite-difference approximation to \(G\) (Barclay et al., 1997). In this paper, we adopt the former, which has been shown to be efficient for the nonlinearly constrained optimization problem (Mu and Duan, 2003).
## References
* Barclay et al. (1997) Barclay, A., P. E. Gill, and J. B. Rosen, 1997: SQP methods and their application to numerical optimal control. Technical Report NA 97-3, Dept of Mathematics, UCSD.
* Berloff and Meacham (1996) Berloff, P. S. and S. P. Meacham, 1996: Constructing fast-growing perturbations for the nonlinear regime. _J. Atmos. Sci._, **53**, 2838-2851.
* Cessi (1994) Cessi, P., 1994: A simple box model of stochastically forced thermohaline flow. _J. Phys. Oceanogr._, **24**, 1911-1920.
* Chen et al. (1997) Chen, Y. Q., D. S. Battisti, T. N. Palmer, J. Barsugli, and E. S. Sarachik, 1997: A study of the predictability of tropical Pacific SST in a coupled atmosphere-ocean model using singular vector analysis: The role of the annual cycle and the ENSO cycle. _Monthly Weather Review_, **125**, 831-845.
* Dijkstra (2000) Dijkstra, H. A., 2000: _Nonlinear Physical Oceanography: A Dynamical Systems Approach to the Large Scale Ocean Circulation and El Nino_. Kluwer Academic Publishers, Dordrecht, the Netherlands.
* Durbiano (2001) Durbiano, S., 2001: _Vecteurs caracteristiques de modeles oceaniques pour la reduction d'ordre et assimilation de donnees_. PhD thesis, Universite Joseph Fourier, Grenoble, France.
* Gill et al. (1997) Gill, P. E., W. Murray, and M. A. Saunders, 1997: SNOPT: An SQP algorithm for large-scale constrained optimization. Technical Report NA 97-2, Dept of Mathematics, UCSD.
* Griffies and Tziperman (1995) Griffies, S. M. and E. Tziperman, 1995: A linear thermohaline oscillator driven by stochastic atmospheric forcing. _J. Climate_, **8**, 2440-2453.
* Hasselmann (1976) Hasselmann, K., 1976: Stochastic climate models. I: Theory. _Tellus_, **28**, 473-485.
* Knutti and Stocker (2002) Knutti, R. and T. F. Stocker, 2002: Limited predictability of the future thermohaline circulation close to an instability threshold. _J. Climate_, **15**, 179-186.
* Liu and Nocedal (1989) Liu, D. C. and J. Nocedal, 1989: On the limited memory BFGS method for large scale optimization. _Mathematical Programming_, **45**, 503-528.
* Manabe and Stouffer (1999) Manabe, S. and R. J. Stouffer, 1999: Are two modes of thermohaline circulation stable? _Tellus_, **51A**, 400-411.
* Marotzke (1995) Marotzke, J., 1995: Analysis of thermohaline feedbacks. Technical Report No.39, Center for Global Change Science, M.I.T. Cambridge.
* McAvaney (2001) McAvaney, B., 2001: Chapter 8: Model evaluation. In _Climate Change 2001: The Scientific Basis_, Houghton, J. T., Ding, Y., Griggs, D. J., Noguer, M., van der Linden, P. J., and Xiaosu, D., editors. Cambridge University Press, 225-256.
* Mu et al. (2003) Mu, M., W. Duan, and B. Wang, 2003: Conditional nonlinear optimal perturbation and its applications. _Nonlinear Proc. Geophysics_, **10**, 493-501.
* Mu and Duan (2003) Mu, M. and W. Duan, 2003: A new approach to study ENSO predictability: conditional nonlinear optimal perturbation. _Chinese Science Bulletin_, **48**, 1045-1047.
* Mu and Wang (2001) Mu, M. and J. Wang, 2001: Nonlinear fastest growing perturbation and the first kind of predictability. _Science in China_, **44D**, 1128-1139.
* Mu (2000) Mu, M., 2000: Nonlinear singular vectors and nonlinear singular values. _Science in China_, **43D**, 375-385.
* Palmer (1995) Palmer, T. N., 1995: A nonlinear dynamical perspective on model error: A proposal for nonlocal stochastic-dynamic parameterisation in weather and climate prediction models. _Quart. J. Roy. Meteor. Soc._, **127**, 279-304.
* Rahmstorf (1995) Rahmstorf, S., 1995: Bifurcations of the Atlantic thermohaline circulation in response to changes in the hydrological cycle. _Nature_, **378**, 145-149.
* Stocker et al. (1992) Stocker, T. F., D. G. Wright, and L. A. Mysak, 1992: A zonally averaged, coupled ocean-atmosphere model for paleoclimate studies. _J. Climate_, **5**, 773-797.
* Stommel (1961) Stommel, H., 1961: Thermohaline convection with two stable regimes of flow. _Tellus_, **13**, 224-230.
* Thompson (1998) Thompson, C. J., 1998: Initial conditions for optimal growth in a coupled ocean-atmosphere model of ENSO. _J. Atmos. Sci._, **55**, 537-557.
* Timmermann and Lohmann (2000) Timmermann, A. and G. Lohmann, 2000: Noise-induced transitions in a simplified model of the thermohaline circulation. _J. Phys. Oceanogr._, **30**, 1891-1900.
* Tziperman and Ioannou (2002) Tziperman, E. and P. J. Ioannou, 2002: Transient growth and optimal excitation of thermohaline variability. _J. Phys. Oceanogr._, **32**, 3427-3435.
* Velez-Belchi et al. (2001) Velez-Belchi, P., A. Alvarez, P. Colet, J. Tintore, and R. L. Haney, 2001: Stochastic resonance in the thermohaline circulation. _Geophys. Res. Letters_, **28**, 2053-2056.
* Xue and Zebiak (1997a) Xue, Y., M. A. Cane, and S. E. Zebiak, 1997a: Predictability of a coupled model of ENSO using singular vector analysis. Part I: Optimal growth in seasonal background and ENSO cycles. _Monthly Weather Review_, **125**, 2043-2056.
* Xue and Zebiak (1997b) Xue, Y., M. A. Cane, and S. E. Zebiak, 1997b: Predictability of a coupled model of ENSO using singular vector analysis. Part II: Optimal growth and forecast skill. _Monthly Weather Review_, **125**, 2057-2073.
**Captions to the Figures**
Fig. 1
The bifurcation diagram of the Stommel two-box model for \\(\\eta_{1}=3.0\\) and \\(\\eta_{3}=0.2\\). The points labelled A and B represent the thermally-driven steady state and salinity-driven steady state, respectively, considered in section 3. The points labelled P and Q represent the bifurcation points of the model, respectively. Solid curves indicate linearly stable steady states, whereas the states on the dashed curve are unstable. There are thermally-driven (TH) stable steady states (\\(\\bar{\\Psi}>0\\)) and salinity-driven (SA, ie., the circulation is salinity-dominated) stable steady states (\\(\\bar{\\Psi}<0\\)).
Fig. 2
The values of \\(\\theta\\) for both the linear singular vectors (LSV, dashed line) and for the Conditional Nonlinear Optimal Perturbation (CNOP, solid line) of the thermally-driven state under the conditions \\(\\delta\\leq 0.3\\) and \\(t_{e}=2.5\\).
Fig. 3
Values of \\(J\\) at the endpoints of trajectories at time \\(t_{e}=2.5\\) for different values of \\(\\delta\\). These trajectories started either with Conditional Nonlinear Optimal Perturbation (CNOP-N, solid curve) and linear singular vectors (LSV-N,dash-dotted curve, hardly distinguishable from the solid curve) perturbing the thermally-driven state. Also included are the endpoints when the tangent linear model is integrated with the linear singular vectors as initial perturbation (LSV-L, dashed curve ).
Fig. 4
The magnitude of perturbations \(J\) obtained at \(t_{e}=2.5\) for the evolutions of perturbations of the thermally-driven state in the tangent linear model and nonlinear model. The initial perturbations have the form \((T^{\prime}(0),S^{\prime}(0))=(\delta\cos\theta,\delta\sin\theta)\) with \(\delta=0.2\).
Fig. 5
Values of (a) perturbation growth \\(J/\\delta\\) and (b) flow stream function \\(\\Psi\\) along the trajectories computed with Conditional Nonlinear Optimal Perturbation (CNOP, solid curve) initial conditions superposed on the thermally-driven state. The different curves are for different values of \\(\\delta\\).
Fig. 6
The values of \\(\\theta\\) for both the linear singular vectors (LSV, dashed line) and for the Conditional Nonlinear Optimal Perturbations (CNOP) of the salinity-driven state under the conditions \\(\\delta\\leq 0.22\\) and \\(t_{e}=2.5\\).
Fig. 7
The magnitude of perturbation \\(J\\) at the endpoints of trajectories at time \\(t_{e}=2.5\\) for different values of \\(\\delta\\). These trajectories started either with Conditional Nonlinear Optimal Perturbation (CNOP-N, solid curve) and linear singular vectors (LSV-N, dash-dotted line) perturbing the salinity-driven state. Also included are the endpoints when the tangent linear model is integrated with the linear singular vectors as initial perturbation (LSV-L, dashed line).
Fig. 8
(a) Values of perturbation magnitude \(J\) obtained at \(t_{e}=2.5\) for the evolutions of perturbations of the salinity-driven state in the tangent linear model and nonlinear model. The initial perturbations have the form \((T^{\prime}(0),S^{\prime}(0))=(\delta\cos\theta,\delta\sin\theta)\) with \(\delta=0.2\). (b) The contour plot of \(J(\theta,\delta)\), with contour interval of \(0.02\) from \(J=0.02\) to \(J=0.08\) (solid curve and dotted curve). The three groups of local maxima are indicated by the dashed curves.
Fig. 9
Values of (a) perturbation growth \(J/\delta\) and (b) flow stream function \(\Psi\) along the trajectories computed with Conditional Nonlinear Optimal Perturbation (CNOP) initial conditions superposed on the salinity-driven state. The different curves are for different values of \(\delta\).
Fig. 10
Values of (a) perturbation growth \(J\) and (b) flow stream function \(\Psi\) along the trajectories computed with Conditional Nonlinear Optimal Perturbation (CNOP) initial conditions superposed on the thermally-driven state for different values of \(\eta_{2}\).
Fig. 11
The critical value of \\(\\delta\\) (\\(\\delta_{c}\\)) versus the parameter controlling the thermally-driven state near the saddle-node bifurcation at \\(\\eta_{2}=1.05\\).
Fig. 12
Values of (a) \\(J\\) and (b) \\(\\Psi\\) along the trajectories computed with Conditional Nonlinear Optimal Perturbation (CNOP) initial conditions superposed on the salinity-driven state for different values of \\(\\eta_{2}\\).
Fig. 13
The critical value of \\(\\delta\\) (\\(\\delta_{c}\\)) versus the parameter controlling the salinity-driven state near the saddle-node bifurcation at \\(\\eta_{2}=0.6\\).
**The modeling of the ENSO events with the help of a simple model**
Vladimir N. Stepanov
Proudman Oceanographic Laboratory, Merseyside, England
February 20, 2007
## 1 Introduction
From a coupled global climate model simulation, Alvarez-Garcia et al. (2006) have identified three classes of ENSO events. The first two classes are characterized by well-known paradigms: the first is the delayed oscillator, where equatorial coupled waves produce a delayed negative feedback to the warm sea surface temperature (SST) anomalies in the tropics (e.g., see Suarez and Schopf 1988); the second is the recharge oscillator, where fast wave processes adjust the thermocline tilt as a result of wind stress variability (see, e.g., Jin 1996). However, Kessler (2002) argued that ENSO events are like disturbances with respect to a stable basic state, requiring an initiating impulse not contained in the dynamics of the cycle itself, and that the initiation might be carried out by some other climate variation. The third class of ENSO events identified by Alvarez-Garcia et al. (2006) is characterized by a relatively quick development (less than 9 months after the changes of the above-mentioned plausible exciting forcings in the tropics) and supports the conclusion of Kessler (2002) about some initiating impulse. This article considers the variability arising from the joint effect of bottom topography, coastlines and atmospheric conditions over the Antarctic Circumpolar Current (ACC) as an amplifier/trigger for ENSO events.
Numerical experiments with a barotropic ocean model (Stepanov and Hughes 2004), forced by 6-hourly global atmospheric winds and pressures from the European Centre for Medium-Range Weather Forecasts (ECMWF), have demonstrated that changes in wind strength over the ACC, together with the effect of bottom topography, induce pressure/density anomalies in the Southern Ocean (Stepanov and Hughes 2006; Stepanov, submitted manuscript, 2006, see also [http://arxiv.org/abs/physics/0702159](http://arxiv.org/abs/physics/0702159); henceforth S06). These anomalies lead to short-period variations of meridional flows in the Pacific sector of the Southern Ocean to the north of 47\({}^{\rm o}\)S, which result in a mean value of the daily water mass variability _M(t)_ in the Pacific of about 500 Gt (Gigatonnes) for a summer period preceding the cold or warm ENSO events (although the real water exchange between the Pacific and the Southern Ocean is more than an order of magnitude higher, because of the link between the total meridional flux and the mass flux in the western portion of the Pacific obtained in these numerical experiments). These mass variations are anticorrelated with the strength of the wind over the ACC at the 99% confidence level. All these features are clearly seen from Figure 1, where the transport through Drake Passage and the daily mass variability in the Pacific Ocean, \(M(t)\sim\int_{0}^{t}Q_{P}(t^{\prime})\,dt^{\prime}\), due to meridional transport fluctuations \(Q_{P}\) through the latitude of 40\({}^{\circ}\)S, averaged for July-September from the model's 20-year time series, are shown. There is a strong coincidence between the minima and maxima of this mass variability in the Pacific and the cold and warm ENSO events, respectively, i.e., short-term fluctuations in _M(t)_ are related to the onset of ENSO events (the correlation coefficient between the mean summer mass variability _M(t)_ presented in Fig. 1 and the winter NINO4-index (dashed line on Fig. 1) is 0.84 at the 99% confidence level).
This meridional flux variability can induce short-period density anomalies in the vicinity of the regions of high meridional flux, on time scales of several months, when the effect of barotropic changes has not yet transformed into baroclinic ones. These anomalies can be transferred to low latitudes by the wave mechanism described by Ivchenko et al. 2004, henceforth IZD04, and subsequently interact with the stratification, changing the tropical thermocline tilt, which can amplify an ENSO event. IZD04 showed that signals due to salinity anomalies generated near Antarctica can propagate almost without change of disturbance amplitude in the form of fast-moving barotropic Rossby waves. Such waves propagate from Drake Passage (where the ACC is constricted to its narrowest meridional extent) to the western Pacific and are reflected at the western boundary of the Pacific before moving equatorwards and further northwards along the coastline as coastally trapped Kelvin waves. Such signals propagate from Drake Passage to the equator in only a few weeks and through the equatorial region in a few months (henceforth we denote by T\({}_{\rm B}\) the period needed for anomalies from the Southern Ocean to reach low latitudes). In a more realistic coupled ocean-atmosphere general circulation model, Richardson et al. 2005 observed a similar rapid response of the Pacific to a similar density anomaly in the ACC.
The above described mass variability in the Southern Ocean can significantly influence the tropics. The value of the daily mass variability in the Pacific of about 5000 Gt obtained by S06 (taking into account the link between the total meridional flux and the mass flux in the western portion of the Pacific) gives an estimate of the typical size of the signal arriving in the tropical Pacific. This signal is substantial, as a positive (negative) mass change corresponds to a thermocline elevation (depression) in the tropics of about 50 m over an area of 10 degrees by 100 degrees in a period of 3 months. This signal propagates across the equatorial Pacific on the short time scales of Kelvin waves and influences the tropical SST via the deepening (shallowing) of the thermocline depth. Thus these wave interactions can define the interannual SST fluctuations in the tropics via the variability of dynamics in the Southern Ocean, i.e., as shown by S06, these interannual SST fluctuations are associated with the interannual variability of atmospheric conditions over the ACC in the preceding 4-6 months, which in turn is initiated by the variability in the tropics in the preceding couple of months. S06 proposed a plausible explanation of the coupled interaction of the tropics and high latitudes and their influence on ENSO events, which includes the following processes. Upper ocean warming (cooling) in the tropics (which, e.g., could be associated with a seasonal cycle) leads to enhanced (decreased) heating in the upper troposphere over the tropical ascending region in the Pacific. This means warmer (colder) air is transferred by the Hadley cell from the tropics to the descending regions in the subtropics, which slows down (speeds up) the atmospheric downwelling there and then weakens (strengthens) the wind over the Southern Ocean. As mentioned earlier (due to the anticorrelation between mass variations in the Pacific and the strength of wind over the ACC), weak (strong) wind over the Southern Ocean is associated with equatorward (poleward) mass flux in the Southern Ocean to the north of about 47\({}^{\circ}\)S, which leads to the amplification of a warm (cold) ENSO signal via propagation of pressure/density anomalies from the Southern Ocean to the tropics by means of fast wave processes.
The interaction between the tropics and the Southern Ocean depends on the stochastic processes of ocean-atmosphere interactions in these regions. A substantial role in these stochastic processes, as shown by S06, can be played by the mass flux variability in the Southern Ocean associated with the changes in atmospheric forcings over the ACC; whether the interaction between the tropics and high latitudes leads to an ENSO event or to usual seasonal variability depends on the processes in the Southern Ocean.
Burgers et al. 2005, henceforth BJO05, presented the simplest form of the ENSO recharge oscillator model, which is based on two equations: one for the eastern Pacific sea surface temperature anomaly \(T_{E}\) and the other for the mean equatorial Pacific thermocline depth anomaly \(h\), with the damping on \(T_{E}\) much stronger than the damping on \(h\). The interaction between \(T_{E}\) and \(h\) is characterized by the time delay between the east and the west of the Pacific that is due to both the finite Kelvin wave speed and SST dynamics. In that paper the authors incorporated into the recharge oscillator a parameterization of the fast wave process by which the thermocline tilt adjusts to the wind stress. The parameters of the recharge oscillator model were obtained by two different methods, but both are based on a standard fit that minimizes the rms error between model forecasts for the model variables and the observed values. This fit gives an oscillation period T and decay time \(\gamma^{-1}\) of about 3 and 2 years, respectively, since the observed periods of ENSO lie in the range from 2 to 4 years.
In reality, as described earlier, the density anomalies defining the ocean dynamics in the vicinity of the equator propagate very fast: an equatorial Kelvin wave would take about 2 months to cross the whole Pacific, and therefore any density/pressure disturbances appearing in the western tropics will be revealed in the eastern ones very quickly by means of propagating internal (baroclinic) Kelvin waves (see, e.g., Blaker et al. 2006). Hence we can directly investigate the tropical variability at short periods by using shorter period and decay time parameters than in BJO05, assuming that the propagation of Kelvin waves on short time scales and the associated SST reaction give rise to interannual fluctuations that are only dependent on external forcing describing the variability of dynamics in the Southern Ocean and in the tropics.
## 2 Method and Results
The recharge oscillator of BJO05 is based on two equations, for \(T_{E}\) and \(h\). A third equation, describing the variability of the bottom water thickness anomaly in the tropical western Pacific, \(z\), is added here to parameterize the variability of the thermocline depth \(h\) due to the fluctuations of meridional fluxes in the Southern Ocean. For this case, the equations of the BJO05 system can be rewritten as:
\\[\\mathbf{\\partial_{t}}\\,\\mathbf{T_{E}}=\\mathbf{-2\\gamma}\\,\\mathbf{T_{E}}\\,+\\mathbf{ \\omega_{0}}\\,\\mathbf{h}, \\tag{1}\\] \\[\\mathbf{\\partial_{t}}\\,\\mathbf{h}=\\mathbf{-\\omega_{0}}\\,\\mathbf{T_{E}}\\,-2\\gamma_ {\\mathrm{B}}h+\\omega_{\\mathrm{B}}\\,z,\\] (2) \\[\\partial_{t}\\,z=\\mathbf{-2\\gamma_{\\mathrm{B}}\\,z}+\\mathrm{F_{ex}}, \\tag{3}\\]
where T=2\\(\\pi\\)co-1 and T\\({}_{\\mathrm{B}}\\)=2\\(\\pi\\)\\(\\Omega\\)-1 are the periods, and \\(\\gamma\\)-1 and \\(\\gamma_{\\mathrm{B}}\\)-1 are the decay times describing the processes in the tropics and the middle latitudes respectively; \\(\\omega\\)2=\\(\\omega_{0}\\)2-\\(\\gamma\\)2; \\(\\Omega\\)2=\\(\\omega_{0}\\)2-\\(\\gamma_{\\mathrm{B}}\\)2; and F\\({}_{\\mathrm{ex}}\\) denotes the external forcing. Bold font in (1) - (2) denotes the terms of the original BJO5 model. The terms with subscript \"B\" describe the processes of interaction between the Southern Ocean and the tropics, that are due to fast-moving barotropic Rossby wave processes from the Southern Ocean to low latitudes (see IZD04) and they are added in equations (1)-(3) similar processes for the tropics that will be described below.
The external forcing added in equation (3) is defined as being proportional to the scaled monthly averaged mass variability _M(t)_ in the Pacific Ocean due to meridional transport fluctuations through the latitude of 40\({}^{\rm o}\)S (i.e., \(|M(t)|\leq 1\)) from 1985 to the end of 2004 (S06), with the coefficient \(\rm C_{o}\) (\(\rm F_{ex}=C_{o}\)_M(t)_). In the model this forcing describes the short-period mass variability in the Pacific due to meridional transport fluctuations through the latitude of 40\({}^{\rm o}\)S, which leads to the appearance of cold (warm) temperature anomalies in the Southern Ocean (described by \(\rm F_{ex}\) in (3)) that are transferred to low latitudes by the wave mechanism of IZD04 (with the time scale T\({}_{\rm B}\)). This increases (decreases) the bottom water thickness in the western Pacific, \(z\), which leads to the elevation (depression) of the thermocline in the west equatorial Pacific. This mass surplus (deficit) near the equator begins to disperse eastward as a so-called downwelling (upwelling) Kelvin wave, resulting in deepening (shallowing) of the mean thermocline depth via the last term \(\omega_{\rm B}\,z\) in (2) (similar to the term \(\omega_{0}\,h\) in (1), which describes the dependence of the \(T_{E}\) variability on the mean thermocline depth, with the time scale T). The terms with \(\gamma_{\rm B}\) define damping processes.
Thus, the terms of equations (1)-(3) with subscript "B" properly describe the processes in the model: a high (low) value of \(\rm F_{ex}\) (i.e., of _M(t)_) leads to an increase (decrease) of the variable \(z\), which increases (decreases) \(h\), and \(T_{E}\) increases (decreases) too. The periods and decay times in the experiments were varied across a broad range: from 1 to 10 months for the period and from 1 to 36 months for the decay time.
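For concreteness, the system (1)-(3) can be stepped forward with any standard ODE scheme. The following minimal Python sketch uses the parameter values of experiment E1 quoted below and a synthetic stand-in for the forcing _M(t)_ (the actual model series is not reproduced here); variable names are illustrative only.

```python
import numpy as np

# Forward-Euler integration of the model (1)-(3); time unit = months.
T, T_B = 2.0, 5.0                    # oscillation periods (months)
gam, gam_B = 1/7.0, 1/10.0           # inverse decay times (1/month)
C_o = 1.3
omega0 = np.sqrt((2*np.pi/T)**2 + gam**2)      # from omega^2 = omega0^2 - gamma^2
omegaB = np.sqrt((2*np.pi/T_B)**2 + gam_B**2)  # analogous relation for Omega

dt = 0.05
t = np.arange(0.0, 240.0, dt)        # 20 years
rng = np.random.default_rng(0)
# synthetic stand-in for the scaled forcing, with |M| <= 1 as in the text:
M = np.clip(np.sin(2*np.pi*t/42.0) + 0.3*rng.standard_normal(t.size), -1, 1)

TE = np.zeros(t.size); h = np.zeros(t.size); z = np.zeros(t.size)
for i in range(t.size - 1):
    TE[i+1] = TE[i] + dt*(-2*gam*TE[i] + omega0*h[i])                  # eq. (1)
    h[i+1]  = h[i]  + dt*(-omega0*TE[i] - 2*gam_B*h[i] + omegaB*z[i])  # eq. (2)
    z[i+1]  = z[i]  + dt*(-2*gam_B*z[i] + C_o*M[i])                    # eq. (3)
```

Feeding the actual monthly _M(t)_ series in place of the synthetic one gives the experiments discussed below.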
The SST anomalies averaged over the region between latitudes 5\\({}^{\\circ}\\)S and 5\\({}^{\\circ}\\)N, and between longitudes 160\\({}^{\\circ}\\)E and 150\\({}^{\\circ}\\)W of the Pacific (NINO4-index from [http://climexp.knmi.nl](http://climexp.knmi.nl)) have been used for a comparison with the model \\(T_{E}\\).
All correlation coefficients presented in the paper are statistically significant at the 99% level, as determined via an effective sample size following Bretherton et al. (1999).
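A minimal sketch of this significance test is given below; it assumes the common form of the Bretherton et al. (1999) effective sample size based on the lag-1 autocorrelations of the two series (the exact estimator used here is not specified in the text, so this is illustrative only).

```python
import numpy as np

def lag1_autocorr(x):
    x = x - x.mean()
    return float(x[:-1] @ x[1:]) / float(x @ x)

def effective_sample_size(x, y):
    # N_eff = N (1 - r1 r2) / (1 + r1 r2), with r1, r2 the lag-1
    # autocorrelations of the two series (Bretherton et al. 1999).
    r1, r2 = lag1_autocorr(x), lag1_autocorr(y)
    return len(x) * (1.0 - r1*r2) / (1.0 + r1*r2)

# A correlation r is then tested with a t statistic using N_eff:
#   t = r * sqrt((N_eff - 2) / (1 - r**2)), compared with t_{0.995, N_eff-2}.
```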
The scaled values of _M(t)_ and the NINO4-index for the model period are presented in Figure 2a. The correlation coefficient between the monthly averaged values of _M(t)_ and NINO4 is 0.27. The positions of the major peaks of these curves are consistent, though the _M(t)_ curve displays more high-frequency fluctuations than NINO4.
The equation system (1)-(3) was solved numerically from 1985 onwards with the initial conditions \(T_{E}|_{t=1985}=0\), \(h|_{t=1985}=0\) (henceforth called experiment E1). With model parameters T=2 months, \(\gamma^{-1}=7\) months, T\({}_{\rm B}\)=5 months and \(\gamma_{\rm B}^{-1}=10\) months, oscillations with periods corresponding to ENSO were excited in the system. The value of the parameter \(\rm C_{o}=1.3\) was chosen in the experiment, corresponding to a maximum amplitude of the variability of \(T_{E}\sim 1^{\circ}\)C. The upper solid and the middle lines in Figure 2a correspond to the scaled model SST \(T_{E}\) and thermocline depth \(h\) obtained in this experiment. The oscillation of \(h\) leads that of \(T_{E}\) by about 2 months, and the majority of its variability is due to _M(t)_: the correlation of _M(t)_ with \(h\) is 0.84. The correlation between the model \(T_{E}\) and NINO4 is 0.68. The percentage of variance explained was calculated to be about 43%. Figure 2b shows the winter \(T_{E}\) and the NINO4-index (averaged during the three months from December to February, when the warm or cold ENSO events usually reach the maximum phase of their development) and, for comparison, the preceding scaled summer mass variability _M(t)_ in the Pacific Ocean due to meridional transport fluctuations through the latitude of 40\({}^{\circ}\)S. The latter is from the model's time series (S06), averaged from July to September. The correlation coefficient between the winter \(T_{E}\) and the NINO4-index is 0.72 and the percentage of variance explained is about 48%.
Experiments in which the values of the model parameters were varied show that the model results depend little on the choice of the period \(T_{\rm B}\) or the damping coefficient \(\gamma\), but they are sensitive to a decrease of \(\gamma_{\rm B}^{-1}\) and an increase of T. The correlation between \(T_{E}\) and NINO4 decreases with increasing T (decreasing \(\gamma_{\rm B}^{-1}\)) and drops to about 0.5 when T (\(\gamma_{\rm B}^{-1}\)) equals 6 (5) months. An increase in the parameter \(\gamma^{-1}\) leads to noisier behaviour of the variable \(h\), but these changes are not significant for \(T_{E}\).
It is clearly seen from Figure 2b that the warm ENSO events of 1986-87, 1991-92, 2003 and, partly, 1997, along with the cold ENSO events of 1988-89 and 1998-2000, can be reproduced by this simple model. The warm and cold ENSO events occurred when the maxima and minima of the ocean model's summer meridional flow from the Southern Ocean were observed, so the joint effect of atmospheric conditions over the ACC and bottom topography in the Southern Ocean can be considered as a mechanism amplifying (or perhaps triggering, since there is no other external forcing in the model) ENSO events. However, the warm ENSO event in 1994-95 was missed by the model, which demonstrates that the nature of ENSO events is more complicated than this simple model, owing to the interaction between the ocean and atmosphere over a much broader area.
## 3 The effect of westerly wind variability in the model
It has been established that the onset of ENSO depends on equatorial wind anomalies in the western Pacific during the preceding spring and summer, though these wind anomalies can trigger ENSO when the oceanic conditions in the tropical Pacific are favourable to its development (see, e.g., Lengaigne et al. 2004). It can also be seen from Figure 2 that the short-period meridional mass variability in the Southern Ocean can be considered as such a "favourable" condition for setting up ENSO. Figure 3 shows the correlation between the wind stress zonally averaged over the Pacific from the ECMWF reanalysis, \(<\tau_{\rm x}>\) (June-September average), and the winter NINO4-index (averaged December-February). There are high correlations between ENSO and winds in the tropics, in the Trade Wind region and over the ACC. The interpretation of these high correlations is that weak winds in the southern hemisphere set up warm ENSO events (for clarity the dashed line in Figure 3 represents the scaled profile of the time-averaged \(<\tau_{\rm x}>\)). As mentioned above, there is an anticorrelation between the strength of the wind over the ACC and the variability of the meridional mass fluxes in the Pacific, which in turn is significantly correlated with the winter NINO4-index in the latitude band from 45\({}^{\circ}\) to 35\({}^{\circ}\)S (dotted line in Figure 3). From the value of the correlation coefficient between the mean summer _M(t)_ and the winter NINO4 (\(\sim\)0.8), it can be calculated that the mean summer _M(t)_ describes about half of the winter NINO4-index variance. Thus the variability of the wind over the ACC is taken into account in the ENSO model via _M(t)_. To account for the effect of westerly wind variability in the tropics, the SOI-index will be used in the following experiments.
The SOI-index is defined as the normalized atmospheric pressure difference between Tahiti and Darwin, i.e. the higher the SOI-index, the stronger the tropical wind. There are several slight variations in the SOI values calculated at various centres. In the following experiments the series from 1950 onwards calculated by the method of Ropelewski and Jones (1987) (obtained from the website [http://climexp.knmi.nl](http://climexp.knmi.nl)) and the data by Trenberth ([http://www.cgd.ucar.edu/cas/catalog/climind](http://www.cgd.ucar.edu/cas/catalog/climind), where the standardization is done using the approach outlined by Trenberth (1984) to maximize the signal-to-noise ratio) were used. However, the model results depend only slightly on the choice between these data.
An additional external tropical force \(\rm F_{T}\), which parameterizes the effect of westerly wind variability, was added to the right side of equation (2) of the ENSO model system (1)-(3); it can be written in the form:
\\[\\rm t\\] t \\[\\rm F_{T}\\,(t,\\Delta t_{T})\\] = - \\[\\rm C_{T}\\,/\\Delta t_{T}\\,\\int\\,SOI(t)\\ dt=C_{T}\\,/\\Delta t_{T
This modified equation system (1), (3) and (5)-(6) was solved numerically (henceforth E2) in a similar way to experiment E1. The value of the parameter \(\rm C_{T}=5.1\) was chosen in this experiment, so that the maximum amplitude of the variability in \(T_{E}\) corresponds to 1.6\({}^{\rm o}\)C and the contributions of both external forcings to this variability are comparable (which follows from the correlation coefficient between _M(t)_ and the NINO4-index). The correlation between SOI\({}^{\prime}\) and NINO4 for the period from 1985 to 2005 is 0.38 (for 1950-2005 this coefficient is 0.57), increasing up to 0.62 at 4 months lag, where SOI\({}^{\prime}\) leads the NINO4-index (0.70 for 1950-2005). On this basis the parameter \(\Delta\tau_{\rm T}=4\) months was chosen, after which the correlation between the forcing \(\rm F_{T}(t,\Delta\tau_{T}=4)\) and NINO4 increased.
The solution of the modified ENSO system is presented in Figure 4. All warm and cold ENSO events (including the warm ENSO event in 1994-95 missed before) are now reproduced by this simple model. The analysis of the SOI-index and _M(t)_ for 1994 shows that the previous failure to reproduce the ENSO of 1994-95 in experiment E1 was due to the presence of weak winds in the southern hemisphere with low temporal variability almost continuously throughout 1994, which minimized the joint effect of the variability of atmospheric conditions over the ACC on the ocean dynamics in the Southern Ocean. However, a long-term weakness of westerly winds in the tropics leads to the onset of ENSO, which is now taken into account by the parameterization of the external tropical forcing. The correlation coefficient between the model \(T_{E}\) and NINO4 is 0.83. The percentage of NINO4 variance explained by \(T_{E}\) is more than 65%. The correlation coefficient between the winter \(T_{E}\) and the NINO4-index is 0.87, and the percentage of variance explained in this case is about 76%. Note that for the case of \(\Delta\tau_{\rm T}=0\) the correlation between the model \(T_{E}\) and the NINO4-index is 0.72 and the percentage of NINO4 variance explained by \(T_{E}\) is 46%, i.e., the model results are similar to those of experiment E1, when only the forcing due to the effect of the Southern Ocean was taken into account. The choice of a larger scale factor \(\rm C_{T}\) for the model forcing due to the SOI, needed to obtain results comparable with the Southern Ocean contribution to the variability in the eastern Pacific SST anomaly, demonstrates that the variability of ocean dynamics in the Southern Ocean makes a major contribution to the variability of tropical SST.
Experiments in which the values of the model parameters (T and \(\gamma\)) were varied demonstrated a dependence similar to experiment E1. Variation of \(\Delta\tau_{\rm T}\) (\(\pm\)2 months) from \(\Delta\tau_{\rm T}=4\) months has no significant effect on the correlation between \(T_{E}\) and the NINO4-index.
Thus, this simple ENSO model is able to reproduce ENSO events very well.
## 4 The simplified forecast ENSO model
The modified system of the ENSO model (1), (3) and (5)-(6) can be reduced for the forecast of ENSO events (similar to the model of BJO05) by the following representation of the expression for the external forcings:
\\[\\partial_{t}\\,\\rm h=-\\omega_{o}\\,\\rm T_{E}+F_{ex}+F_{T}, \\tag{7}\\]
where
\\[\\rm t\\] \\[\\rm F_{ex}\\,(t,\\rm\\Delta\\tau)=C_{o}\\,/\\Delta\\tau\\int M(t)\\ dt. \\tag{8}\\] \\[\\rm t\\mbox{-}\\Delta\\tau\\]
Here \\(\\rm C_{o}\\) is a scale factor and \\(\\Delta\\tau\\) is the time delay due to the propagation of the signal from the Southern Ocean to the equator (IZD04). Thus, the effects of variability due to both _M(t)_and the westerly wind variability in the tropics averaged for the previous \\(\\Delta\\tau\\) and \\(\\Delta\\tau_{\\rm T}\\) months will be used in the following numerical experiments.
Relying on IZD04's estimate of 4-6 months as the time needed for anomalies from the Southern Ocean to reach the low latitudes of the Pacific, the parameter \(\Delta\tau\) was varied in the experiments from 1 to 6 months. As in experiment E2, the parameter \(\Delta\tau_{\rm T}=4\) months was used. The values of the scaling factors \(\rm C_{o}=6.3\) and \(\rm C_{T}=12.6\) were adopted in this experiment so that the maximum amplitude of the total variability of \(T_{E}\) is 1.6\({}^{\rm o}\)C and the contributions of both external forcings are comparable (about 1\({}^{\rm o}\)C for both \(\rm F_{T}\) and \(\rm F_{ex}\)). The parameters T and \(\gamma\) in these experiments were varied as in experiment E1.
The equation system (1), (5) and (7)-(8) was solved numerically from 1951 onwards with the initial conditions \(T_{E}|_{t=1951+\Delta\tau}=0\), \(h|_{t=1951+\Delta\tau}=0\) (henceforth experiment E3). The time shift in the initial conditions is determined by the lag between the external forcing in the Southern Ocean (see Fig. 1) and in the tropics (Fig. 3), and the onset of an ENSO event. The force \(\rm F_{ex}\), describing the effect of the dynamic variability in the Southern Ocean, appears in the system only after 1985, owing to the availability of model data for _M(t)_. With model parameters T=2 months, \(\gamma^{-1}=7\) months, and \(\Delta\tau_{\rm T}=\Delta\tau=4\) months, oscillations with periods corresponding to ENSO were established in the system, and all warm and cold ENSO events were reproduced.
The scaled values of SOI\({}^{\prime}\) (with a time lag of 4 months) and the NINO4-index for the period from 1951 to 2005 are shown in Figure 5a. The behaviour of these curves is very similar (the correlation coefficient is 0.70), though the original SOI\({}^{\prime}\) curve contains more noise than NINO4, which is natural for atmospheric pressure variability in comparison with SST. The upper (solid) and middle lines in Figure 5a correspond to the model scaled values of the SST, \(T_{E}\), and the thermocline depth \(h\) obtained in experiment E3 for the period from 1951 to 2005. The variability in \(T_{E}\) and \(h\) is observed to increase after 1985, when \(\rm F_{ex}\) is included. Figure 5b shows the curves of \(T_{E}\) and \(h\) from 1985 in more detail. It can be seen that the oscillation of \(h\) leads that of \(T_{E}\) by about 2 months, similar to experiment E1.
The correlation coefficients of SOI\({}^{\prime}\) and _M(t)_ with the model \(T_{E}\) at 4 months lag, for the period 1985-2005, are about 0.65 (SOI\({}^{\prime}\) and _M(t)_ lead \(T_{E}\)). In comparison, the correlation coefficient obtained for this period between the model \(T_{E}\) and NINO4 is 0.82 (0.78 for 1951-2005). The percentage of variance explained is about 60%, which is slightly less than in experiment E2. The correlation between the winter \(T_{E}\) and the NINO4-index for 1985-2005 is 0.92 and the percentage of variance explained is about 84%, which is slightly better than the results of experiment E2. Thus, this simple ENSO model is able to forecast ENSO events 4 months in advance by using the model _M(t)_ and the SOI-index averaged over the previous 4 months.
Experiments in which the values of the model parameters (T and \(\gamma\)) were varied demonstrated a dependence similar to experiment E1. Variation of \(\Delta\tau\) away from \(\Delta\tau=4\) months decreases the correlation coefficient between \(T_{E}\) and the NINO4-index, by about 0.2 and 0.1 for \(\Delta\tau=2\) and \(\Delta\tau=6\) months, respectively. Hence, \(\Delta\tau=4\) months is the optimum choice.
## 5 Summary
A modified version of the simple model of BJO05, which is a classical damped oscillator with the eastern Pacific SST and the mean equatorial Pacific thermocline depth representing momentum and position respectively, was used to model ENSO events. The main difference between the original BJO05 model and the modified one is the presence of external forcings and the use of shorter period and decay time parameters, meaning that the propagation of Kelvin waves on short time scales and the associated SST reaction give rise to interannual fluctuations that are completely dependent on the external forcings describing the variability of dynamics in the Southern Ocean and in the tropics. The external forcings in the model are parameterized by the short-period mass variability in the Pacific sector of the Southern Ocean due to meridional transport fluctuations through the latitude of 40\({}^{\circ}\)S, and by the SOI\({}^{\prime}\) index (defined with the opposite sign to convention). The first forcing describes the variability due to the joint effect of the atmospheric variability over the ACC, bottom topography and coastlines in the Southern Ocean; the second forcing describes westerly wind variability in the tropics. Both forcings are results of coupled interactions between the tropics and high latitudes. Under such conditions, oscillations arise in the ENSO system as a result of the propagation of signals due to both the initial signals appearing in the Southern Ocean and the tropical westerly wind anomaly, which then propagate across the equatorial Pacific by means of fast wave processes.
The external forcings are the main factor in the establishment of the oscillation pattern in the ENSO forecast. To obtain results comparable with the Southern Ocean contribution to the variability in the eastern Pacific SST anomaly, a larger scale factor for the model forcing due to the weakness of westerly winds in the tropics had to be chosen. This demonstrates that the variability of ocean dynamics in the Southern Ocean makes a major contribution to the variability of tropical SST, though it is initiated by the variability in the tropics in the preceding couple of months. However, the westerly wind variability in the tropics becomes more important when weak westerly winds persist in the tropics over very long periods (about a year), leading to the onset of ENSO, while weak winds in the southern hemisphere with low temporal variability minimize the joint effect of the variability of atmospheric conditions on the ocean dynamics in the Southern Ocean.
It was shown that this simple ENSO model is able to forecast ENSO events 4 months in advance by using the short-period model mass variability in the Pacific sector of the Southern Ocean due to transport fluctuations through its open boundary, along with the SOI-index, each averaged over the previous 4 months. A model skill of 0.92 for a four-month lead forecast of the December-February ENSO is comparable with the correlation between the August-September NINO4-index and the subsequent December-February NINO4-index, and it may therefore seem not so impressive. The most important point of these model results is the establishment of two major and comparable feedbacks (in a system with so many feedbacks and connections) responsible for the onset of ENSO, accounting for about 84% of the variance explained:
- short-period meridional mass fluctuations in the Pacific sector of the Southern Ocean due to the joint effect of the atmospheric variability over the ACC, bottom topography and coastlines;
- the variability of westerly winds in the tropics.
**Acknowledgments**. This work was funded by the Natural Environment Research Council. Thanks to Simon Holgate for commenting on this manuscript.
## References
* [1] Alvarez-Garcia F., W.C. Narvaez, and M.J. Ortiz Bevia (2006) An assessment of differences in ENSO Mechanisms in a Coupled GCM Simulation, J. Climate, 19, 69-87.
* [2] Blaker, A.T., B. Sinha, V.O. Ivchenko, N.C. Wells, and V.B. Zalesny (2006), Identifying the roles of the ocean and atmosphere in creating a rapid equatorial response to a Southern Ocean anomaly, Geophysical Research Letters, v.33, L06720, doi:10.1029/2005GL025474.
* [3] Bretherton, C. S., M. Widmann, V.P. Dymnikov, J. M. Wallace, and I. Blade (1999), The effective number of spatial degrees of freedom of a time-varying field, J. Clim., 12(7), 1990-2009.
* [4] Burgers G., F.-F. Jin, and G.J. van Oldenborgh (2005), The simplest ENSO recharge oscillator, Geophysical Research Letters, v.32, L13706, doi:10.1029/2005GL022951.
* [5] Ivchenko V.O., V. B. Zalesny, and M.R. Drinkwater (2004), Can the equatorial ocean quickly respond to Antarctic sea ice/salinity anomalies?, Geophysical Research Letters, v.31, L15310, doi:10.1029/2004GL020472.
* [6] Jin, F.-F., (1996), Tropical ocean-atmosphere interaction, the Pacific Cold Tongue, and the El Nino/Southern Oscillation, Science, 274, 76-78.
* [7] Kessler, W.S. (2002), Is ENSO a cycle or series of events? Geophys. Res. Lett., 29(23), 2135, doi:10.1029/2002GL015924.
* [8] Lengaigne M., E. Guilyardi, J.-P. Boulanger, C. Menkes, P. Delecluse, P. Inness, J.Cole, J. Slingo (2004), Triggering of El Nino by westerly wind events in a coupled general circulation model. Climate Dynamics, 23, 601-620, doi:10.1007/s00382-004-0457-2.
* [9] Richardson G., M. R. Wadley, and K. Heywood (2005), Short-term climate response to a freshwater pulse in the Southern Ocean, Geophysical Research Letters, v.32, L03702, doi:10.1029/2004GL021586.
* [10] Ropelewski, C.F. and Jones, P.D. (1987) An extension of the Tahiti-Darwin Southern Oscillation Index. Monthly Weather Review, 115, 2161-2165.
* [11] Stepanov V.N., and C.W. Hughes (2004), The parameterization of ocean self-attraction and loading in numerical models of the ocean circulation, J. Geophys. Res., 109, C0037, doi:10.1029/2003JC002034.
* [12] Stepanov V.N., and C.W. Hughes (2006) Propagation of signals in basin-scale bottom pressure from a barotropic model, J. Geophys. Res., 111, C12002, doi:10.1029/2005JC003450.
* [13] Suarez, M., and P.S. Schopf (1988), A delayed action oscillator for ENSO, J. Atmos. Sci., 45, 3283-3287.
* [14] Trenberth, K.E. (1984), Signal versus noise in the Southern Oscillation, Monthly Weather Review, 112, 326-332.

List of Figure Captions
**Fig. 1** - The values of the transport through Drake Passage in Sv (thin solid line) and the daily mass variability _M(t)_ in the Pacific Ocean due to meridional transport fluctuations through the latitude of 40\({}^{\rm o}\)S in Gt (Gigatonnes) (thick solid line), averaged for July-September. Symbols EL and LA denote warm and cold ENSO events, respectively. The dashed line corresponds to the scaled winter NINO4-index.
**Fig. 2a** - Mass variability _M(t)_ due to meridional transport fluctuations through 40\({}^{\rm o}\)S in the Pacific Ocean (the lowest line), scaled by the maximum of its value; the thermocline depth anomaly \(h\) (middle line); the SST anomaly \(T_{E}\) (upper solid line); and the NINO4-index (upper dashed line) for the period from 1985 to 2005; the value of the NINO4-index is scaled by a factor of 1.6. **b** - the winter \(T_{E}\) (solid line) and NINO4-index (averaged from December to February) (dashed line) and the preceding summer _M(t)_, averaged from July to September (dashed-dotted line).
**Fig. 3** - The correlation coefficient: dotted line for the meridional summer mass fluxes averaged from July to September, and solid line for the zonal wind stress over the Pacific averaged from June to September, each with the winter NINO4-index (averaged over the three months from December to February). The dashed line represents the scaled profile of the time-averaged zonal wind stress. The positive sign corresponds to the eastward direction.
**Fig. 4** - As Figure 2 but for experiment E2.
**Fig. 5a** - the scaled values of the SOI\({}^{\prime}\) index with a 4 months time lag (the lowest line), the thermocline depth anomaly \(h\) (the middle line), the SST anomaly \(T_{E}\) (upper solid line) and the NINO4-index (the upper dashed line) for the period from 1951 to 2005; the values of the NINO4-index and \(T_{E}\) are scaled by a factor of 1.6. **b** - the scaled winter \(T_{E}\) and NINO4-index with the preceding mean summer model _M(t)_ and SOI\({}^{\prime}\) index (top); \(h\), \(T_{E}\) and the NINO4-index for the period from 1985 to 2005 (bottom).
# Equation of State of Nuclear Matter at high baryon density
M Baldo and C Maieron
Istituto Nazionale di Fisica Nucleare, Sez. di Catania, Via S. Sofia 64, 95123 Catania, Italy [email protected], [email protected]
## 1 Introduction
The knowledge of the nuclear Equation of State (EoS) is one of the fundamental goals in nuclear physics which has not yet been achieved. The possibility to extract information on the nuclear EoS, in particular at high baryon density, is restricted to two fields of research. The interplay between theory and the observations of astrophysical compact objects is of great relevance in constraining the nuclear EoS. The enormous body of work that has been developing over the last two decades on the study of heavy ion reactions at intermediate and relativistic energies is the other pillar on which one can hope to build a reasonable model of the nuclear EoS. On the other hand, theoretical predictions of the EoS are essential for modeling heavy ion collisions, at intermediate and relativistic energies, and the structure of neutron stars, supernova explosions, binary collisions of compact stellar objects and their interactions with black holes. In the astrophysical context the dynamics is slow enough and the size scale large enough to ensure the local equilibrium of nuclear matter, i.e. hydrodynamics can be applied, and therefore the very concept of the EoS is extremely useful. On the contrary, in nuclear collisions the time scale is the typical one for nuclear processes and the size of the system is only one order of magnitude larger than the interaction range, or possibly than the particle mean free path. The physical conditions in the two contexts are therefore quite different. Despite that, by a careful analysis of experimental data on heavy ion collisions and of astrophysical observations it is possible to connect the two realms of phenomena, which involve nuclear processes at the fundamental level, and the EoS provides the crucial concept for establishing this link.
From the theoretical point of view the microscopic theory of nuclear matter has a long history and impressive progress has been made over the years. In this topical review paper we will first survey the many-body theory of nuclear matter and compare the predictions of different approaches, Secs. 2-8. In Sections 9-10 possible hints on the nuclear EoS from astrophysical observations and heavy ion reactions will be critically reviewed, with emphasis on the connections that can be established between the two fields.
## 2 Many-body theory of the EoS
The many-body theory of nuclear matter, where only nucleonic degrees of freedom are considered, has been developed over several decades along different lines and methods. We summarize the most recent results in this field and compare the different methods at the formal level, as well as the final EoS calculated within each of the considered many-body schemes. The outcome of this analysis should help in restricting the uncertainty of the theoretical predictions for the nuclear EoS.
Within the non-relativistic approach the main microscopic methods are the Bethe-Brueckner-Goldstone (BBG) approach and the variational method (VM). The Bethe-Brueckner-Goldstone approach is a general many-body method particularly suited for nuclear systems. It has been extensively applied to homogeneous nuclear matter for many years and has been presented in several review articles and textbooks. For a pedagogical review see Baldo (1999), where a short historical introduction and extended references can be found. Here we restrict the presentation to the basic structure of the method, but we will go into some detail in order to prepare the material needed for a formal comparison with other methods. We follow closely the presentation of Baldo (1999), at least for the more elementary parts.
Let us suppose for the moment that only a two-body interaction is present. Then the Hamiltonian can be written
\\[H\\,=\\,H_{0}\\,+\\,H_{1}\\,=\\,\\sum_{k}\\frac{\\hbar^{2}{\\bf k}^{2}}{2m}a_{k}^{\\dagger }a_{k}\\,+\\,\\frac{1}{2}\\sum_{\\{k_{i}\\}}\\langle k_{1}k_{2}|v|k_{3}k_{4}\\rangle a _{k_{1}}^{\\dagger}a_{k_{2}}^{\\dagger}a_{k_{4}}a_{k_{3}}\\quad, \\tag{1}\\]
where the operators \\(a^{\\dagger}\\) (\\(a\\)) are the usual creation (annihilation) operators. The state label \\(\\{k\\}\\) includes both the three-momentum \\({\\bf k}\\) and the spin-isospin variables \\(\\sigma,\\tau\\) of the single particle state. As usual we will represent the interaction matrix elements as in Fig. 1, where a particle (hole) state is represented by a line with an up (down) arrow.
To be definite, for a purely central local interaction the matrix elements read, in general
\\[\\langle\\alpha\\beta|v|\\gamma\\delta\\rangle\\,=\\,\\int d^{3}r_{1}d^{3}r_{2}\\phi_{ \\alpha}^{*}({\\bf r}_{1})\\phi_{\\beta}^{*}({\\bf r}_{2})v({\\bf r}_{1}-{\\bf r}_{2} )\\phi_{\\gamma}({\\bf r}_{1})\\phi_{\\delta}({\\bf r}_{2}). \\tag{2}\\]
where the \\(\\phi\\)'s are the single particle wave functions. The graph of Fig. 1 can be interpreted as an interaction of the particle \\(k_{3}\\) and the hole \\(k_{4}\\) which scatter after the interaction to \\(k_{1}\\) and \\(k_{2}\\) respectively. Incoming arrows in Fig. 1 correspond to states appearing on the right of the matrix elements, while the first (second) dot indicates their position in the two-body state.
These graphs for the matrix elements of \\(v\\) are the building blocks for the more complete graphs representing the energy perturbation expansion.
Figure 1: Graphical representation of the NN interaction matrix element.

The starting point of the perturbation expansion is the Gell-Mann and Low theorem (Gell-Mann and Low 1951). The theorem is quite general and applies to all systems which possess a non-degenerate ground state (with a finite energy). If we call \(|\psi_{0}\rangle\) the ground state of the full Hamiltonian \(H\), the theorem states that it can be obtained from the ground state \(|\phi_{0}\rangle\) of the unperturbed Hamiltonian \(H_{0}\) (in our case the free Fermi gas ground state) by a procedure usually called the adiabatic "switching on" of the interaction
\\[|\\psi_{0}\\rangle\\,=\\,\\lim_{\\epsilon\\to 0}\\frac{U^{(\\epsilon)}(-\\infty)|\\phi_{0} \\rangle}{\\langle\\phi_{0}|U^{(\\epsilon)}(-\\infty)|\\phi_{0}\\rangle}\\ \\, \\tag{3}\\]
which entails the normalization \\(\\langle\\phi_{0}|\\psi_{0}\\rangle=1\\). In Eq. (3), \\(U^{(\\epsilon)}(t)\\) is the evolution operator, in the interaction picture, from the generic time \\(t\\) to the time \\(t=0\\) of the modified Hamiltonian
\\[H^{\\epsilon}(t)\\,=\\,H_{0}\\,+\\,e^{-\\epsilon|t|}H_{1}\\ \\, \\tag{4}\\]
where \\(\\epsilon>0\\). Equation (4) implies that the Hamiltonian \\(H^{\\epsilon}\\) coincides with \\(H_{0}\\) in the limit \\(t\\rightarrow-\\infty\\) and with \\(H\\) at \\(t=0\\) and that the interaction is switched on following an infinitely slow evolution, namely, adiabatically. Equation (3) includes also the limit \\(t\\rightarrow-\\infty\\). The order of the two limits is of course essential and cannot be interchanged.
Intuitively the content of the Gell-Mann and Low theorem is simple: if the Hamiltonian evolves adiabatically and if we start from the ground state of the Hamiltonian \\(H(t_{0})\\) at a given initial time \\(t_{0}\\), the system will remain in the ground state of the local Hamiltonian \\(H(t)\\) at any subsequent time \\(t\\), since an infinitely slow evolution cannot excite any system by a finite amount of energy. It is therefore essential for the validity of the theorem that, during the evolution, the local ground state never becomes degenerate, e.g. no phase transition occurs. In the latter case, Eq. (3) will provide a state \\(\\psi_{0}\\) which is not the ground state of \\(H\\) but the state which can be obtained smoothly from the unperturbed ground state \\(\\phi_{0}\\) through the adiabatic switching on of the interaction.
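This adiabatic picture is easy to verify numerically; the following sketch (a toy two-level system with arbitrarily chosen matrices, not a nuclear matter calculation) evolves the ground state of \(H_{0}\) under \(H_{0}+e^{-\epsilon|t|}H_{1}\) from a large negative time up to \(t=0\) and shows that the overlap with the true ground state of \(H\) approaches unity as \(\epsilon\to 0\).

```python
import numpy as np

# Toy two-level illustration of the adiabatic switching (hbar = 1):
H0 = np.diag([0.0, 2.0])
H1 = np.array([[0.3, 0.5], [0.5, -0.3]])

def evolve(eps, T=200.0, dt=0.01):
    psi = np.array([1.0, 0.0], dtype=complex)      # ground state of H0
    for t in np.arange(-T, 0.0, dt):
        H = H0 + np.exp(-eps*abs(t))*H1            # H^eps(t), Eq. (4)
        w, V = np.linalg.eigh(H)                   # piecewise-constant step
        psi = V @ (np.exp(-1j*w*dt) * (V.conj().T @ psi))
    return psi

w, V = np.linalg.eigh(H0 + H1)
gs = V[:, 0]                                       # true ground state of H
for eps in (0.5, 0.1, 0.02):
    print(eps, abs(gs.conj() @ evolve(eps))**2)    # overlap -> 1 as eps -> 0
```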
The operator \\(U^{(\\epsilon)}(t)\\) can be obtained by a perturbation expansion from the free evolution operator \\(U_{0}(t)=\\exp(-iH_{0}t/\\hbar)\\), and for the present purpose one can write
\\[U^{(\\epsilon)}(-\\infty)=1-\\frac{i}{\\hbar}\\int_{-\\infty}^{0}H_{I}(t_{1})dt_{1}+ (-\\frac{i}{\\hbar})^{2}\\int_{-\\infty}^{0}H_{I}(t_{2})dt_{2}\\int_{-\\infty}^{t_{2 }}H_{I}(t_{1})dt_{1}\\cdots\\]
\\[=1+\\sum_{n=1}^{\\infty}(-\\frac{i}{\\hbar})^{n}\\frac{1}{n!}\\int_{-\\infty}^{0}dt_{ n}\\int_{-\\infty}^{0}dt_{n-1}\\cdots\\cdots\\]
\\[\\cdots\\cdots\\int_{-\\infty}^{0}dt_{1}T\\left[H_{I}(t_{n})H_{I}(t_{n-1})\\cdots H_ {I}(t_{1})\\right] \\tag{5}\\]
where \\(T\\) is the time ordered operator and
\\[H_{I}(t)\\,=\\,e^{iH_{0}t/\\hbar}H_{1}^{\\epsilon}(t)e^{-iH_{0}t/\\hbar}\\ . \\tag{6}\\]
In Eq. (6) the indication of the dependence of \\(H_{I}\\) on \\(\\epsilon\\) was omitted for simplicity. The limit \\(\\epsilon\\to 0\\) has to be taken after all the necessary manipulations have been performed. The demonstration of the Gell-Mann and Low theorem, based on the expansion of Eq. (5), can be found in the original paper or in textbooks on general many-body theory (Fetter and Walecka 1971). From Eq. (3), it follows that the energy shift \\(\\Delta E\\) due to the nucleon-nucleon interaction is given by
\\[\\Delta E\\,=\\,\\lim_{\\epsilon\\to 0}\\frac{\\langle\\phi_{0}|H_{1}U^{( \\epsilon)}(-\\infty)|\\phi_{0}\\rangle}{\\langle\\phi_{0}|U^{(\\epsilon)}(-\\infty)| \\phi_{0}\\rangle}\\ \\, \\tag{7}\\]where the expansion of Eq. (5) has to be used both in the numerator and in the denominator. The procedure is ill-defined in the limit \\(\\epsilon\\to 0\\), as one can see by considering the first non-trivial term (\\(n=1\\)) of the expansion of Eq. (5) and taking the matrix elements appearing in Eq. (7). They blow up in that limit. Fortunately, here we can get help from the so called \"linked cluster\" theorem. The formulation of the theorem is better stated in the language of the diagrammatic method, as explained below. The theorem shows that the numerator and the denominator possess a common factor, which includes all the diverging terms, and therefore they cancel out exactly, leaving a well defined finite result.
Finally, each term of the perturbation expansion can be explicitly worked out by means of Wick's theorem, which allows one to evaluate the mean value of an arbitrary product of annihilation and creation operators in the unperturbed ground state. Then the perturbative expansion of the interaction energy part \(\Delta E\) of the ground state energy can be expressed in terms of "Goldstone diagrams", as devised by Goldstone (1957). Each diagram represents, in a convenient graphical form, a term of the expansion, in order to avoid lengthy analytical expressions and to make their structure immediately apparent. The general rules (from \(i\) to _vi_ below) for associating the analytical expression to a given diagram are described in the following. The expression is constructed from the following factors.
(_i_) Each drawing of the form of Fig. 1, which can be called conventionally a \"vertex\", as usual represents a matrix element of the two-body interaction, according to the rules discussed previously.
(_ii_) A line with an upward (downward) arrow indicates a particle (hole) state, and it will be labeled by a momentum \\(k\\) (including spin-isospin), a different one for each line.
(_iii_) Between two successive vertices a certain number of lines (holes or particles) will be present in the diagrams. Then, the energy denominator
\\[\\frac{1}{e}\\,=\\,\\frac{1}{\\sum_{k_{i}}E_{k_{i}}-\\sum_{k^{\\prime}_{i}}E_{k^{ \\prime}_{i}}+i\\eta} \\tag{8}\\]
is introduced, where now the summation runs only on the particle and hole energies which are present in the diagram between the two vertices.
(_iv_) Each diagram is given an overall sign \\((-1)^{h+l+n-1}\\), where \\(n\\) is the order of the diagram in the expansion, \\(h\\) is the total number of hole lines in the diagram, and \\(l\\) the number of closed loops. A \"loop\" is a fermion line (hole or particle) which closes on itself when followed along the diagrams, as indicated by the directions of the arrows, passing eventually through the dots of vertices.
(_v_) Finally a \"symmetry factor\" of the form \\((\\frac{1}{2})^{s}\\), \\(s=0,1,2\\cdots\\), has to be put in front of the whole expression. In general, the factor is connected with the symmetry of the diagram, and to find its correct value it is necessary to analyze the formalism in more detail. Let us consider the case where two lines, both particles or both holes, connect the same two interaction vertices, without being involved in any other part of the diagram.
They can be called "equivalent lines". In this case, the only one that will be considered here, the symmetry factor is \(\frac{1}{2}\).
(_vi_) Of course, one must finally sum over all the momenta labeling the lines of the diagram.
Since we are considering the ground state energy, only \"closed\" diagrams must be included, i.e. no external line must be present. Furthermore, according to the linked-cluster theorem, only connected diagrams must be considered, i.e. diagrams which cannot be separated into two or more pieces with no line joining them.
In conclusion, the ground state energy shift is obtained by summing up all possible closed and linked diagrams
\\[\\Delta E\\,=\\,\\lim_{\\epsilon\\to 0}\\langle\\phi_{0}|H_{1}U^{(\\epsilon)}(-\\infty)|\\phi_{0} \\rangle_{CL}\\ \\, \\tag{9}\\]
where the subscript \\(CL\\) means connected diagrams only, linked with the first interaction \\(H_{I}\\). The latter specification means simply, in this case, that the diagram must be complete, namely it must involve all the interactions.
Another fundamental consequence of the restriction to connected diagrams is that the energy shift \(\Delta E\) is proportional to the volume of the system, as it must be for extended systems with short range interactions only. Disconnected diagrams have the unphysical property of being proportional to higher powers of the volume.
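As an elementary illustration of rules (_i_)-(_vi_), the second-order (two hole-line) diagrams of the type in Fig. 4 can be evaluated in closed form for a schematic pairing model; the following sketch (a toy finite model with arbitrary level spacing and strength, not a nuclear matter calculation, and ignoring the first-order diagonal piece) shows how the energy denominators and the overall sign arise.

```python
from itertools import product

# Toy pairing model: 4 doubly-degenerate levels with spacing delta_e,
# the lowest two pairs occupied; the only non-zero matrix elements are
# <q qbar| v |p pbar> = -g (a pair scattered from level p to level q).
delta_e, g = 1.0, 0.5          # arbitrary illustrative values
holes, parts = (0, 1), (2, 3)  # occupied / empty pair levels

# Second-order Goldstone correction: |<ab|v|ij>|^2 over the energy
# denominator e_i + e_j - e_a - e_b, here 2*delta_e*(p - q) < 0:
E2 = sum((-g)**2 / (2.0*delta_e*(p - q)) for p, q in product(holes, parts))
print(E2)   # negative: the two hole-line correlations lower the energy
```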
Let us consider the nuclear matter case with a typical NN interaction. The NN interaction is characterized by a strong repulsion at short distance. The simplest assumption would be to consider an infinite hard core below a certain core radius. Such a NN potential has obviously infinite matrix elements in momentum representation, and a perturbation expansion has no meaning. All modern realistic NN interactions introduce a finite repulsive core, which however is quite large, and therefore in any case a straightforward perturbative expansion cannot be applied. The repulsive core is expected to modify strongly the ground state wave function whenever the coordinates of two particles approach each other at a separation distance smaller than the core radius \\(c\\). In such a situation the wave function should be sharply decreasing with the two particle distance. The \"wave function\" of two particles in the unperturbed ground state \\(\\phi_{0}\\) can be defined as (\\(k_{1},k_{2}\\leq k_{F}\\))
\\[\\phi(r_{1},r_{2})\\,=\\,\\langle\\phi_{0}|\\psi^{\\dagger}_{\\xi_{1}}({\\bf r_{1}}) \\psi^{\\dagger}_{\\xi_{2}}({\\bf r_{2}})a_{k_{1}}a_{k_{2}}|\\phi_{0}\\rangle\\,=\\,e^ {i({\\bf k_{1}}+{\\bf k_{2}})\\cdot{\\bf R}}e^{i({\\bf k_{1}}-{\\bf k_{2}})\\cdot{ \\bf r}/2}\\ \\, \\tag{10}\\]
where \\(\\xi_{1}\
eq\\xi_{2}\\) are spin-isospin variables, and \\({\\bf R}=({\\bf r_{1}}+{\\bf r_{2}})/2\\), \\({\\bf r}=({\\bf r_{1}}-{\\bf r_{2}})\\) are the center of mass and relative coordinate of the two particles respectively. Therefore the wave function of the relative motion in the \\(s\\)-wave is proportional to the spherical Bessel function of order zero \\(j_{0}(kr)\\), with \\(k\\) the modulus of the relative momentum vector \\({\\bf k}=({\\bf k_{1}}-{\\bf k_{2}})/2\\). The core repulsion is expected to act mainly in the \\(s\\)-wave, since it is short range, and therefore this behaviour must be strongly modified. In the simple case of \\(k=0\\) the free wave function \\(j_{0}(kr)\\to 1\\), and schematically one can expect a modification, due to the core, as depicted in Fig. 2. The main effect of the core is to \"deplete\" the wave function close to \\(r=0\\), in a region of the order of the core radius \\(c\\). Of course, the attractive part of the interaction will modify this simple picture at \\(r>c\\). If the core interaction is the strongest one, then the average probability \\(p\\) for two particles to be at distance \\(r<c\\) would be a measure of the overall strength of the interaction. If \\(p\\) is small, then one can try to expand the total energy shift \\(\\Delta E\\) in power of \\(p\\). The power \\(p^{n}\\) has, in fact, the meaning of probability for \\(n\\) particles to be all at a relative distance less than \\(c\\). In a very rough estimate \\(p\\) is given by the ratio between the volume occupied by the core and the average available volume per particle
\\[p\\,\\approx\\,\\left(\\frac{c}{d}\\right)^{3} \\tag{11}\\]
with \\(\\frac{4\\pi}{3}d^{3}=\\rho^{-1}\\). From Eq. (11) one gets \\(p\\approx\\frac{8}{9\\pi}(k_{F}c)^{3}\\), which is small at saturation, \\(k_{F}=1.36\\,fm^{-1}\\), and the commonly adopted value for the core is \\(c=0.4\\,fm^{-1}\\). The parameter remains small up to few times the saturation density.
The graphs of the expansion can now be ordered according to the order of the correlations they describe, i.e. the power in \\(p\\) they are associated with. It is easy to recognize that this is physically equivalent to grouping the diagrams according to the number of hole lines they contain, where \\(n\\) hole lines correspond to \\(n\\)-body correlations. In fact, an irreducible diagram with \\(n\\) hole lines describes a process in which \\(n\\) particles are excited from the Fermi sea and scatter in some way above the Fermi sea. Equivalently, all the diagrams with \\(n\\) hole lines describe the effect of clusters of \\(n\\) particles, and therefore the arrangement of the expansion for increasing number of hole lines is called alternatively \"hole expansion\" or \"cluster expansion\".
The series of two hole-line diagrams starts with the diagrams depicted in Fig. 3 (first order) and in Fig. 4 (second order) and continues with the ones shown in Fig. 5.
Figure 2: Schematic representation of the expected effect of the core repulsion on the two-body wave function in nuclear matter.

Figure 3: Direct and exchange first order diagrams.

Figure 4: Direct (a) and exchange (b) second order diagrams.

Figure 5: Higher order ladder diagrams.

Figure 6: The geometric series for the G-matrix.

The infinite set of diagrams depicted in Fig. 5 can be summed up formally by introducing the two-body scattering matrix \(G\), as schematically indicated in Fig. 6. In the second line of Fig. 6 the geometric series has been re-introduced, once the initial interaction has been isolated. This corresponds to the following integral equation
\\[\\langle k_{1}k_{2}|G(\\omega)|k_{3}k_{4}\\rangle\\;=\\;\\langle k_{1}k_{2}|v|k_{3}k_{4 }\\rangle+\\]
\\[+\\sum_{k_{3}^{\\prime}k_{4}^{\\prime}}\\langle k_{1}k_{2}|v|k_{3}^{\\prime}k_{4}^{ \\prime}\\rangle\\;\\frac{\\left(1-\\Theta_{F}(k_{3}^{\\prime})\\right)\\left(1-\\Theta_ {F}(k_{4}^{\\prime})\\right)}{\\omega-e_{k_{3}^{\\prime}}-e_{k_{4}^{\\prime}}}\\, \\langle k_{3}^{\\prime}k_{4}^{\\prime}|G(\\omega)|k_{3}k_{4}\\rangle\\;\\;. \\tag{12}\\]
In the diagrams, the intermediate states are particle states, and this is indicated in Eq. (12) by the two factors \\(1-\\Theta_{F}(k)\\). One can consider the diagrams of Fig. 6 as part of a given complete diagram of the total energy expansion. Therefore all the energy denominators contain the otherwise undefined quantity \\(\\omega\\), usually indicated as the \"entry energy\" of the \\(G\\) matrix. The precise value of \\(\\omega\\) will depend on the rest of the diagram where the \\(G\\) matrix appears, as we will see soon. Equation (12) is anyhow well defined for any given value of \\(\\omega\\). It has to be noticed that Eq. (12) is very similar to the equation which defines the usual off-shell scattering \\(T\\) matrix between two particles in free space. The \\(G\\) matrix of Eq. (12) can be considered the generalization of the \\(T\\) matrix to the case of two particles in a medium (nuclear matter in our case). Actually in the zero density limit, \\(\\Theta_{F}(k)\\to 0\\) and the \\(G\\) matrix indeed coincides with the scattering \\(T\\) matrix. Once the \\(G\\) matrix has been introduced, the full set of two hole-line diagrams can be expressed as in Fig. 7, where the \\(G\\) matrix is now indicated by a wiggly line. This notation stresses the similarity between the \\(G\\) matrix and the bare nucleon-nucleon interaction \\(v\\). This result, as depicted in Fig. 7, can be checked by expanding Eq. (12) (by iteration). The entry energy in this case is \\(\\omega=e_{k_{1}}+e_{k_{2}}\\), which means that the \\(G\\) matrix is \"on the energy shell\", i.e. the \\(G\\) matrix is calculated at the energy of the initial state. The diagrams need a factor \\(\\frac{1}{2}\\), since the two hole-lines are equivalent, according to rule (_v_). Therefore the correction \\(\\Delta E_{2}\\) to the unperturbed total energy (just the kinetic energy), at the two hole-line level of approximation, is given by
\\[\\Delta E_{2}\\,=\\,\\frac{1}{2}\\sum_{k_{1},k_{2}<k_{F}}\\langle k_{1}k_{2}|G(e_{k _{1}}+e_{k_{2}})|k_{1}k_{2}\\rangle_{A}\\quad, \\tag{13}\\]
where the label \\(A\\) indicates that both direct and \"exchange\" matrix elements have to be considered, i.e. \\(|k_{1}k_{2}\\rangle_{A}=|k_{1}k_{2}\\rangle-|k_{2}k_{1}\\rangle\\). One of the major virtues of the \\(G\\) matrix is to be defined even when the interaction \\(v\\) is singular (e.g. it presents an infinite hard core). This shows that the \\(G\\) matrix is in some sense \"smaller\" than the NN interaction
Figure 7: Two hole-line diagrams for the ground state energy in terms of the G-matrix (wiggly lines).
\\(v\\), and an expansion of the total energy shift \\(\\Delta E\\) in \\(G\\), instead of \\(v\\), should have a better degree of convergence. To substitute \\(v\\) with the matrix \\(G\\) in the original expansion is always possible, since a \"ladder sum\" (a set of diagrams of the type in Fig. 6) can always be inserted at a given vertex and the corresponding series of diagrams summed up (with the proviso of avoiding double counting). In general, however, the resulting \\(G\\)-matrix will be \"off the energy shell\", which complicates the calculations considerably. It turns out, anyhow, that also the bare expansion of \\(\\Delta E\\) in terms of the \\(G\\)-matrix, in place of the NN interaction \\(v\\), is still badly divergent.
The solution of this problem is provided by the introduction of an "auxiliary" single particle potential \(U(k)\). The physical reason for this procedure becomes apparent once one notices that the energies of the hole and particle states are surely modified by the presence of the interaction \(H_{1}\), and intuitively this should have a relevant effect on the total energy of the system. In the Goldstone expansion of Eq. (9), or similar, such an effect does not appear explicitly, and therefore it should somehow be introduced into (or extracted from) the expansion, since, physically speaking, any two-body or higher correlation should be evaluated as a correction to some mean field contribution. The genuine strength of the correlations has to be estimated with respect to a reference mean field energy, rather than to the free particle energy. The explicit form of the auxiliary single particle potential has to be chosen in such a way as to minimize the effect of correlations, which is equivalent to speeding up the rate of convergence of the expansion. Formally, one can rewrite the original Hamiltonian by adding and subtracting the auxiliary single particle potential \(U\)
\[H\,=\,(H_{0}+U)\,+\,(H_{1}-U)\,=\,H_{0}^{\prime}+H_{1}^{\prime}\]
\[H_{0}^{\prime}\,=\,\sum_{k}\left[\frac{\hbar^{2}{\bf k}^{2}}{2m}+U(k)\right]a_{k}^{\dagger}a_{k}\,\equiv\,\sum_{k}e_{k}a_{k}^{\dagger}a_{k}\quad, \tag{14}\]
and consider \\(e_{k}\\) as the new single particle spectrum. The expansion is now in the new perturbation interaction \\(H_{1}^{\\prime}\\). The final result should be, of course, not dependent on \\(U\\), at least in principle. A \"good\" choice of the auxiliary potential \\(U\\) is surely one which is able to strongly reduce the contribution of \\(H_{1}^{\\prime}\\) to the total energy of the system. The perturbation expansion in \\(H_{1}^{\\prime}\\) can be formulated in terms of the same Goldstone diagrams discussed previously, where the single particle kinetic energies \\(t_{k}\\) are substituted by the energies \\(e_{k}=t_{k}+U(k)\\) in all energy denominators. Furthermore, new terms must be introduced, which correspond to the so called \"\\(U\\) insertions\". More precisely, the rules _i-vi_ above must be supplemented by the following two other additional rules.
(_i - bis_) A symbol of the form reported in Fig. 8 indicates a \\(U\\) insertion, which corresponds to a factor \\(U(k_{1})\\delta_{K}({\\bf k}_{1}-{\\bf k}_{2})\\delta_{\\xi_{1}\\xi_{2}}\\) in the diagram.
(_iv - bis_) A diagram with a number \\(u\\) of \\(U\\) insertions contains the additional phase \\((-1)^{u}\\). This is a trivial consequence of the minus sign with which \\(U\\) appears in \\(H_{1}^{\\prime}\\).
The first \\(U\\) insertion of Fig. 9 cancels out exactly the potential energy part of the single particle energies \\(e_{k}\\) as contained in \\(H_{0}^{\\prime}\\), see Eq. (14), and therefore the total energy at the two hole-line level is given by
\\[E_{2}\\!\\!=\\,\\sum_{k<k_{F}}\\frac{\\hbar^{2}{\\bf k}^{2}}{2m}\\,+\\,\\frac{1}{2}\\sum_{k _{1},k_{2}<k_{F}}\\langle k_{1}k_{2}|G(e_{k_{1}}+e_{k_{2}})|k_{1}k_{2}\\rangle_{A}\\quad. \\tag{15}\\]
One has to keep in mind that the \\(G\\) matrix depends now on \\(U\\), since the auxiliary potential appears in the definition of the single particle energies \\(e_{k}\\). The appearance of the unperturbed kinetic energy is valid for any choice of the auxiliary potential and it is not modified by the addition of the higher order terms in the expansion. It is a distinctive feature of the Goldstone expansion that all correlations modify only the interaction part and leave the kinetic energy unchanged. Of course this property is pertinent only to the expression of the ground state energy.
It is time now to discuss the choice of the auxiliary potential. A good choice of \(U\) should minimize the contributions from higher order correlations, i.e. the contributions of the diagrams with three or more hole lines. In other words, the \(U\) insertion diagrams must counterbalance the diagrams with no \(U\) insertion. An exact cancellation is of course not possible; however, one can select some graphs which are expected to be large and try to cancel them exactly. At the three hole-line level, one of the largest contributions is expected to come from the graph of Fig. 10a. In this diagram the symbol already introduced for the \(G\) matrix stands for the corresponding ladder summation inside the diagram. This can be done systematically along the expansion, but one has to be careful in checking the energy at which the \(G\) matrix has to be calculated, i.e. whether it is on shell or off shell. It has been shown by Bethe, Brandow and Petschek (1962) that it is possible to choose \(U\) in such a way that the corresponding potential insertion diagram, shown in Fig. 10b, cancels out the (hole) "bubble diagram" of Fig. 10a. This is
Figure 8: Representation of a potential insertion factor.
Figure 9: The first potential insertion diagram.
indeed possible by virtue of the so-called BBP theorem established by those authors, which states that the \(G\) matrix connected with the bubble in the diagram of Fig. 10a must be calculated on the energy shell, namely at \(\omega=e_{k_{1}}+e_{k_{2}}\). For the other two \(G\) matrices appearing in the diagram this property is also valid, but there it is a trivial consequence of the theorem. Therefore, if one adopts for the auxiliary potential the choice
\\[U(k)\\,=\\,\\sum_{k^{\\prime}<k_{F}}\\langle kk^{\\prime}|G(e_{k_{1}}+e_{k_{2}})|kk^{ \\prime}\\rangle\\quad, \\tag{16}\\]
it is straightforward to see that the diagram of Fig. 10b is equal to minus the diagram of Fig. 10a (recall rule _iv - bis_).
The choice of Eq. (16) for \(U\) was originally devised by Brueckner on the basis of physical considerations, and it is therefore called the Brueckner potential; it implies a self-consistent determination of \(U\), since, as already mentioned, the \(G\) matrix itself depends on \(U\). The hole expansion with the Brueckner choice for \(U\) is called the Bethe-Brueckner-Goldstone (BBG) expansion.
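Operationally, Eqs. (12) and (16) define a fixed-point problem: \(U\) determines the spectrum \(e_{k}\) entering \(G\), and the diagonal \(G\) matrix elements determine \(U\) in turn. A schematic sketch of this self-consistency loop follows; the routine `G_diag`, which stands for a real solver of Eq. (12) with spectrum \(t_{k}+U(k)\), is replaced by a toy attractive matrix element so that the example is self-contained, and the Fermi-sea integration measure is only roughly handled.

```python
import numpy as np

# Schematic Brueckner self-consistency loop for Eq. (16):
# U(k) = sum_{k'<kF} <k k'|G(e_k + e_k')|k k'>, with the G matrix itself
# depending on U through the spectrum e_k = t_k + U(k).  `G_diag` is a
# stand-in for a real solver of Eq. (12); its form and all numbers are toys.

hbar2_m, kF = 41.44, 1.36
kgrid = np.linspace(0.0, 3.0 * kF, 61)
holes = kgrid[kgrid < kF]
dh = kgrid[1] - kgrid[0]

def G_diag(k, kp, omega):
    """Placeholder diagonal in-medium G-matrix element (MeV fm^3)."""
    return -60.0 / (1.0 + (k - kp)**2) * (1.0 + 0.002 * omega)

def update_U(U, choice="continuous"):
    e = 0.5 * hbar2_m * kgrid**2 + U          # spectrum e_k = t_k + U(k)
    ehole = e[kgrid < kF]
    Unew = np.zeros_like(U)
    for i, k in enumerate(kgrid):
        if choice == "gap" and k > kF:
            continue                          # gap choice: U = 0 above kF
        integrand = G_diag(k, holes, e[i] + ehole) * holes**2
        Unew[i] = integrand.sum() * dh / np.pi**2   # rough Fermi-sea measure
    return Unew

U = np.zeros_like(kgrid)
for it in range(200):
    Unew = update_U(U)
    if np.max(np.abs(Unew - U)) < 1e-3:
        break
    U = 0.5 * (U + Unew)                      # damped iteration for stability
print("U(k=0) =", round(U[0], 2), "MeV after", it + 1, "iterations")
```

Once converged, the same diagonal \(G\) matrix elements give the ground state energy of Eq. (15).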
In the original Brueckner theory the potential \\(U\\) was assumed to be zero above \\(k_{F}\\). This is called the \"standard choice\", or \"gap choice\", since it necessarily implies that the single particle energy \\(e_{k}\\) is discontinuous at \\(k=k_{F}\\). This choice also implies that the potential insertion diagram of Fig. 11b is automatically zero. The corresponding diagram, with the \\(G\\) matrix replacing the auxiliary potential, depicted in Fig. 11a, is therefore in no way counterbalanced. The \\(G\\) matrix in this diagram is off shell. In fact, the BBP theorem does not hold for it. The graph of Fig. 11a is usually referred to also as the particle bubble diagram, or simply \"bubble diagram\", and in the following this terminology is adopted.
Another possible choice for the auxiliary potential \\(U(k)\\) is the so called \"continuous choice\", where \\(U(k)\\) is defined by Eq. (16) for all values of \\(|{\\bf k}|\\). In this case the potential is continuous through the Fermi surface and \\(e({\\bf k})\\) can be interpreted as a single particle spectrum. Furthermore the two diagrams of Fig. 11 can have some degree of compensation, as we will see in the applications. Since the final results must be independent of the choice of the auxiliary potential, the sensitivity of the results to \\(U(k)\\) at a given order of the expansion can be used as a criterion for the degree of
Figure 10: One of the lowest order (in the G-matrix) three hole-line diagrams (a) and the corresponding potential insertion diagram (b).
convergence reached at that level of approximation. No sensitivity would correspond to a complete convergence.
The bubble diagram of Fig. 11a can be considered the first term of the full set of three hole-line diagrams. The two hole-line diagrams have been summed up by introducing the two-body \(G\) matrix, which is the generalization to the nuclear medium of the two-body scattering matrix in free space. From Eq. (12) it is apparent that the only differences between the \(G\) matrix and the free space scattering matrix are the presence of the "Pauli operator" \(Q(k_{1},k_{2})=(1-\Theta_{F}(k_{1}))(1-\Theta_{F}(k_{2}))\), with \(\Theta_{F}(k)\) the (zero temperature) Fermi distribution, and the appearance of the energies \(e_{k}\) in place of the kinetic energies. This has far-reaching consequences.
It is therefore conceivable that the three hole-line diagrams could be summed up by introducing a similar generalization of the scattering matrix for three particles in free space, which would correspond physically to considering the contribution of the three-body clusters. The three-body scattering problem has a long history of its own, and was given a formal solution by Fadeev (1965). For three distinguishable particles the three-body scattering matrix \(T^{(3)}\) is expressed as the sum of three other scattering matrices, \(T^{(3)}=T_{1}+T_{2}+T_{3}\). The scattering matrices \(T_{i}\) satisfy a system of three coupled integral equations. The kernel of this set of integral equations contains explicitly the two-body scattering matrices pertaining to each possible pair of particles. Also in this case, therefore, the original two-particle interaction disappears from the equations in favor of the two-body scattering matrix. The formal reason for this substitution is the need to avoid "disconnected processes", which introduce spurious singularities in the equations (Fadeev 1965). For identical particles the three integral equations reduce to one, because of symmetry. In fact, the three functions \(T_{i}\) must coincide, within a change of variables, with a unique function, which we can still call \(T^{(3)}\). The analogous equation and scattering matrix in the case of nuclear matter (or other many-body systems in general) has been introduced by Rajaraman and Bethe (1967). The integral equation,
Figure 11: Particle bubble diagram (a) and the corresponding potential insertion diagram (b).
the Bethe-Fadeev equation, reads schematically
\\[T^{(3)}\\ =\\ G\\ +\\ G\\ X\\ \\frac{Q_{3}}{e}\\ T^{(3)}\\]
\\[\\langle k_{1}k_{2}k_{3}|T^{(3)}|k_{1}^{\\prime}k_{2}^{\\prime}k_{3}^{\\prime}\\rangle = \\langle k_{1}k_{2}|G|k_{1}^{\\prime}k_{2}^{\\prime}\\rangle\\delta_{K}( k_{3}-k_{3}^{\\prime})+ \\tag{17}\\] \\[+\\ \\langle k_{1}k_{2}k_{3}|G_{12}\\,X\\,\\frac{Q_{3}}{e}\\,T^{(3)}|k_ {1}^{\\prime}k_{2}^{\\prime}k_{3}^{\\prime}\\rangle\\quad.\\]
The factor \\(Q_{3}/e\\) is the analogous of the similar factor appearing in the integral equation for the two-body scattering matrix \\(G\\), see Eq. (12). Therefore, the projection operator \\(Q_{3}\\) imposes that all the three particle states lie above the Fermi energy, and the denominator \\(e\\) is the appropriate energy denominator, namely the energy of the three-particle intermediate state minus the entry energy \\(\\omega\\), in close analogy with the equation for the two-body scattering matrix \\(G\\), Eq. (12). The real novelty with respect to the two-body case is the operator \\(X\\). This operator interchanges particle 3 with particle 1 and with particle 2, \\(X=P_{123}+P_{132}\\), where \\(P\\) indicates the operation of cyclic permutation of its indices. It gives rise to the so-called \"endemic factor\" in the Fadeev equations, since it is an unavoidable complication intrinsic to the three-body problem in general. The reason for the appearance of the operator \\(X\\) in this context is that no two successive \\(G\\) matrices can be present in the same pair of particle lines, since the \\(G\\) matrix already sums up all the two-body ladder processes. In other words, the \\(G\\) matrices must alternate from one pair of particle lines to another, in all possible ways, as it is indeed apparent from the expansion by iteration of Eq. (17), which is represented in Fig. 12.
Therefore, both cyclic operations are necessary in order to include all possible processes. In the structure of Eq. (17) the third particle, with initial momentum \\(k_{3}\\), is somehow singled out from the other two. This choice is arbitrary, but it is done in view of the use of the Bethe-Fadeev equation within the BBG expansion.
In order to see how the introduction of the three-body scattering matrix \\(T^{(3)}\\) allows
Figure 12: Expansion of the Bethe-Fadeev integral equation.
one to sum up the three hole line diagrams, we first notice, following Day (1981), that this set of diagrams can be divided into two distinct groups. The first one includes the graphs where two hole lines, out of three, originate at the first interaction of the graph and terminate at the last one without any further interaction in between. Schematically the sum of this group of diagrams can be represented as in Fig. 13a. The third hole line has been explicitly indicated, out from the rest of the diagram. The remaining part of the diagram describes the scattering, in all possible ways, of three particle lines, since no further hole line must be present in the diagram. This part of the diagram is indeed the three-body scattering matrix \\(T^{(3)}\\), and the operator \\(Q_{3}\\) in Eq. (17) ensures, as already mentioned, that only particle lines are included.
The second group includes the diagrams where two of the hole lines enter their second interaction at two different vertices in the diagram, as represented in Fig. 13b. Again the remaining part of the diagram is \(T^{(3)}\), i.e. the sum of the amplitudes for all possible scattering processes of three particles. It is easily seen that no other structure is possible. The set of diagrams indicated in Fig. 13b can be obtained from the ones of Fig. 13a by simply interchanging the final (or initial) point of one of the "undisturbed" hole lines with the final (or initial) point of the third hole line. This means that one can obtain each graph of the group depicted in Fig. 13b by acting with the operator \(X\) on the bottom of the corresponding graph of Fig. 13a. In this sense the diagrams of Fig. 13b can be considered the "exchange" diagrams of the ones in Fig. 13a (not to be confused with the term "exchange" previously introduced for the matrix elements of \(G\)). If one inserts the terms obtained by iterating Eq. (17) inside these diagrams, in place of the scattering matrix \(T^{(3)}\) (the box in Fig. 13), the first diagram, coming from the inhomogeneous term in Eq. (17), is just the bubble diagram of Fig. 11a. The corresponding exchange diagram is the so-called "ring diagram", reported in Fig. 14. It turns out that for numerical reasons it is convenient to separate both bubble and ring diagrams from the rest of the three hole-line diagrams, which will be conventionally indicated as "higher" diagrams.
Indeed, going on with the iterations, one gets sets of diagrams as the ones depicted in Figs. 15, and so on.
Figure 13: Schematic representation of the direct (a) and exchange (b) three hole-line diagrams.
To these series of diagrams one has, of course, to add the diagrams obtained by introducing the exchange matrix elements of \(G\) in place of the direct ones (whenever they really produce a new diagram). The structure of the diagrams of Fig. 15 indeed displays the successive three-particle scattering processes.
Let us notice that the graph of Fig. 10a, where the bubble is attached to the hole line, is not included, and it has to be added separately, as previously discussed in connection with the \\(U\\) insertion diagrams.
Some ambiguity arises as to whether the diagram of Fig. 16 should be included at the three hole-line level. The diagram is usually referred to as the "hole-hole" diagram, for obvious reasons. Although, due to momentum conservation, only three hole lines are independent, we will consider this particular diagram as belonging to the four hole-line class.
For writing down explicitly the three hole-line contribution to the total energy we
Figure 14: The ring diagram, belonging the set of three hole-line diagrams.
Figure 15: The series of three hole-line diagrams, up to fifth order in the \\(G\\)-matrix.
still need to find out the correct symmetry factors and signs. Let us first consider the part of the diagram which describes the interaction among the three particle lines. In the scattering processes each two-body \(G\) matrix can involve both the direct and the exchange term, as illustrated in Fig. 17. Hence, no additional symmetry factor is involved: the three lines are never equivalent, since the various \(G\) matrices alternate among the different possible pairs of particles along the diagram. Therefore, the direct and exchange matrix elements of each \(G\) matrix have to be considered, and no symmetry factor for this part of the diagram has to be introduced. Let us now consider the hole lines which close the diagram. For the diagrams of the type of Fig. 13a, two equivalent hole lines appear, joining the first and the last interaction. As discussed previously, this implies the introduction of a symmetry factor \(\frac{1}{2}\) in front of each diagram belonging to this group. In conclusion, the explicit expression for the contribution of the whole set of diagrams of Fig. 13a (the "direct" diagrams) can be written
\\[E^{dir}_{3h}=\\frac{1}{2}\\sum_{k_{1},k_{2},k_{3}\\leq k_{F}}\\sum_{\\{k^{\\prime}\\}, \\{k^{\\prime\\prime}\\}\\geq k_{F}}\\langle k_{1}k_{2}|G|k^{\\prime}_{1}k^{\\prime}_ {2}\\rangle_{A}\\]
\\[\\cdot\\frac{1}{e}\\ \\langle k^{\\prime}_{1}k^{\\prime}_{2}k^{\\prime}_{3}|XT^{(3)}X| k^{\\prime\\prime}_{1}k^{\\prime\\prime}_{2}k^{\\prime\\prime}_{3}\\rangle\\ \\frac{1}{e^{\\prime}}\\ \\langle k^{\\prime\\prime}_{1}k^{\\prime\\prime}_{2}|G|k_{1}k_{2}\\rangle_{A}\\ \\ \\, \\tag{18}\\]
where again the operators \(X\) are introduced in order to generate all possible scattering processes, with the condition that the \(G\) matrices alternate, from one interaction to the next, among the possible pairs out of the three particle lines. In Eq. (18) the denominator is \(e=e_{k^{\prime}_{1}}+e_{k^{\prime}_{2}}-e_{k_{1}}-e_{k_{2}}\), and analogously \(e^{\prime}=e_{k^{\prime\prime}_{1}}+e_{k^{\prime\prime}_{2}}-e_{k_{1}}-e_{k_{2}}\).
Let us now consider the exchange diagrams of Fig. 13b. They can be obtained by
Figure 16: The hole-hole diagram. It is not included in the Bethe-Fadeev equation.
Figure 17: Direct and exchange contribution within the three hole-line diagrams.
interchanging the initial point (or end point) of the hole line labeled \(k_{3}\) with that of the hole line \(k_{2}\), i.e. by multiplying the expression of Eq. (18) by \(P_{123}\) (on the right or on the left). In this case, however, we have to omit the symmetry factor \(\frac{1}{2}\), since no pair of equivalent lines appears any more. We could equally well interchange \(k_{3}\) with \(k_{1}\), since in this way we actually consider the same set of diagrams. This can be readily checked by displaying explicitly the sets of associated diagrams in the two cases. It is then convenient to take the average of the two possibilities, which is equivalent to multiplying the expression of Eq. (18) by \(P_{123}+P_{132}\equiv X\) and reintroducing the factor \(\frac{1}{2}\). In summary, the entire set of three hole-line diagrams can be obtained by multiplying the expression of Eq. (18) by \(1+X\).
It is convenient in Eqs. (17) and (18) to single out the first interaction which occurs in \(T^{(3)}\), where the third hole line must originate. Setting \(T^{(3)}=GD\), or, explicitly
\[\langle k_{1}k_{2}k_{3}|T^{(3)}|k_{1}^{\prime}k_{2}^{\prime}k_{3}^{\prime}\rangle\,=\,\sum_{k_{1}^{\prime\prime},k_{2}^{\prime\prime}}\langle k_{1}k_{2}|G|k_{1}^{\prime\prime}k_{2}^{\prime\prime}\rangle\langle k_{1}^{\prime\prime}k_{2}^{\prime\prime}k_{3}|D|k_{1}^{\prime}k_{2}^{\prime}k_{3}^{\prime}\rangle\quad, \tag{19}\]
then the matrix \\(D\\) satisfies the formal equation
\\[D\\,=\\,1\\,-\\,X\\frac{Q_{3}}{e_{3}}GD\\quad. \\tag{20}\\]
Notice that, contrary to the \(G\) matrices appearing in Eq. (13), the \(G\) matrix appearing in Eq. (20) is off the energy shell, since the denominators which enter its definition contain the energies of the hole lines \(k_{1},k_{2},k_{3}\), as well as of the third particle line. The denominator \(e_{3}\) is the energy of the appropriate three particle-three hole intermediate state.
Summarizing, the three hole-line contribution can be obtained by solving the integral equation (20) and inserting the solution into Eq. (18). Notice that the solution \(D\) depends parametrically on the momenta of the three external hole lines.
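In a discretized three-body basis, Eq. (20) is again a linear problem, and its iteration reproduces term by term the alternating ladder structure of Fig. 12. The toy sketch below, with a random matrix standing in for the physical kernel \(X(Q_{3}/e_{3})G\), simply illustrates the equivalence between the direct solution and the iterated series.

```python
import numpy as np

# Toy illustration of Eq. (20), D = 1 - K D with kernel K = X (Q3/e3) G:
# the direct solution (1 + K) D = 1 is compared with the iterated series
# D = 1 - K + K^2 - ..., which generates the diagrams of Fig. 12 order by
# order.  K is a random stand-in for the physical kernel (assumption).

rng = np.random.default_rng(0)
n = 50                                   # size of a truncated three-body basis
K = 0.1 * rng.standard_normal((n, n))    # toy kernel with spectral radius < 1

D_direct = np.linalg.solve(np.eye(n) + K, np.eye(n))

D_iter, term = np.eye(n), np.eye(n)
for order in range(60):                  # one power of K per iteration
    term = -K @ term
    D_iter += term

print("max deviation:", np.abs(D_direct - D_iter).max())   # ~1e-12 or less
```

In actual calculations the kernel is built channel by channel from the off-shell \(G\) matrix, and the solve must be repeated for each set of external hole momenta.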
The scattering matrix \\(T^{(3)}\\) (or equivalently \\(D\\)) can be used as the building block for the construction of the irreducible four-body scattering matrix \\(T^{(4)}\\), in an analogous way as the two-body scattering matrix \\(G\\) has been used to construct \\(T^{(3)}\\). The resulting equations for \\(T^{(4)}\\), in the case of four particles in free space, are called Yakubovsky equations. It is not difficult to imagine that additional complexities are involved in these equations. Since nobody till now has dared to write down these equations for nuclear matter, not to say to solve them, we will not discuss their structure. However, estimates of the four-hole lines contribution have been considered (Day 1981).
## 3 Nuclear matter within the BBG expansion.
Before summarizing the theoretical results for the EoS at zero temperature on the basis of the BBG expansion, let us briefly analyze in more detail the properties of the scattering matrix \\(G\\). As already mentioned, the \\(G\\) matrix can be considered as the in medium two-body scattering matrix. This can be more clearly seen by introducing the two body scattering wave function \\(\\Psi_{k_{1},k_{2}}\\) in analogy to the case of free space scattering (Newton 1966)
\\[|\\Psi_{k_{1},k_{2}}\\rangle\\,=\\,|k_{1}k_{2}\\rangle\\,+\\,\\frac{Q}{e}G|k_{1}k_{2} \\rangle\\,=\\,|k_{1}k_{2}\\rangle\\,+\\,\\frac{Q}{e}v|\\Psi_{k_{1},k_{2}}\\rangle\\quad, \\tag{21}\\]
where we have used the relationship \(G|k_{1}k_{2}\rangle=v|\Psi_{k_{1},k_{2}}\rangle\). The latter is obtained by multiplying by \(v\) the first of Eqs. (21), which defines the scattering wave function \(|\Psi\rangle\), and making use of the integral equation (12) for the \(G\) matrix. It is instructive to look more closely at the scattering wave function in coordinate representation. The centre of mass motion separates, since the total momentum \({\bf P}\) is a constant of the motion. In the notation of Eq. (10), the integral equation for the scattering wave function, in the relative coordinate, reads
\\[\\psi({\\bf r})\\,=\\,e^{i{\\bf kr}}\\,+\\,\\int d^{3}r^{\\prime}({\\bf r}|\\frac{Q}{e}|{ \\bf r}^{\\prime})v({\\bf r})\\psi({\\bf r}^{\\prime})\\quad, \\tag{22}\\]
where, for simplicity, the spin-isospin indices have been suppressed and the NN interaction has been assumed to be local. Still the wave function \(\psi\) depends on both the total momentum \({\bf P}\) and the entry energy \(\omega\). The latter appears in the denominator \(e\), see Eq. (12), while the total momentum \({\bf P}\) appears also in the Pauli operator \(Q\). The kernel \(\frac{Q}{e}\) in Eq. (22) is the same as in the usual theory of two-body scattering (Newton 1966), except for the Pauli operator \(Q\), which has deep consequences for the properties of \(\psi\). This can be most easily seen if one considers the case \(P=0\) and an entry energy corresponding to two particles inside the Fermi sphere, \(\omega<2E_{F}\). In this case the Pauli operator simply implies that the relative momentum \(|{\bf k}^{\prime}|>k_{F}\), and the kernel reads, after a little algebra
\\[({\\bf r}|\\frac{Q}{e}|{\\bf r}^{\\prime})\\,=\\,\\frac{1}{2\\pi^{2}}\\int_{k_{F}}^{ \\infty}\\frac{k^{\\prime}dk^{\\prime}}{2e_{k^{\\prime}}\\,-\\,\\omega}\\frac{\\sin k^{ \\prime}|{\\bf r}-{\\bf r}^{\\prime}|}{|{\\bf r}-{\\bf r}^{\\prime}|}\\quad, \\tag{23}\\]
where the energy denominator never vanishes (provided the energy \(e_{k^{\prime}}\) is an increasing function of \(k^{\prime}\), as always happens in practice). In the usual scattering theory (Newton 1966), on the contrary, the denominator can vanish and the integral over \(k^{\prime}\) yields the free one-particle Green's function (according to the chosen boundary conditions). Then, for large values of \(r\), one gets the usual asymptotic behaviour (for outgoing boundary conditions)
\\[\\psi({\\bf r})\\,-\\,e^{i{\\bf kr}}\\sim f(\\theta)\\frac{e^{ikr}}{r}\\quad, \\tag{24}\\]
which describes an outgoing spherical wave and therefore a flux of scattered particles. Here \\(\\theta\\) is the angle between \\({\\bf r}\\) and the initial momentum \\({\\bf k}\\). The asymptotic behaviour of the kernel of Eq. (22) can be obtained by a first partial integration with respect to the sine function
\\[({\\bf r}|\\frac{Q}{e}|{\\bf r}^{\\prime})\\,=\\,\\frac{1}{2\\pi^{2}}\\frac{k_{F}}{2e_ {k_{F}}\\,-\\,\\omega}\\frac{\\cos k_{F}|{\\bf r}-{\\bf r}^{\\prime}|}{|{\\bf r}-{\\bf r }^{\\prime}|^{2}}\\,+\\,O(|{\\bf r}-{\\bf r}^{\\prime}|^{-3})\\quad, \\tag{25}\\]
since further partial integrations give higher inverse powers of \(|{\bf r}-{\bf r}^{\prime}|\) (here the non-vanishing of the energy denominator is essential). The asymptotic behaviour of \(\psi({\bf r})\)
\\[\\psi({\\bf r})\\,-\\,e^{i{\\bf k}\\cdot{\\bf r}}\\sim\\frac{\\cos k_{F}r}{r^{2}}\\quad. \\tag{26}\\]
In this case the scattered flux vanishes at large distance, since the scattered wave falls off faster than \(1/r\), and no real scattering actually occurs. The scattering wave function \(\psi({\bf r})\) indeed merges, at large distance \(r\), into the two-body relative wave function of Eq. (10) for a gas of free particles. In the language of scattering theory this means that all the phase shifts are zero. This property is usually called the "re-phasing" of the function \(\psi\). The two-body wave function does not describe a scattering process but rather the distortion of the two-body relative motion, due to the interaction, with respect to the free gas case. Since the interaction is assumed to be of short range, this distortion is concentrated at short distance, mainly inside the repulsive core region, but also slightly outside it (due to the attractive part of the interaction and to quantal effects).
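The power-law falloff of Eq. (25), which replaces the outgoing wave of ordinary scattering theory, can be checked numerically: assuming a free spectrum \(e_{k}=\hbar^{2}k^{2}/2m\) and an entry energy below \(2e_{k_{F}}\) (both assumptions of this sketch), the kernel of Eq. (23) rapidly approaches its leading boundary term.

```python
import numpy as np
from scipy.integrate import simpson

# Numerical check of Eq. (25): the kernel (23) at P = 0, free spectrum,
# omega < 2 e(kF), compared with its leading large-R boundary term.

hbar2_m, kF = 41.44, 1.36
omega = 0.8 * hbar2_m * kF**2              # below 2 e(kF): denominator > 0

k = np.linspace(kF, 200.0, 800001)         # fine grid for the oscillatory tail
for R in (5.0, 10.0, 20.0):
    integrand = k * np.sin(k * R) / (hbar2_m * k**2 - omega)
    exact = simpson(integrand, x=k) / (2 * np.pi**2 * R)
    lead = kF * np.cos(kF * R) / ((hbar2_m * kF**2 - omega)
                                  * 2 * np.pi**2 * R**2)
    print(f"R = {R:5.1f}:  kernel = {exact: .3e}   leading term = {lead: .3e}")
```

The \(\cos k_{F}R/R^{2}\) modulation, with no outgoing \(e^{ikr}/r\) component, is the numerical face of the re-phasing just discussed.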
It has to be stressed that for entry energies corresponding to two particles above the Fermi sphere, the two-body wave function \(\psi({\bf r})\) can still be defined, as well as the scattering \(G\) matrix, and this is indeed necessary for the continuous choice of the auxiliary potential \(U\). In this case \(\psi({\bf r})\) can describe a real scattering (the energy denominator can vanish), namely a collision process of two particles inside nuclear matter, provided the correct boundary condition is imposed. This is always the case when the two initial momenta lie above the Fermi sphere.
Let us go back to the case of two particles inside the Fermi sphere. The difference between the wave function \(\psi({\bf r})\) and the corresponding free wave function \(\exp(i{\bf k}\cdot{\bf r})\) of Eq. (10), already introduced, is called the "defect function" \(\zeta_{k_{1},k_{2}}\). The size of the defect function is a measure of the two-body correlations present in the system. As a
Figure 18: Two-body relative wave-function for the free nucleon gas (full line) and for correlated nuclear matter (dots).
more quantitative parameter one can take the norm of the defect function, averaged over the Fermi sphere and calculated inside the available volume per particle. The parameter is usually called the "wound parameter", since it describes the "wound" produced in the wave function by the NN correlations; it is a more refined version of the parameter \(p\) previously introduced in discussing the hole expansion. Since the repulsive core is expected to have the dominant effect in nuclear matter, and it is of short range, the \(s\)-wave component of \(\psi({\bf r})\) should be the most affected one, and the corresponding defect function the largest. This is indeed the case, as shown in Fig. 18, where the function \(\psi({\bf r})\) in the \({}^{1}S_{0}\) channel (Eqs. (21) and (22) can easily be written in the spin-isospin coupled representation) is reported (dots) in comparison with the corresponding free relative wave function (full line). The calculation has been done in the continuous choice and at saturation density, \(k_{F}=1.36\,fm^{-1}\). The initial relative momentum was chosen at \(q=0.1\,fm^{-1}\) (the value exactly at \(q=0\) can create numerical problems). One can see that the distortion of the free wave function is concentrated at small \(r\) (recall that it is the square of the wave function which matters). One can notice the striking similarity with the naive guess of Fig. 2. At distances larger than the core radius the correlated wave function oscillates slightly around the uncorrelated one, an effect mostly due to the long-distance attractive component of the NN interaction.
The short distance correlation is expected to decrease for higher partial waves. It should also be affected by the initial relative momentum. Both effects are shown in Fig. 19, where the two-body wave function at relative momentum \\(q=k_{F}\\) is reported for the \\({}^{1}S_{0}\\),\\({}^{1}P_{1}\\) and \\({}^{1}D_{2}\\) channels.
The \"healing effect\" is apparent in all these cases, the two-body relative wave function is strongly suppressed at short distance. The corresponding \"healing distance\", the size of the interval where the suppres
Figure 19: The same as in Fig. 18, but at relative momentum \\(q=k_{F}\\) and for the three channels \\({}^{1}S_{0},\\ ^{1}P_{1}\\) and \\({}^{1}D_{2}\\). The first peak of the wave function is decreasing at increasing values of the partial wave \\(l=0,1,2\\).
to estimate the wound parameter, as discussed above. Values of this parameter are about 0.2-0.25 around saturation density for symmetric nuclear matter, which indicates a moderate rate of convergence. Even at densities of a few times the saturation value the wound parameter does not exceed 0.3-0.35. In pure neutron matter the wound parameter turns out to be smaller by about a factor of 2 in the same density range, and convergence should be much better.
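The quoted numbers can be reproduced at the back-of-the-envelope level by integrating a model defect function over the available volume; the Gaussian suppression factor and the healing scale \(b\approx 0.9\,fm\) used below are assumptions chosen to mimic Fig. 18.

```python
import numpy as np
from scipy.integrate import simpson
from scipy.special import spherical_jn

# Back-of-the-envelope wound parameter: kappa ~ rho * int |zeta(r)|^2 d^3r,
# with a model defect function zeta = (f - 1) j0(qr) built from the
# short-range suppression f(r) = 1 - exp(-(r/b)^2).  The healing scale b
# is an assumption chosen to mimic Fig. 18; everything is illustrative.

rho, b, q = 0.16, 0.9, 0.1                # fm^-3, fm, fm^-1
r = np.linspace(1e-4, 10.0, 4000)
zeta = -np.exp(-(r / b)**2) * spherical_jn(0, q * r)
kappa = rho * simpson(4 * np.pi * r**2 * zeta**2, x=r)
print("wound parameter ~", round(kappa, 3))  # ~0.23, cf. the quoted 0.2-0.25
```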
It has to be stressed, however, that the reduction of weight obtained by adding a hole line to a set of diagrams of the BBG expansion does not depend only on the probability of finding two particles at short distance, but also on the action of the NN potential on the defect function \(\zeta\). Since the introduction of the scattering \(G\) matrix should take care, to a large extent, of the short range correlations due to the repulsive core, the higher order correlations should contain a more balanced contribution from the repulsive and attractive parts of the interaction, and therefore a strong compensation between attractive and repulsive contributions to the expansion. With some degree of optimism, one can hope that the expansion rate could be even better than the one guessed from the value of the wound parameter.
This expectation is indeed confirmed by actual calculations of the three hole-line contributions. The results of Baldo et al. (2001) are reported in Fig. 20 for the Argonne v\\({}_{18}\\) NN potential (Wiringa et al. 1995), and symmetric nuclear matter, both for the gap and for the continuous choice. The full lines correspond to the Brueckner two hole-line level of approximation (Brueckner-Hartree-Fock or BHF), while the symbols indicate results obtained adding the three hole-line contributions.
Two conclusions can be drawn from these results.
Figure 20: Equation of state of symmetric nuclear matter at the two hole–line level (full lines) in the gap (BHF–G) and in the continuous choice (BHF–C) of the single particle potential. The symbols label the corresponding EoS when the three hole–line contributions are added.
i) At the Brueckner level the gap and continuous choices still differ by a few MeV, which indicates that the results depend to some extent on the choice of the auxiliary potential. According to the discussion above, this implies that the expansion has not yet reached full convergence. On the contrary, when the three hole-line diagrams are added, the results with the different choices of the single particle potential \(U\) are quite close, which is surely an indication that the expansion has reached a good degree of convergence. Notice that the insensitivity to the choice of \(U\) holds over a wide range of density; only at the highest densities does some discrepancy start to appear. One can see that even at 4-5 times saturation density the BBG expansion can be considered reliable.
ii) As already discussed, the auxiliary potential is crucial for the convergence of the BBG expansion. It is not surprising, therefore, that the rate of convergence is dependent on the particular choice of \\(U\\). From the results it appears that the continuous choice is an optimal one, since the three hole-line corrections are much smaller and negligible in first approximation.
It is important to stress that the smallness of the three hole-line corrections is the result of a strong cancellation among the contributions of the different diagrams discussed above. This is illustrated in Fig. 21, where the values of the bubble (Fig. 11a), ring (Fig. 14), \(U\)-potential insertion (Fig. 11b) and higher order diagrams are reported.
This shows clearly the relevance of grouping the diagrams according to the number of hole lines, in agreement with the BBG expansion. An ordering of the diagrams according to e.g. the number of \\(G\\)-matrices involved would be badly divergent.
Similar results are obtained for pure neutron matter, as illustrated in Fig. 22, taken from Baldo et al. (2000). Notice that, in agreement with the previous discussion on the wound parameter, the rate of convergence looks faster in this case.
Figure 21: The contributions of different three hole–line diagrams to the interaction energy in symmetric nuclear matter. The curve “bub–U” gives the contribution of the potential insertion diagram of Fig. 11b. The line labeled “total” is the sum of all contributions.
## 4 The Coupled Cluster Method
The BBG expansion can also be obtained within the Coupled-Cluster Method (CCM) (Kummel et al. 1978), a general many-body theory which has been extensively applied to a wide variety of physical systems, both bosonic and fermionic. The connection between the CCM and the BBG expansions has been clarified by Day (1983). In the CCM one starts from a particular ansatz for the exact ground state wave function \(\Psi\) in terms of the unperturbed ground state \(\Phi\)
\\[|\\Psi\\rangle\\,=\\,e^{\\hat{S}}|\\Phi\\rangle\\ \\, \\tag{27}\\]
where the operator \(\hat{S}\) is expanded in terms of \(n\)-particle and \(n\)-hole unperturbed states
\\[\\hat{S}\\,=\\,\\sum_{n}\\sum_{k_{1}, k_{n},k^{\\prime}_{1}, k^{\\prime}_{n}} \\frac{1}{n!^{2}}\\langle k^{\\prime}_{1}, k^{\\prime}_{n}|S_{n}|k_{1}, k_{n} \\rangle a^{\\dagger}(k^{\\prime}_{1}) a^{\\dagger}(k^{\\prime}_{n})\\ \\,a(k_{n}) a(k_{1}) \\tag{28}\\]
where all the \\(k\\)'s are hole momenta, i.e. inside the Fermi sphere, and all the \\(k^{\\prime}\\)'s are particle momenta, i.e. outside the Fermi sphere. For translationally invariant systems the term \\(S_{1}\\) vanishes due to momentum conservation. The exponential form is chosen in order to include, as much as possible, only \"linked\" terms in the expansion of \\(\\hat{S}\\), in the spirit of the linked-cluster theorem discussed above. As already noticed, the unlinked diagrams can indeed be summed up by an exponential form. This form also implies the normalization \\(\\langle\\Phi|\\Psi\\rangle=1\\). The functions \\(S_{n}\\) are expected to describe the n-body correlations in the ground state. As an illustration, let us consider only \\(S_{2}\\) for simplicity and let us assume that it can be considered local in coordinate space, \\(S_{2}(r_{i}-r_{j})=\\chi_{ij}\\), where the labels \\(ij\\) include spin-isospin variables. Then the correlated ground state can
Figure 22: Equation of state of pure neutron matter at the two hole–line level (full lines) in the gap (BHFG) and in the continuous choice (BHFC) of the single particle potential. The symbols label the corresponding EoS when the three hole–line contributions are added.
be written
\[\Psi(r_{1},r_{2},\ldots)\,=\,\Pi_{i<j}f_{ij}\,\Phi(r_{1},r_{2},\ldots)\ \, \tag{29}\]
where the product runs over all possible distinct pairs of particles and \\(f_{ij}\\,=\\,\\exp(2\\chi_{ij})\\). In general, however, the functions \\(S_{n}\\) are highly non-local in coordinate space and the expression for the ground state wave function cannot be written in such a simple form.
The eigenvalue equation for the exact ground state \\(\\Psi\\) can be re-written as a (non-hermitean) eigenvalue equation for the unperturbed ground state \\(\\Phi\\) with a modified Hamiltonian, transformed according to a similarity transformation generated by \\(\\hat{S}\\)
\\[e^{-\\hat{S}}\\,H\\,e^{\\hat{S}}\\,|\\Phi\\rangle\\,=\\,E\\,|\\Phi\\rangle\\ . \\tag{30}\\]
The equations for the total energy \(E\) and for the correlation functions \(S_{n}\) can be obtained by multiplying Eq. (30) systematically by the unperturbed ground state, by two particle-two hole states, by three particle-three hole states, and so on. The multiplication by \(\langle\Phi|\) gives a particularly simple expression for the total energy. If only a two-body interaction \(V\) is present, one gets
\\[E\\,=\\,\\langle\\Phi|e^{-\\hat{S}}\\,H\\,e^{\\hat{S}}\\,|\\Phi\\rangle\\,=\\,E_{0}\\,+\\, \\langle\\Phi|\\{V+[V,\\hat{S}_{2}]_{-}\\}|\\Phi\\rangle\\ \\, \\tag{31}\\]
where \\(E_{0}\\) is the unperturbed total (kinetic) energy, while all the other terms in the expansion of the similarity transformation actually vanish. Therefore, in principle the exact total energy can be obtained from the knowledge of the exact two particle- two hole amplitude \\(S_{2}\\) only. More explicitly
\\[E\\,=\\,E_{0}\\,+\\,\\frac{1}{2}\\sum_{k_{1},k_{2}<k_{F}}\\langle k_{1}k_{2}|W_{2}|k_ {1}k_{2}\\rangle \\tag{32}\\]
where
\\[\\langle k_{1}k_{2}|W_{2}|k_{1}k_{2}\\rangle\\,=\\,\\langle k_{1}k_{2}|\\{V+VS_{2}\\} |k_{1}k_{2}\\rangle \\tag{33}\\]
Of course, the amplitude \(S_{2}\) is connected with the higher order amplitudes \(S_{3},\ldots,S_{n},\ldots\). As mentioned above, the equations linking the lowest order amplitudes with the higher ones are obtained by multiplying Eq. (30) by the unperturbed \(n\) particle-\(n\) hole bra states (\(n\geq 2\)). These equations are the constitutive "Coupled Cluster" equations, which are equivalent to the eigenvalue equation for the ground state. Approximations are obtained by truncating this chain of equations at a certain order \(m\), i.e. neglecting \(S_{n}\) for \(n>m\). The meaning of the truncation can be read off from the ansatz of Eq. (27): it amounts to considering correlated \(n\) particle-\(n\) hole components in the ground state up to \(n=m\), while higher components with \(n>m\) are just antisymmetrized products of the lower ones (note the exponential form, which produces components of arbitrarily high order).
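The content of Eqs. (27)-(31) and of the truncation can be made concrete in a minimal model in which \(\Phi\) couples to a single two particle-two hole state, so that \(\hat{S}\) reduces to one amplitude and the exponential series terminates; the 2x2 Hamiltonian below is of course only an illustrative stand-in.

```python
import numpy as np

# Minimal CCM model: |Phi> couples to a single 2p-2h state |2> at energy
# Delta with matrix element v, so S-hat = t * Adag with Adag^2 = 0 and
# e^S = 1 + t*Adag terminates.  Numbers are arbitrary illustrations.

Delta, v = 4.0, 1.5
H = np.array([[0.0, v],
              [v,   Delta]])

# amplitude equation <2|e^{-S} H e^{S}|Phi> = 0  ->  v + Delta*t - v*t**2 = 0
t = (Delta - np.sqrt(Delta**2 + 4 * v**2)) / (2 * v)   # ground-state root

E_ccm = v * t                 # Eq. (31): E = E0 + <Phi| V e^S |Phi>, E0 = 0
E_exact = np.linalg.eigvalsh(H)[0]
print(E_ccm, E_exact)         # both -0.5: the truncation at m = 2 is exact here
```

In this model the truncation at \(m=2\) is exact; in nuclear matter the neglected amplitudes \(S_{n}\) with \(n>2\) are precisely what the hole expansion organizes.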
This form of the CCM equations can also be obtained from the variational principle, i.e. by demanding that the mean value of the Hamiltonian in the ground state \(\Psi\) of Eq.
(27) is stationary under an arbitrary variation of the state vector orthogonal to \\(\\Psi\\). Such a variation can be written
\\[\\delta|\\Psi\\rangle\\,=\\,e^{-\\hat{S}^{\\dagger}}\\delta\\hat{S}e^{-\\hat{S}}|\\Psi \\rangle\\ \\, \\tag{34}\\]
where \\(\\delta\\hat{S}\\) corresponds to an arbitrary variation of the function \\(S_{n}\\) in Eq. (28). It is easily verified that such a variation is indeed orthogonal to \\(\\Psi\\). This is equivalent to take \\(\\delta\\hat{S}\\) systematically proportional to a \\(n\\) particle - \\(n\\) hole operator, for all non-zero \\(n\\), and to require that the corresponding energy variation vanishes. This set of conditions gives for the functions \\(S_{n}\\) the same CCM equations, which are therefore variational in character. The energy can be still taken from Eq. (31), but variants are possible (Navarro 2002).
However, the CCM equations, as they stand, cannot be applied to calculations in nuclear matter or in nuclei. The main correlations in nuclear systems come from the strong short range repulsive core, and this part of the NN interaction requires special treatment. The many-body wave function must take into account the strong overall repulsion which is present whenever two particles approach each other at a distance smaller than the core radius. This requirement must be incorporated systematically in the correlation functions \(S_{n}\), otherwise no truncation of the expansion would be feasible. The simplest way to proceed is to renormalize the original NN interaction, introducing an effective interaction which takes into account the two-body short range correlations from the start, so that all the remaining contributions of the expansion are expressed in terms of the renormalized, and hopefully "reduced", interaction. In the BBG expansion this is done by introducing the \(G\) matrix, and a similar procedure can be followed within the CCM scheme. Originally such a line was developed by Kummel and Luhrmann (1972), and resulted in the so-called Hard Core truncation scheme. A similar procedure has been followed recently in finite nuclei (Kowalski 2004, Dean 2004), where the \(G\) matrix, first calculated in an extended space, is then used in the CCM scheme to calculate systematically the correlations not included in the \(G\) matrix. The introduction of the \(G\) matrix of course changes the order of the diagrammatic expansion in the CCM: while the original CCM scheme treats particle-particle (short range) and particle-hole (long range) correlations on the same footing, the modified scheme shifts the long range part of the correlations to higher orders and introduces the \(G\) matrix as the effective two-body interaction in the corresponding terms of the expansion. The formal scheme along these lines was developed for nuclear matter by Day (1983).
Once this procedure is adopted, the variational property of the resulting CCM equations is of course lost, as in the case of the BBG expansion.
Furthermore, a single particle potential \(U(k)\) can also be introduced in the CCM. While the original CCM equations are formally independent of \(U(k)\) at each level of truncation, the modified equations are independent of the single particle potential only if no truncation is performed. A detailed analysis of the connection between the CCM and BBG expansions was presented by Day (1983). In the modified CCM equations, one introduces the effective interaction
\\[\\hat{W}\\,=\\,\\frac{1}{2}\\sum_{\\{k_{i}\\}}\\langle k_{1}k_{2}|V|k_{3}k_{4}\\rangle a_{ k_{1}^{\\dagger}}a_{k_{2}^{\\dagger}}^{\\dagger}\\left(e^{-\\hat{S}}a_{k_{4}}a_{k_{3}}e^{ \\hat{S}}\\right)_{c}\\ \\, \\tag{35}\\]
where the operator in parentheses, when applied to the unperturbed ground state, can produce two hole states, 3 holes and 1 particle, 4 holes and 2 particles, and so on. The subscript \(c\) indicates that, in this expansion, only the terms which do not annihilate the unperturbed ground state are retained, i.e. no \(a_{k}^{\dagger}\) with \(k<k_{F}\) or \(a_{k}\) with \(k>k_{F}\) appears. The operator \(\hat{W}\) can also be expanded in \(n\) particle-\(n\) hole operators
\\[\\hat{W}\\,=\\,\\sum_{n}\\sum_{k_{1} k_{n},k_{1}^{\\prime} k_{n}^{\\prime}}\\,\\frac {1}{n!^{2}}\\langle k_{1}^{\\prime}, k_{n}^{\\prime}|W_{n}|k_{1}, k_{n}\\rangle a ^{\\dagger}(k_{1}^{\\prime}) a^{\\dagger}(k_{n}^{\\prime})a(k_{n}) a(k_{1}) \\tag{36}\\]
in exactly the same fashion as the operator \(\hat{S}\). The functions \(W_{n}\) are related to the functions \(S_{n}\). Schematically this relation can be written
\\[W_{n}\\,=\\,V\\delta_{n,2}+VS_{n-1}+VS_{n}+\\sum_{k\\leq n-2}VS_{k}S_{n-k}\\ . \\tag{37}\\]
The modified CCM equations are obtained by the same procedure as before. The equations now involve both the functions \(W_{n}\) and the functions \(S_{n}\). Together with the relationship of Eq. (37), a closed set of equations is then obtained, which is again equivalent to the original eigenvalue problem for the ground state. The ground state energy is still given by Eq. (32), since the relation between \(W_{2}\) and \(S_{2}\), according to Eq. (37) for \(n=2\), is indeed given by Eq. (33). The truncation (the "Bochum" truncation scheme) is now performed on both \(W_{n}\) and \(S_{n}\), i.e. truncation at order \(m\) corresponds to neglecting the functions \(W_{n}\) and \(S_{n}\) for \(n>m\). If we truncate the expansion at \(m=2\), only \(W_{2}\) and \(S_{2}\) are retained. The quantity \(W_{2}\) can be readily identified with the on-shell \(G\) matrix of the BBG expansion, and the function \(S_{2}\) with the corresponding defect function. If the self-consistent single particle potential is introduced, one then gets at this level exactly the Brueckner approximation.
As already discussed, the \\(G\\)-matrix can be introduced in all the terms of the Coupled-Cluster expansion. In this case each term of the expansion coincides with one diagram in the BBG method. However, it turns out that the ordering of terms according to the modified CCM truncation scheme at increasing \\(n\\) does not coincide completely with the ordering of diagrams in the hole-line expansion, i.e. for \\(n>2\\) the CCM expansion at a given truncation \\(n\\) includes also diagrams with a number of hole lines larger than \\(n\\). In particular, the truncation at \\(n=3\\) includes also \"ring diagrams\" with an arbitrary number of hole lines, i.e. the whole series of the particle-hole ring diagrams initiated by the diagram of Fig. 14, adding more and more particle-hole bubbles. These have been shown to be small (Day 1981), and therefore the CCM can be considered equivalent to the BBG expansion up to the three hole-line level of approximation.
The Coupled Cluster method gives a new insight into the structure and meaning of the hole-line expansion according to the BBG method. In fact, as we have seen the CCM is based on the ansatz (27) for the ground state wave function, and it is likely that the same structure of the ground state is underlying the BBG expansion. At Brueckner level it is then consistent to assume that the ground state wave function is given by
\\[|\\Psi_{Bru}\\rangle\\,=\\,e^{\\hat{S}_{2}}|\\Phi\\rangle \\tag{38}\\]
with \\(S_{2}\\) the Brueckner defect function.
## 5 The variational method
The variational method for the evaluation of the ground state of many-body systems has been developed since the formulation of the quantum theory of atoms and molecules. It takes a particular form in nuclear physics because of the peculiarities of the NN interaction. The strong repulsion at short distance has been treated by introducing a Jastrow-like trial wave function, while the complexity of the NN interaction requires more elaborate correlation factors. Many excellent reviews of the variational method and of its extensive use in the determination of the nuclear matter EoS exist in the literature (Navarro et al. 2002, Pandharipande and Wiringa 1979). Here we restrict the exposition to the essential ingredients of the method, for the purpose of a formal and numerical comparison with the other methods.
In the simple case of a central interaction the trial ground state wave function is written as
\[\Psi(r_{1},r_{2},\ldots)\,=\,\Pi_{i<j}f(r_{ij})\,\Phi(r_{1},r_{2},\ldots)\,\,\,, \tag{39}\]
where \\(\\Phi\\) is the unperturbed ground state wave function, properly antisymmetrized, and the product runs over all possible distinct pairs of particles. The similarity with the wave function of Eq. (29) is apparent and indicates a definite link with BBG and CCM methods. The correlation function \\(f(r_{ij})\\) is here determined by the variational principle, i.e. by imposing that the mean value of the Hamiltonian gets a minimum (or in general stationary point)
\\[\\frac{\\delta}{\\delta f}\\frac{\\langle\\Psi|H|\\Psi\\rangle}{\\langle\\Psi|\\Psi \\rangle}\\,=\\,0\\,\\,\\,. \\tag{40}\\]
In principle this is a functional equation for the correlation function \(f\), which however can be written in closed form only if additional suitable approximations are introduced. A practical and much used method is to assume a parametrized form for \(f\) and to minimize the energy with respect to the set of parameters which constrain its shape. Since, as previously discussed, the wave function is expected to be strongly suppressed whenever two particles are at a distance smaller than the repulsive core radius of the NN interaction, the function \(f(r_{ij})\) is assumed to converge to 1 at large distance and to go rapidly to zero as \(r_{ij}\to 0\), with a shape similar to that of the correlated two-body wave function shown in Fig. 18. Furthermore, at distances just above the core radius the correlation function may slightly exceed the value 1.
For nuclear matter it is necessary to introduce a channel dependent correlation factor, which is equivalent to assuming that \(f\) is actually a two-body operator \(\hat{F}_{ij}\). One then assumes that \(\hat{F}\) can be expanded in the same spin-isospin, spin-orbit and tensor operators appearing in the NN interaction. Momentum dependent operators, like the spin-orbit one, are usually treated separately. The product in Eq. (39) must then be symmetrized, since the different terms no longer commute. The most flexible assumption on the \(F\)'s is to impose that they go to 1 at a given "healing" distance \(d\) with zero derivative. The healing distances, which can eventually be defined separately for each spin-isospin and tensor channel, are then taken as variational parameters, while the functions for \(r<d\) are determined directly from the variational procedure. In principle, the condition of energy minimum (or extremum) should produce a set of Euler-Lagrange equations which determine the correlation factors. In practice, a viable explicit form can be used only for the two-body cluster terms, as discussed below and as illustrated in the sketch that follows.
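As an illustration of the parametrized minimization just described, the sketch below evaluates the lowest-order (two-body cluster) energy for a purely central, Jastrow-type correlation \(f(r)=1-e^{-(r/b)^{2}}\) and minimizes it with respect to the healing scale \(b\); the toy potential, the use of the free Fermi gas pair function, and all numerical values are assumptions, far from a realistic channel-dependent calculation.

```python
import numpy as np
from scipy.integrate import simpson
from scipy.optimize import minimize_scalar

# Sketch of the parametrized variational procedure, Eqs. (39)-(40), at the
# two-body cluster level: a central Jastrow factor f(r) = 1 - exp(-(r/b)^2)
# is varied in its healing scale b.  The toy soft-core potential, the free
# Fermi gas pair function and all numbers are illustrative assumptions.

hbar2_m, rho = 41.44, 0.16                    # MeV fm^2, fm^-3
kF = (3 * np.pi**2 * rho / 2)**(1 / 3)        # symmetric matter, degeneracy 4
r = np.linspace(1e-3, 8.0, 2000)

def v(r):                                     # toy central potential (MeV)
    return 2000 * np.exp(-(r / 0.4)**2) - 120 * np.exp(-((r - 0.9) / 0.6)**2)

def slater(x):                                # Slater function l(kF r)
    return 3 * (np.sin(x) - x * np.cos(x)) / x**3

gF = 1 - 0.25 * slater(kF * r)**2             # spin-isospin averaged pair function

def e2(b):                                    # two-body cluster energy per particle
    f = 1 - np.exp(-(r / b)**2)
    df = np.gradient(f, r)
    integrand = (f**2 * v(r) + hbar2_m * df**2) * gF * 4 * np.pi * r**2
    return 0.5 * rho * simpson(integrand, x=r)

res = minimize_scalar(e2, bounds=(0.2, 2.0), method="bounded")
print("optimal b =", round(res.x, 3), "fm,  E2/A =", round(res.fun, 2), "MeV")
```

In a real calculation each operator channel has its own Euler-Lagrange equation and healing distance, and the energy is evaluated with the chain summations discussed below rather than at lowest order.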
If the two-body NN interaction is local and central, its mean value is directly related to the pair distribution function \\(g({\\bf r})\\)
\\[<V>\\,=\\,\\frac{1}{2}\\rho\\int d^{3}rv(r)g({\\bf r})\\ \\, \\tag{41}\\]
where
\\[g({\\bf r_{1}}-{\\bf r_{2}})\\,=\\,\\frac{\\int\\Pi_{i>2}d^{3}r_{i}|\\Psi(r_{1},r_{2} )|^{2}}{\\int\\Pi_{i}d^{3}r_{i}|\\Psi(r_{1},r_{2} )|^{2}}\\ . \\tag{42}\\]
The main job in the variational method is to relate the pair distribution function to the correlation factors \(F\). In general this cannot be done exactly, and one has to rely on some suitable expansion. For the central part of the correlations, the physical quantity which describes the main perturbation with respect to the free Fermi gas is the function \(1-F(r)^{2}=h(r)\), which is a measure of the strength of the short range part of the correlation. One can then expand the square of the correlated wave function in components with a given number of \(h\)-factors, and correspondingly the energy mean value can be expanded in terms, each containing a given number of \(h\)-functions. If the full NN interaction is considered, the non-central components of the correlation factors, \(F_{nc}\), must also be included in the expansion. In this case the smallness factors are \(F_{nc}^{2}\) and the product \(F_{nc}\cdot h\), since they are expected to be small and to vanish at large distance. The different terms can be represented graphically by diagrams, to help their classification and to identify possible cancellations. It turns out (Fantoni and Rosati 1974) that the mean value, at least in the thermodynamic limit, is the sum of the so-called "irreducible" diagrams, in strong similarity with the linked-cluster theorem of the BBG expansion. Indeed, the reducible diagrams are canceled exactly by the expansion of the denominator in Eq. (42). The problem of calculating \(g(r)\) from the ansatz of Eq. (39) also has a strong similarity with the statistical mechanics of a classical gas at finite temperature, where different methods of summing up infinite series of diagrams in the so-called "virial expansion" have been developed, noticeably the Hypernetted Chain (HNC) summation method (Leeuwven et al. 1959). These methods can be almost literally translated to the case of boson systems. With some modifications due to the different statistics, they can be extended (Fantoni and Rosati 1974) to fermion systems (FHNC), provided the correlations in Eq. (39) are taken to be central only ("Jastrow type" correlations), i.e. the correlation factors depend only on the coordinates. Unfortunately, in nuclear matter, as already mentioned, the correlations have a complicated structure due to the NN interaction, and the HNC method can be applied only within approximate schemes, like the Single Operator Chain (SOC) summation method (Pandharipande and Wiringa 1979, Lagaris and Pandharipande 1980, Lagaris and Pandharipande 1981), also called the Variational Summation Method (VSM). In the VSM only chains with a given correlation operator are considered. In general, the correlation functions are calculated at the two-body cluster level, where one gets coupled Euler-Lagrange equations for all operator channels, which can be solved exactly for the set of correlation functions \(F^{p}(r_{ij})\) at fixed values of the "healing distances". The index \(p\) labels the different two-body operators, spin-spin, spin-isospin, tensor, and so on. The VSM is then applied, keeping the same set of correlation functions, to calculate the total energy. The procedure is repeated for different values of the healing distances, and the energy minimum is searched for within this parameter space. The minimization of course automatically gives the ground state wave function. The VSM allows one to include a definite class of higher order "clusters" beyond the two-body ones.
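Equation (42) also lends itself to direct stochastic evaluation: the squared correlated wave function can be sampled with the Metropolis algorithm and the pair separations histogrammed. The sketch below ignores the antisymmetry of \(\Phi\) (a bosonic caricature with a toy Jastrow factor), so it illustrates the sampling idea rather than a quantitative FHNC check.

```python
import numpy as np

# Monte Carlo sampling of the pair distribution function, Eq. (42), for a
# Jastrow wave function.  The determinant Phi is ignored (bosonic
# caricature), so |Psi|^2 ~ prod_{i<j} f(r_ij)^2; all parameters are toys.

rng = np.random.default_rng(1)
rho, N = 0.16, 38
L = (N / rho)**(1 / 3)                    # box size, periodic boundaries

def pair_dists(x):
    d = x[:, None, :] - x[None, :, :]
    d -= L * np.round(d / L)              # minimum image convention
    r = np.sqrt((d**2).sum(-1))
    return r[np.triu_indices(N, 1)]

def logw(x):                              # log |Psi|^2 up to a constant
    f = 1.0 - np.exp(-(pair_dists(x) / 0.9)**2)
    return 2.0 * np.log(f + 1e-300).sum()

x = rng.uniform(0, L, (N, 3))
lw = logw(x)
edges = np.linspace(0.0, L / 2, 41)
hist = np.zeros(len(edges) - 1)
for step in range(30000):
    i = rng.integers(N)
    xt = x.copy()
    xt[i] = (xt[i] + rng.normal(0.0, 0.3, 3)) % L
    lwt = logw(xt)
    if np.log(rng.uniform()) < lwt - lw:  # Metropolis accept/reject
        x, lw = xt, lwt
    if step > 3000 and step % 10 == 0:
        hist += np.histogram(pair_dists(x), bins=edges)[0]
# dividing hist by the ideal-gas pair counts in each shell yields g(r)
```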
However, particle clusters in the variational method are physically quite different from those in the CCM as well as in the BBG method, where particle "clusters" are defined in terms of diagrams with a given number of hole lines, according to the hole-line expansion. This point will be discussed in the next section. Generally speaking, the cluster summations performed by the chain summations are expected to include long range correlations, while the variational procedure leading to the Euler-Lagrange equations should include mainly short range correlations. Indeed, in the low density limit the Euler-Lagrange equation reduces to the Schroedinger equation for two particles in free space.
## 6 A critical comparison
As it has been shown by Jackson et al. (1982), in the low density limit, where two-body correlations dominate, the Euler-Lagrange equations of the variational method are equivalent to the summation of the ladder diagrams of the BBG expansion, while the hypernetted chain summation is related to the ring diagram series. This result is actually valid only for boson systems, while for Fermi systems it holds approximately, only by means of a suitable averaging over entry energy and momenta of the diagrams appearing in the BBG expansion. Indeed, the correlation factors are at most state dependent in the variational approach, while in principle they should depend also on both energy and total momentum. In any case, due to these approximate links, it was suggested (Jackson et al. 1982) to use in many-body systems in general a \"parquet\" summation, where both particle-particle short range correlations and chain summations of the ring type are treated on the same footing. However, this method has never been systematically exploited in the case of nuclear matter, and therefore the approach will not be discussed here.
The most relevant difference between the BBG (or CCM) method and the variational one is the introduction of the self-consistent single particle potential, which is not explicitly introduced in the variational procedure. As already noticed, with this modification the CCM and the BBG expansions are, in general, no longer of variational character at a given level of truncation. However, at the same time a large fraction of the higher order correlations is effectively embodied in the single particle potential and the speed of convergence of the expansion is substantially improved. In the variational approach the average single particle potential is implicitly built up along the cluster expansion. It is likely that this is the reason for the slow convergence in the order of the clusters included in the chain summations (Morales et al. 2002), and also for this reason the meaning of \"clusters\" is not straightforwardly the same in the different methods.
In the variational approach three-body correlations arise as cyclic products of three two-body factors, e.g. \\(f(r_{ij})f(r_{jk})f(r_{ki})\\). This contribution has been recently (Morales et al. 2002) calculated exactly in symmetric and pure neutron matter for realistic interactions. Irreducible three-body correlations can be introduced from the start by multiplying the uncorrelated wave function not only by two-body correlation factors \\(f(r_{ij})\\) but also by three-body correlation factors \\(f_{ijk}\\), which will then include those three-body correlations which cannot be expressed as products of two-body ones. As noticed by Luhrmann (1975), this also indicates a difference with the BBG (and CCM) expansion, where the whole three-body correlations are included in the energy term generated by the Bethe-Faddeev equations.
Despite all these differences, some similarities between the methods appear to be present, while a more detailed comparison can be made only at the level of the numerical results.
In summary, the main differences between the variational and the BBG approaches can be identified as follows.
1. In the BBG method for the nuclear EoS the kinetic energy contribution is kept at its unperturbed value at all orders of the expansion, while all the correlations are embodied in the interaction energy part. This characteristic of the BBG method is not due to any approximation but to the expansion method, where the modification of the occupation numbers due to correlations is treated on the same footing and at the same order as the other correlation effects. In the variational method both kinetic and interaction parts are directly modified by the correlation factors.
2. The correlation factors introduced in the variational method are assumed to be essentially local, but usually state dependent. The corresponding (implicit) correlation factors in the BBG expansion are in general highly non-local and energy dependent, besides being state dependent.
3. In the BBG method the auxiliary single particle potential \\(U(k)\\) is introduced within the expansion in order to improve the rate of convergence. No single particle potential is introduced in the variational procedure for the calculation of the ground state energy and wave function. Of course, once the variational calculation is performed, the single particle potential can be extracted. This also should imply that the rate of convergence in terms of the order of the clusters is slower in the variational method. It was indeed shown by Morales et al. (2002) that one needs clusters at least up to order 5 to get reasonable convergence, but in principle this does not create any problem in the variational method (while in the BBG expansion it would be a disastrous difficulty). It has to be stressed anyhow that the physical meaning of \"cluster\" is quite different in the two methods, being more related to long range correlations in the variational scheme, to the short range ones in BBG.
Point 3 is probably the most relevant difference between the two methods, but in any case it is difficult to estimate to which extent each one of the listed differences can affect the resulting EoS.
The similarity and connection between the two methods can be found by interpreting on physical grounds the diagrammatic expansion used in each one of them. The two-body correlations are surely described by the lowest order diagram of the variational method, which corresponds to a factor \\(f_{ij}\\), which in turn can be related to the G-matrix, i.e. to the Brueckner approximation (with the warning of point 2). The hypernetted sums, in their various forms, should be connected with the series of ring diagrams starting from the one discussed in connection with the three hole-line diagrams (including an arbitrary number of loops). As mentioned above, the three-body correlations included in the Bethe-Faddeev equations can be related to the irreducible product of three \\(f_{ij}\\) factors. For boson systems all these connections are more stringent; for fermion systems like nuclear matter they are much less transparent and one has to rely on physical arguments.
## 7 The Equation of State from the BBG and the variational approach
The first obvious requirement any EoS must satisfy is the reproduction of the so-called \"saturation point\" (SP), extracted from the fit of the mass formula to the smooth part of the binding energy of nuclei along the stability valley. To be definite, we will take the values \\(e\\,=\\,-16\\) MeV and \\(\\rho\\,=\\,0.17\\) fm\\({}^{-3}\\) for the energy per particle and density, respectively, as defining the SP of symmetric nuclear matter. As is well known, no two-body force which fits the NN phase shifts was found to be able to reproduce the SP accurately. In early applications of Brueckner theory it was realized that the SP predicted by different phase-equivalent NN interactions lie inside the so-called \"Coester band\", after Coester et al. (1970). The band misses the phenomenological SP, even taking into account the intrinsic uncertainty coming from the extraction procedure (different mass formulae, different fit procedures, etc.). The band indicates that either the binding energy is too small but the density is correct, or the binding energy is correct but the density is too large, see Fig. 23. Furthermore the position along the band was related to the strength of the tensor forces, i.e. to the percentage of D-wave in the deuteron. Higher values of the strength corresponded to the upper part of the band. However the analysis was done in the gap choice, as discussed above. The use of the continuous choice within the Brueckner method changes the results substantially, as shown in Fig. 23. If some of the most modern _local_ forces, with different deuteron D-wave percentages, are used, the SP turns out to be restricted in this case to an \"island\", which however is still shifted with respect to the phenomenological SP. The discrepancy does not appear dramatic. Taking into account that the Brueckner approximation is the lowest order in the BBG scheme, this result is surely remarkable. Unfortunately, as shown in the previous section, higher order contributions, namely the three hole-line diagrams, do not change the nuclear EoS appreciably and the discrepancy still persists. The variational method gives results in full agreement with this conclusion. The deficiency is evidently not in the many-body treatment but in the adopted Hamiltonian. Two possible corrections can be devised : many-body forces (to be distinguished from many-body correlations), in particular three-body forces, and relativistic effects. As we will mention later, it is
Figure 23: Saturation curve (full line) for the Argonne v\\({}_{14}\\) potential (Wiringa et al. 1984) in the Brueckner approximation with the gap choice for the single particle potential. The saturation points for other NN interactions within the same approximation are indicated by open circles. They display the Coester band discussed in the text. The reported percentages indicate, for some of the interactions, the strength of the D-wave component in the deuteron. Most of the reported systematics is reproduced from Machleidt R 1989, _Adv. Nucl. Phys._ **19**, 189, where the force corresponding to each label is specified and the corresponding references are given. The big square marks the approximate empirical saturation point. The stars correspond to some results obtained within the Brueckner scheme with the continuous choice for the single particle potential, as discussed in the text.
well known that the two possible corrections are actually strongly related. Here we will consider three-body forces in some detail.
First we compare in Fig. 24 the BBG and variational EoS (Akmal et al. 1998), both for symmetric matter and pure neutron matter, without three-body forces, in order to single out the dependence of the results on the adopted many-body scheme.
Since we focus on the high density part of the EoS, i.e. above saturation density, the comparison is displayed in a wide density range. It has to be stressed that the NN phase shifts constrain the NN two-body force up to about 350 MeV in the laboratory, which corresponds to a relative momentum of about \\(k_{l}=2\\) fm\\({}^{-1}\\). Densities corresponding to values of \\(k_{F}\\) larger than \\(k_{l}\\) surely fall in the region where an extrapolation is needed and the NN force is untested. For pure neutron matter the agreement between the two theories can be considered surprisingly good up to quite high density. For symmetric matter the good agreement extends up to about 0.6 fm\\({}^{-3}\\), while at higher density the variational EoS is substantially higher than the BBG one. The reason for that is unknown.
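A quick back-of-the-envelope estimate makes this limit quantitative. The short Python sketch below converts the 350 MeV laboratory energy into a relative momentum (non-relativistic two-body kinematics is assumed) and then into the symmetric-matter density at which \\(k_{F}\\) reaches that momentum; the physical constants are the only inputs.

```python
import numpy as np

# Density at which the Fermi momentum of symmetric matter reaches the
# largest relative momentum constrained by the phase shifts (T_lab = 350 MeV).
# Non-relativistic two-body kinematics is assumed for this estimate.
HBARC, M = 197.327, 938.9                     # MeV fm, nucleon mass (MeV)
T_lab = 350.0

q_rel = np.sqrt(M*T_lab/2.0)/HBARC            # k_l in fm^-1
rho = 2.0*q_rel**3/(3.0*np.pi**2)             # symmetric matter, degeneracy 4
print(f'k_l ~ {q_rel:.2f} fm^-1 -> rho ~ {rho:.2f} fm^-3 (~{rho/0.17:.1f} rho_0)')
```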
This type of agreement is still present if three-body forces (TBF) are introduced for the purpose of getting a SP in agreement with the empirical findings. This can be seen in Fig. 25, where calculations with the Argonne v\\({}_{18}\\) interaction and the Urbana model for three-body forces are presented. These TBF contain an attractive and a repulsive part, whose structure is suggested by elementary processes which involve meson exchanges and three nucleons but cannot be separated into two distinct nucleon-nucleon interaction processes. These TBF are phenomenological in character, since the two parameters, namely the strengths of the attractive and the repulsive terms, cannot be fixed from first principles but are adjusted to reproduce experimental data accurately. In Fig. 25 we also report (full line) the EoS of Heiselberg and Hjorth-Jensen (1999), who
Figure 24: Symmetric matter (lower curves) and pure neutron matter (upper curves) EoS for the Argonne v\\({}_{18}\\) NN potential calculated within the BBG (dashed lines) and the variational (diamonds) methods. Only two–body forces are included.
proposed a modification of the variational EoS of Akmal et al. (1998), which prevents its superluminal behaviour at high density (this EoS is in fact softer).
A few observations are in order. It turns out that the parameters of the three-body forces which have been fitted to data on few nucleon systems (triton, \\({}^{3}He\\) and \\({}^{4}He\\)) have to be modified if the SP has to be reproduced within the phenomenological uncertainty. Generally speaking the repulsive part has to be reduced substantially. It could be argued that this change of parameters poses a serious problem, since if the TBF model makes sense one must keep the same parameters at all densities, otherwise higher order many-body forces should be invoked. Actually this is a false problem. In fact the discrepancy on the SP cannot be reduced below a few hundred keV (typically 200-300 keV), due to the intrinsic uncertainty in its position. This discrepancy remains essentially the same for few-body systems if one uses the same TBF fitted in nuclear matter, and actually the contribution of TBF in few-body systems is quite small. Therefore TBF which allow one to describe both few-body systems and the nuclear matter SP do exist, if the accuracy is kept at the level of 200-300 keV in energy per particle, and indeed they are not unique. To try a very precise overall fit at the level of 10 keV or better, as is now possible in the field of few-body systems, appears definitely to be too challenging. There are surely higher order terms (four-body forces, retardation effects in the NN interaction, other relativistic effects, etc.) which could contribute at this level of precision. In fact the contribution of TBF to the energy per particle around saturation is in all cases about 1-2 MeV, while for few-body systems it is one order of magnitude smaller. Therefore a TBF tuned to fit the binding energy of few-body systems with an accuracy of 1-10 keV
Figure 25: Symmetric matter (lower curves) and pure neutron matter (upper curves) EoS for the Argonne v\\({}_{18}\\) NN potential and three–body forces (TBF), calculated within the BBG (dashed lines) and the variational (diamonds) methods. The full lines correspond to the modified version of the variational EoS of Heiselberg and Hjorth–Jensen (1999)
cannot be extended to fit the nuclear matter SP, since this would correspond to a quite unbalanced fitting procedure. In other words, 100-200 keV for the energy per particle is the limit of accuracy of the TBF model applied to the nuclear EoS in the considered wide range of density.
Secondly, it has to be noticed that once the SP is reproduced by adjusting the TBF, it turns out that the parameters are not the same in the BBG and variational methods, i.e. the TBF are not the same. Finally the way of incorporating TBF is simplified in the BBG method, namely TBF are reduced to a density dependent two-body force by a suitable average over the position and spin-isospin quantum numbers of the third particle (Grange et al. 1989). The results presented in Fig. 25 are Brueckner calculations with TBF included following this procedure. The agreement between the two curves seems to indicate that once the SP is reproduced correctly, the full EoS is determined to a large extent up to density as high as 0.6 fm\\({}^{-3}\\). However the conclusion is restricted to the particular model for the two-body and three-body forces. The possible dependence on the considered forces will be discussed in the next section.
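The reduction just mentioned can be illustrated numerically: a toy three-body term is folded over the position of the third particle, weighted by a crude pair distribution function, to yield a density-dependent effective two-body potential. Both \\(V_{3}\\) and \\(g\\) below are invented for illustration and bear no quantitative relation to the Urbana or microscopic TBF.

```python
import numpy as np

# Reduce a toy three-body force to a density-dependent two-body force by
# averaging over the position of the third particle, weighted by a crude
# pair distribution g(r).  V3 and g are invented; only the procedure matters.
rho = 0.17                                    # fm^-3

def g(r):                                     # schematic correlation hole
    return 1.0 - np.exp(-(r/0.9)**2)

def V3(r12, r13, r23):                        # toy three-body term, MeV
    return 60.0*np.exp(-(r12**2 + r13**2 + r23**2)/2.0)

def V_eff(r12, n=80):
    x = np.linspace(-4.0, 4.0, n)
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
    r13 = np.sqrt(X**2 + Y**2 + Z**2)         # particle 1 at the origin
    r23 = np.sqrt((X - r12)**2 + Y**2 + Z**2) # particle 2 on the x axis
    w = g(r13)*g(r23)*V3(r12, r13, r23)
    return rho*w.sum()*(x[1] - x[0])**3

for r in (0.5, 1.0, 1.5, 2.0):
    print(f'r12 = {r:.1f} fm : V_eff = {V_eff(r):+.2f} MeV')
```

The explicit factor of the density in front of the integral is what makes the resulting effective two-body force density dependent, as stated in the text.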
## 8 Dependence on the two and three-body forces
The progress in the accuracy and extension of NN experimental data, as well as in their fit by different NN interaction models, has been quite impressive. The data range up to 350 MeV in the laboratory. Going beyond this limit requires the introduction of non-nucleonic degrees of freedom (mesons, isobar resonances, etc.) and the corresponding inelastic channels. The latter possibility looks quite a complex task, and only a few cases are present in the literature, notably by Ter Haar and Malfliet (1987). Since in this paper we restrict ourselves to nucleonic degrees of freedom only, we will consider NN two-body interactions which do not include explicitly mesonic or isobaric degrees of freedom. Despite that, it is a common paradigm that the NN interaction is determined, at least in an effective way, by the exchange of different mesons, and practically all modern NN interactions take inspiration for their structure from this assumption, in an explicit or implicit way. A really large set of two-body interactions has been developed along the years, but nowadays it is mandatory to restrict the possible choices to the most modern ones, since they are the only ones which fit the widest and most accurate experimental data, on one hand, and are more accurate in the fitting, on the other. One can then restrict the set of possible NN two-body interactions to the ones which fit the latest data (a few thousand data points) with an accuracy which gives a \\(\\chi^{2}\\)/datum close to 1. With these requirements, the number of NN interactions reduces quite a bit, and only a few can be considered acceptable. The one which is constructed most explicitly from meson exchange processes is the recently developed CD Bonn potential by Machleidt (2001), which is the latest one of the Bonn potential series. In principle this interaction is the one with the best \\(\\chi^{2}\\) value. However the experimental data are not always consistent to the needed degree of accuracy and some selection must be done. In the same work one can find a detailed analysis of the different data sets, together with the method of selection which has been followed for the most accurate NN interactions and the values of the corresponding \\(\\chi^{2}\\), if one includes the data up to the year 2000. Among the most accurate NN interactions one has to include the Argonne v\\({}_{18}\\), already discussed. It is constructed from a set of two-body operators which arise naturally in meson exchange processes, but the form factors are partly phenomenological (except, of course, the one-pion exchange). This interaction has been recently modified in the \\({}^{1}S_{0}\\) and \\({}^{3}S_{1}-^{3}D_{1}\\) channels with the inclusion of a purely phenomenological short range non-local force, which substitutes the original potentials below 1 fm (Doleschall and Borbely 2000, Doleschall et al. 2003, Doleschall 2004), usually indicated as the IS potential. This allows one to reproduce the binding energy of three and four nucleon systems very accurately without the inclusion of any TBF, at variance with the original Argonne v\\({}_{18}\\). Also the radii of \\({}^{3}H\\) and \\({}^{3}He\\) are accurately reproduced, while the radius of \\({}^{4}He\\) is slightly underestimated (Lazauskas and Carbonell 2004). This potential is phase-equivalent to the original interaction, but the off-shell behaviour is modified. Finally one can mention the latest potentials of the Nijmegen group (Stoks et al. 1994). They also have the ideal value \\(\\chi^{2}/{\\rm datum}\\ \\approx\\ 1\\).
However the fit was performed separately in each partial wave, and the corresponding two-body operator structure cannot have the simple form expected from meson exchange processes. This interaction will be discussed only marginally. A larger set of interactions, which includes the old NN potentials, can be found in Li Z H et al. (2006), where the resulting symmetric nuclear matter EoS at the BHF level are compared.
It has to be noticed that the three selected NN interactions, v\\({}_{18}\\), CD Bonn and IS, give an increasingly better reproduction of the three-body binding energy and radii, and at the same time their non-locality increases in the same order. They are all phase equivalent to a good accuracy, so that the differences appearing in the nuclear matter EoS can be ascribed solely to their different off-shell behaviour. It is well known (Coester et al. 1970) that phase equivalent potentials can give a quite different saturation point and overall EoS, but here the comparison is restricted to a very definite class of realistic accurate NN interactions, with an operatorial structure which is quite similar and suggested by meson exchange processes. In Fig. 26 we compare the three corresponding EoS (Baldo and Maieron 2005) at the BHF level and with the inclusion of three hole-line diagrams (no TBF). First of all one can notice that also for the CD Bonn and IS interactions the three hole-line contributions are still relatively small (especially if one compares the interaction energies at the two and three hole-line levels). The convergence of the hole-line expansion appears to be a general feature. Furthermore, with increasing non-locality the EoS becomes softer, the SP tends to run away from the empirical one, and unreasonably large values for the binding energy and the SP density are obtained. In the figure two versions of the IS potential are considered, NL1, where only the \\({}^{1}S_{0}\\) channel is modified, and NL2, where also the \\({}^{3}S_{1}\\) - \\({}^{3}D_{1}\\) channel is modified. The results indicate that the problem of reproducing the empirical SP with two-body realistic and accurate interactions cannot be solved by introducing non-locality, which modifies the off-shell properties of the two-body potentials. Furthermore, the requirement of a very accurate fitting of the binding energy and radii of three (and eventually four) nucleon systems makes the reproduction of the nuclear matter SP more challenging.
The necessity of introducing TBF around saturation is in any case definitely confirmed. According to the results shown in Fig. 26 it is apparent that TBF cannot be unique, but they depend on the two-body forces employed, since each one of the two-body forces gives a different discrepancy for the SP and therefore needs a different correction. This is not surprising, since both the two-body force and the TBF should originate within the same physical framework and therefore they are intimately related. In particular, in the nucleon-meson coupling models TBF should be generated by processes which involve the coupling constants which are already present in the two-body forces. Well known examples are depicted in Fig. 27. Other couplings, like meson-meson ones, appear only at the TBF level. It has to be stressed that all these processes must be considered within an effective theory framework (i.e. a theory with cutoff). The problem of consistency between two-body interactions and TBF has been taken up systematically by Grange et al. (1989) and further developed in recent works (Zuo et al. 2002). Since processes which include meson-meson couplings seem to be small and can be neglected
Figure 26: Symmetric matter EoS at two hole–line level (upper panel) and at three hole–line level (lower panel) for different NN interactions, the Argonne v\\({}_{18}\\), the CD Bonn and two versions of the IS potential (NL1 and NL2), see the text for detail.
in first approximation, TBF calculated along these lines do not contain in principle any additional parameters. It is not surprising then that the corresponding EoS has a SP which is shifted appreciably more away from the empirical one than the EoS with the same two-body force (v\\({}_{18}\\)) but with the phenomenological TBF (see Li Z H et al. 2006\\({}^{b}\\)). In any case these TBF can be a good starting point for further improvements. It is unclear if other more complex processes can play a role. The effect of these TBF in few-body systems is not known, but it is expected to be not very large, since at low density also the contribution of these TBF in nuclear matter becomes quite small. The most significant difference with the phenomenological TBF is the stiffness of the EoS at high density, which turns out to be much higher, as can be seen also in the case of pure neutron matter. The energy and pressure rise steeply above saturation, and this can create some problems, as discussed later.
## 9 The Dirac-Brueckner approach
As already mentioned, one of the deficiencies of the Hamiltonians considered in the previous sections is the use of the non-relativistic limit. The relativistic framework is of course the framework where the nuclear EoS should ultimately be based. The best relativistic treatment developed so far is the Dirac-Brueckner approach. Excellent review papers on the method can be found in the literature (Machleidt 1989) and in textbooks (Brockmann and Machleidt 1999). Here we restrict the presentation to the main basic elements of the theory and to the latest results, in order to make the comparison with the other methods more transparent. We will follow closely the presentation by Brockmann and Machleidt (1999), but we will make reference also to the more recent developments.
Figure 27: Some processes which can contribute to three-nucleon forces.
In the relativistic context the only NN potentials which have been developed are the ones of OBE (one boson exchange) type. The starting point is the Lagrangian for the nucleon-mesons coupling
\\[{\\cal L}_{pv} = -\\frac{f_{ps}}{m_{ps}}\\overline{\\psi}\\gamma^{5}\\gamma^{\\mu}\\psi\\partial_{\\mu}\\varphi^{(ps)} \\tag{43}\\]
\\[{\\cal L}_{s} = +g_{s}\\overline{\\psi}\\psi\\varphi^{(s)} \\tag{44}\\]
\\[{\\cal L}_{v} = -g_{v}\\overline{\\psi}\\gamma^{\\mu}\\psi\\varphi^{(v)}_{\\mu}-\\frac{f_{v}}{4M}\\overline{\\psi}\\sigma^{\\mu\\nu}\\psi(\\partial_{\\mu}\\varphi^{(v)}_{\\nu}-\\partial_{\\nu}\\varphi^{(v)}_{\\mu}) \\tag{45}\\]
with \\(\\psi\\) the nucleon and \\(\\varphi^{(\\alpha)}_{(\\mu)}\\) the meson fields, where \\(\\alpha\\) indicates the type of meson and \\(\\mu\\) the Lorentz component in the case of vector mesons. For isospin 1 mesons, \\(\\varphi^{(\\alpha)}\\) is to be replaced by \\(\\mathbf{\\tau\\cdot\\varphi^{(\\alpha)}}\\), with \\(\\tau^{l}\\) (\\(l=1,2,3\\)) the usual Pauli matrices. The labels \\(ps\\), \\(pv\\), \\(s\\), and \\(v\\) denote pseudoscalar, pseudovector, scalar, and vector coupling/field, respectively.
The one-boson-exchange potential (OBEP) is defined as a sum of one-particle-exchange amplitudes of certain bosons with given mass and coupling. The main difference with respect to the non-relativistic case is the introduction of the Dirac-spinor amplitudes. The six non-strange bosons with masses below 1 GeV/c\\({}^{2}\\) are used. Thus,
\\[V_{OBEP}=\\sum_{\\alpha=\\pi,\\eta,\\rho,\\omega,\\delta,\\sigma}V^{OBE}_{\\alpha} \\tag{46}\\]
with \\(\\pi\\) and \\(\\eta\\) pseudoscalar, \\(\\sigma\\) and \\(\\delta\\) scalar, and \\(\\rho\\) and \\(\\omega\\) vector particles. The contributions from the isovector bosons \\(\\pi,\\delta\\) and \\(\\rho\\) contain a factor \\(\\mathbf{\\tau_{1}\\cdot\\tau_{2}}\\). In the so-called static limit, i.e. treating the nucleons as infinitely heavy (their energy equals the mass), the usual denominator of the interaction amplitude in momentum space, coming from the meson propagator, is exactly the same as in the non-relativistic case (since in both cases the meson kinematics is relativistic). This limit is not taken in the relativistic version, notably in the series of Bonn potentials, and the full expression of the amplitude with the relativistic (on-shell) nucleon energies is included. As an example, let us consider one-pion exchange. As is well known, in the non-relativistic and static limit the corresponding local potential in momentum space reads (in standard notation)
\\[V^{loc}_{\\pi}=-\\frac{g_{\\pi}^{2}}{4M^{2}}\\frac{(\\sigma_{1}\\cdot{\\bf k})( \\sigma_{2}\\cdot{\\bf k})}{k^{2}c^{2}+(mc^{2})^{2}}(\\tau_{1}\\cdot\\tau_{2})\\]
with \\({\\bf k}={\\bf q}-{\\bf q}^{\\prime}\\), where \\({\\bf q}\\) and \\({\\bf q}^{\\prime}\\) are the initial and final relative momenta of the interacting nucleons. This has to be compared with the complete expression of the matrix element between nucleonic (positive energy) states (Machleidt 2000). In the center of mass frame it reads
\\[V^{full}_{\\pi}=-\\frac{g_{\\pi}^{2}}{4M^{2}}\\frac{(E^{\\prime}+M)(E+M)}{k^{2}c^{2 }+(mc^{2})^{2}}\\left(\\frac{\\sigma_{1}\\cdot{\\bf q}^{\\prime}}{E^{\\prime}+M}-\\frac {\\sigma_{1}\\cdot{\\bf q}}{E+M}\\right)\\times\\left(\\frac{\\sigma_{2}\\cdot{\\bf q}^{ \\prime}}{E^{\\prime}+M}-\\frac{\\sigma_{2}\\cdot{\\bf q}}{E+M}\\right)\\]
where \\(E,E^{\\prime}\\) are the initial and final nucleon energies. One can see that in this case some non-locality is present, since the matrix element depends separately on \\({\\bf q}\\) and \\({\\bf q}^{\\prime}\\). Putting \\(E=E^{\\prime}=M\\), one gets again the local version. Notice that in any case the two versions coincide on-shell (\\(E=E^{\\prime}\\)), and therefore the non-locality modifies only the off-shell behaviour of the potential. The matrix elements are further supplemented with form factors at the NN-meson vertices, to regularize the potential and to take into account the finite size of the nucleons and the mesons. In applications of the DBHF method usually one version of the relativistic OBE potential is used, which therefore implies that a certain degree of non-locality is present. As already anticipated in the previous section, this is also true if these potentials are used within the non-relativistic BHF method.
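The reduction of the full matrix element to the static one can be checked numerically. The script below builds both expressions in the two-nucleon spin space (the isospin factor and the overall coupling constant are dropped) for arbitrarily chosen momenta: it verifies that they coincide for \\(E=E^{\\prime}=M\\) and quantifies the off-shell difference once the relativistic energies are used. All numerical values are illustrative.

```python
import numpy as np

# Check that the full one-pion-exchange matrix element reduces to the
# static/local form for E = E' = M, and measure the off-shell difference.
# Spin space only; isospin factor and overall coupling g^2/4M^2 set to 1.
M, m_pi = 938.9, 138.0                        # MeV (illustrative values)
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]

def sdot(v, particle):                        # sigma.v on nucleon 1 or 2
    s = sum(v[i]*sig[i] for i in range(3))
    return np.kron(s, np.eye(2)) if particle == 1 else np.kron(np.eye(2), s)

def V_full(q, qp, E, Ep):
    k2 = np.dot(q - qp, q - qp)
    pref = -(Ep + M)*(E + M)/(k2 + m_pi**2)
    a = sdot(qp, 1)/(Ep + M) - sdot(q, 1)/(E + M)
    b = sdot(qp, 2)/(Ep + M) - sdot(q, 2)/(E + M)
    return pref*(a @ b)

def V_loc(q, qp):
    k = q - qp
    return -(sdot(k, 1) @ sdot(k, 2))/(np.dot(k, k) + m_pi**2)

q, qp = np.array([0.0, 0.0, 300.0]), np.array([150.0, 0.0, 200.0])  # MeV
E, Ep = np.sqrt(q @ q + M**2), np.sqrt(qp @ qp + M**2)
print(np.allclose(V_full(q, qp, M, M), V_loc(q, qp)))       # True: static limit
print(np.linalg.norm(V_full(q, qp, E, Ep) - V_loc(q, qp)))  # off-shell shift
```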
The fully relativistic analogue of the two-body scattering matrix is the covariant Bethe-Salpeter (BS) equation. In place of the NN non-relativistic potential the sum \\({\\cal V}\\) of all connected two-particle irreducible diagrams has to be used, together with the relativistic single particle propagators. Explicitly, the BS equation for the covariant scattering matrix \\({\\cal T}\\) in an arbitrary frame can be written
\\[{\\cal T}(q^{\\prime},q|P)={\\cal V}(q^{\\prime},q|P)+\\int d^{4}k{\\cal V}(q^{\\prime },k|P){\\cal G}(k|P){\\cal T}(k,q|P)\\, \\tag{47}\\]
with
\\[{\\cal G}(k|P) = \\frac{i}{(2\\pi)^{4}}\\frac{1}{(\\frac{1}{2}\\not\\!P+\\not\\!k-M+i\\epsilon)^{(1)}}\\frac{1}{(\\frac{1}{2}\\not\\!P-\\not\\!k-M+i\\epsilon)^{(2)}} \\tag{48}\\]
\\[= \\frac{i}{(2\\pi)^{4}}\\left[\\frac{\\frac{1}{2}\\not\\!P+\\not\\!k+M}{(\\frac{1}{2}P+k)^{2}-M^{2}+i\\epsilon}\\right]^{(1)}\\left[\\frac{\\frac{1}{2}\\not\\!P-\\not\\!k+M}{(\\frac{1}{2}P-k)^{2}-M^{2}+i\\epsilon}\\right]^{(2)} \\tag{49}\\]
where \\(q,\\ k\\), and \\(q^{\\prime}\\) are the initial, intermediate, and final relative four-momenta, respectively (with e.g. \\(k=(k_{0},{\\bf k})\\)), and \\(P=(P_{0},{\\bf P})\\) is the total four-momentum; \\(\\not\\!k=\\gamma^{\\mu}k_{\\mu}\\). The superscripts refer to particles (1) and (2). Of course all quantities are appropriate matrices in spin (or helicity) and isospin indices. The use of the OBE potential as the kernel \\({\\cal V}\\) is equivalent to the so-called ladder approximation, where one-meson exchanges occur in disjoint time intervals with respect to each other, i.e. at any time only one meson is present. Unfortunately, even in the ladder approximation the BS equation is difficult to solve, since \\({\\cal V}\\) is in general non-local in time, or equivalently energy dependent, which means that the integral equation is four-dimensional. It is not even certain in general that it admits solutions. It is then customary to reduce the four-dimensional integral equation to a three-dimensional one by approximating properly the energy dependence of the kernel. In most methods the energy exchange \\(k_{0}\\) is fixed to zero and the resulting reduced BS equation is similar to its non-relativistic counterpart. In the Thompson reduction scheme this equation for matrix elements between positive-energy spinors (c.m. frame) reads
\\[{\\cal T}({\\bf q^{\\prime}},{\\bf q})=V({\\bf q^{\\prime}},{\\bf q})+\\int\\frac{d^{3 }k}{(2\\pi)^{3}}V({\\bf q^{\\prime}},{\\bf k})\\,\\frac{M^{2}}{E_{\\bf k}^{2}}\\, \\frac{1}{2E_{\\bf q}-2E_{\\bf k}+i\\epsilon}{\\cal T}({\\bf k},{\\bf q}|{\\bf P}) \\tag{50}\\]
where both \\(V({\\bf q^{\\prime}},{\\bf q})\\) and \\({\\cal T}\\) have to be considered as matrices acting on the two-particle helicity (or spin) space, and \\(E_{\\bf k}=\\sqrt{{\\bf k}^{2}+M^{2}}\\) is the relativistic particle energy. In the alternative Blankenbecler-Sugar (Machleidt 2000) reduction scheme some different relativistic kinematical factors appear in the kernel. This shows that the reduction is not unique. The partial wave expansion of the \\({\\cal T}\\)-matrix can then be performed starting from the helicity representation. The corresponding amplitudes include single as well as coupled channels, with the same classification in quantum numbers \\(JLS\\) as in the non-relativistic case, and therefore their connection with phase shifts is the same (Brockmann and Machleidt 1999). In the intermediate states of momentum \\({\\bf k}\\) only the positive energy states are usually considered (by the proper Dirac projection operator). As in the case of the OBEP potential, again the main difference with respect to the non-relativistic case is the use of the Dirac spinors.
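To illustrate how a reduced three-dimensional equation of this type is solved in practice, the sketch below discretizes a Thompson-type kernel for a single uncoupled partial wave on a Gauss-Legendre grid and inverts the resulting matrix. The rank-one \"potential\", the starting energy placed below threshold (so that the kernel is nonsingular, much as for an in-medium \\(G\\)-matrix with the gap choice) and the overall normalization, with partial-wave factors absorbed into the potential, are all simplifying assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Matrix solution of a Thompson-type equation in a single S-wave channel,
# for a starting energy below threshold (nonsingular kernel).  The rank-one
# potential and all constants are toy assumptions; partial-wave factors of
# the text's normalization are absorbed into the potential.
HBARC, M = 197.327, 938.9
x, w = leggauss(64)
p = 2.0*np.tan(np.pi*(x + 1.0)/4.0)           # map (-1,1) -> (0,inf), fm^-1
wp = w*(np.pi/2.0)/np.cos(np.pi*(x + 1.0)/4.0)**2

def V(a, b):                                  # separable toy potential, MeV fm^3
    g = lambda k: 1.0/(k**2 + 2.0)
    return -40.0*g(a)*g(b)

E = np.sqrt((p*HBARC)**2 + M**2)              # relativistic s.p. energy, MeV
W = 2.0*M - 50.0                              # starting energy below threshold

Vmat = V(p[:, None], p[None, :])
prop = (M**2/E**2)/(W - 2.0*E)                # Thompson propagator, 1/MeV
kern = Vmat*(wp*p**2*prop)[None, :]           # quadrature-weighted kernel
T = np.linalg.solve(np.eye(len(p)) - kern, Vmat)
print(f'lowest grid momentum: T = {T[0,0]:.2f}, V = {Vmat[0,0]:.2f} MeV fm^3')
```

The same matrix-inversion strategy carries over to the scattering case, where the pole of the propagator is handled by a principal-value subtraction.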
The DBHF method can be developed in analogy with the non-relativistic case. The two-body correlations are described by introducing the in-medium relativistic \\(G\\)-matrix. The DBHF scheme can be formulated as a self-consistent problem between the single particle self-energy \\(\\Sigma\\) and the \\(G\\)-matrix. Schematically, the equations can be written
\\[G = V+i\\int VQggG\\] \\[\\Sigma = -i\\int_{F}(Tr[gG]-gG) \\tag{51}\\]
where \\(Q\\) is the Pauli operator which projects the intermediate two-particle momenta outside the Fermi sphere, as in the BHF G-matrix equation, and \\(g\\) is the single particle Green's function. The self-consistency is entailed by the Dyson equation
\\[g=g_{0}+g_{0}\\Sigma g\\]
where \\(g_{0}\\) is the (relativistic) single particle Green's function for a free gas of nucleons. The self-energy is a matrix in spinor indices, and therefore in general it can be expanded in the covariant form
\\[\\Sigma(k,k_{F})=\\Sigma_{s}(k,k_{F})-\\gamma_{0}\\Sigma_{0}(k,k_{F})+\\mathbf{\\gamma}\\cdot{\\bf k}\\Sigma_{v} \\tag{52}\\]
where \\(\\gamma_{\\mu}\\) are the Dirac gamma matrices and the coefficients of the expansion are scalar functions, which in general depend on the modulus \\(|{\\bf k}|\\) of the three-momentum and on the energy \\(k_{0}\\). Of course they also depend on the density, i.e. on the Fermi momentum \\(k_{F}\\). The free single particle eigenstates, which determine the spectral representation of the free Green's function, are solutions of the Dirac equation
\\[[\\ \\gamma_{\\mu}k^{\\mu}\\,-\\,M\\ ]\\,u(k)\\ =\\,0\\]
where \\(u\\) is the Dirac spinor at four-momentum \\(k\\). For the full single particle Green's function \\(g\\) the corresponding eigenstates satisfy
\\[[\\ \\gamma_{\\mu}k^{\\mu}\\,-\\,M\\,+\\,\\Sigma\\ ]\\,u(k)^{*}\\ =\\,0\\]
Inserting the above general expression for \\(\\Sigma\\), after a little manipulation, one gets
\\[[\\ \\gamma_{\\mu}k^{\\mu^{*}}\\,-\\,M^{*}\\ ]\\,u(k)^{*}\\ =\\,0\\]

with
\\[{k^{0}}^{*}\\,=\\,\\frac{k^{0}+\\Sigma_{0}}{1+\\Sigma_{v}}\\quad;\\quad{k^{i}}^{*}\\,=\\, k^{i}\\quad;\\quad M^{*}\\,=\\,\\frac{M+\\Sigma_{s}}{1+\\Sigma_{v}} \\tag{53}\\]
This is the Dirac equation for a single particle in the medium, and the corresponding solution is the spinor
\\[u^{*}({\\bf k},s)=\\sqrt{\\frac{E_{\\bf k}^{*}+M^{*}}{2M^{*}}}\\left(\\begin{array}[] {c}1\\\\ \\frac{\\mathbf{\\sigma\\cdot k}}{E_{\\bf k}^{*}+M^{*}}\\end{array}\\right) \\chi_{s}\\quad;\\quad E_{\\bf k}^{*}=\\sqrt{{\\bf k}^{2}+{M^{*}}^{2}}\\enspace. \\tag{54}\\]
In line with the Brueckner scheme, within the BBG expansion, in the self-energy of Eq. (51) only the contribution of the single particle Green's function pole is considered (with strength equal to one). Furthermore, negative energy states are neglected and one gets the usual self-consistent condition between the self-energy and the scattering \\(G\\)-matrix. The functions to be determined are in this case the three scalar functions appearing in Eq. (52). However, to simplify the calculations these functions are often replaced by their value at the Fermi momentum.
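The flavour of this self-consistency can be conveyed by a minimal sketch in which the full \\(G\\)-matrix self-energy is replaced by a Walecka-like scalar tadpole, so that the effective mass \\(M^{*}\\) of Eq. (53) has to be found iteratively. The coupling below is an assumed illustrative number and the model is only a stand-in for the actual DBHF self-energy.

```python
import numpy as np

# Self-consistent effective mass M* = M + Sigma_s(M*) of Eq. (53), with a
# Walecka-like scalar tadpole standing in for the DBHF G-matrix self-energy.
# The coupling (g_s/m_s)^2 is an assumed illustrative value.
HBARC, M = 197.327, 938.9
gs2_ms2 = 12.0                                # fm^2, assumed
rho = 0.17
kF = (1.5*np.pi**2*rho)**(1.0/3.0)*HBARC      # MeV

def rho_s(Mstar, n=400):                      # scalar density, fm^-3
    k = np.linspace(0.0, kF, n)
    dk = k[1] - k[0]
    return (2.0/np.pi**2)*np.sum(k**2*Mstar/np.sqrt(k**2 + Mstar**2))*dk/HBARC**3

Mstar = M
for it in range(200):
    new = M - gs2_ms2*HBARC*rho_s(Mstar)      # MeV; Sigma_s is attractive
    if abs(new - Mstar) < 1e-6:
        break
    Mstar = 0.5*(Mstar + new)                 # damped fixed-point iteration
print(f'converged in {it} steps: M* = {Mstar:.0f} MeV, M*/M = {Mstar/M:.2f}')
```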
In any case, the medium effect on the spinor of Eq. (54) is to replace the vacuum values of the nucleon mass and three-momentum with the in-medium values of Eq. (53). This means that the in-medium Dirac spinor is "rotated" with respect to the corresponding one in vacuum, and a positive (particle) energy state in the medium has some non-zero component on the negative (anti-particle) energy states in vacuum. In terms of vacuum single nucleon states, the nuclear medium automatically produces anti-nucleon states which contribute to the self-energy and to the total energy of the system. It has been shown by Brown et al. (1987) that this relativistic effect is equivalent to the introduction of well defined TBF at the non-relativistic level. These TBF turn out to be repulsive and consequently produce a saturating effect. The DBHF indeed gives in general a better SP than BHF. Of course one can wonder why these particular TBF should be selected, but anyhow a definite link between DBHF and BHF + TBF is, in this way, established. Indeed, including in BHF only these particular TBF one gets results close to DBHF calculations, see e.g. Li Z H et al. (2006).
Although the DBHF method is similar to the non-relativistic BHF, some features of this method are still controversial. The results depend strongly on the method used to determine the covariant structure of the in-medium \\(G\\)-matrix, which is not unique since only the positive energy states must be included. It has to be stressed that, in general, the self-energy is better calculated in the matter reference frame, while the \\(G\\)-matrix is more naturally calculated in the center of mass of the two interacting nucleons. This implies that the \\(G\\)-matrix has to be Lorentz transformed from one reference frame to the other, and its covariant structure is then crucial. Formally, the most accurate method appears to be the subtraction scheme of Gross-Boelting et al. (1999). Generally speaking, the EoS calculated within the DBHF method turn out to be stiffer above saturation than the ones calculated from the BHF + TBF method.
## 10 The compressibility at and above saturation density
Despite several uncertainties, the compressibility of nuclear matter at saturation can be considered as determined within a relatively small range, approximately between 220 and 270 MeV. The different relativistic and non-relativistic microscopic EoS considered in the previous section turn out to be compatible with these values of the compressibility. As we have seen, however, the compressibility (i.e. stiffness) can differ at high enough density, which can be relevant for many phenomenological data.
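For reference, the relation between the curvature of the saturation curve and the quoted range of compressibility values can be made explicit with a schematic Skyrme-like parametrization of the energy per particle; the functional form and the exponent \\(\\sigma\\) below are assumptions, tuned only so that the result lands inside the quoted range.

```python
import numpy as np

# Compressibility K = 9 rho0^2 d2(E/A)/drho2 at saturation, for the
# schematic parametrization e(u) = T u^(2/3) + A u + B u^sigma, u = rho/rho0.
# A and B are fixed by e(1) = -16 MeV and e'(1) = 0; sigma is an assumption.
T, e0, sigma = 22.1, -16.0, 1.4               # MeV, MeV, dimensionless

B = (T/3.0 - e0)/(sigma - 1.0)                # from the two saturation conditions
A = e0 - T - B

def e(u):
    return T*u**(2.0/3.0) + A*u + B*u**sigma

du = 1e-4
K = 9.0*(e(1.0 + du) - 2.0*e(1.0) + e(1.0 - du))/du**2
print(f'A = {A:.1f} MeV, B = {B:.1f} MeV, K = {K:.0f} MeV')
```

Different choices of \\(\\sigma\\) reproduce the same saturation point with quite different high-density stiffness, which is precisely the ambiguity discussed in this section.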
In heavy ion collisions at intermediate energy, nuclear matter is expected to be compressed and to reach densities a few times larger than the saturation value. Several observable quantities have been devised that should be sensitive to the stiffness of the nuclear EoS. In particular the measurement of different types of \"flow\" is considered particularly useful, and this line has been followed in many experiments. A more ambitious, and probably more questionable, analysis was performed by Danielewicz et al. (2002). The authors consider both the in-plane transverse flow and the elliptic flow measured in different experiments on \\(Au+Au\\) collisions at energies between 0.2 and 10 GeV/A. According to relativistic Boltzmann-Uehling-Uhlenbeck simulations it is claimed that densities up to 7 times the saturation density are reached during the collisions (at the highest energy), and from the data an estimate of the pressure is extracted. Together with an evaluation of the uncertainty, the analysis results in the determination of a region in the pressure-density plane where the nuclear EoS should be located. In this way it appears easy to test a given EoS, and it became popular to confront the various microscopic and phenomenological EoS with this region, which is assumed to be the allowed one (essentially for symmetric nuclear matter). If one believes in the validity of this analysis, it turns out that the test is quite stringent, despite the fact that in the same work it is also shown that the value of the compressibility at saturation is not at all well determined. This means that also at the phenomenological level the value of the compressibility at saturation does not determine the EoS stiffness at high density. In Fig. 28 the set of microscopic EoS already discussed are reported in comparison with the allowed region. The variational EoS of Akmal et al. (1998), as well as the EoS derived from BHF together with phenomenological TBF, look in agreement with the phenomenological analysis, while the EoS from the DBHF calculations of van Dalen et al. (2005) is only marginally compatible, see also the paper by Klahn et al. (2006). The non-relativistic EoS calculated with BHF and \"ab initio\" TBF reported by Zuo et al. (2002) looks close to the BHF results with phenomenological TBF up to 2-3 times the saturation density, giving further support to the phenomenological TBF, see also Zhou et al. (2006). However, at higher density it becomes too stiff and definitely falls outside the allowed region.
It turns out that also many phenomenological EoS do not pass this test (Klahn et al. 2006).
It has to be noticed that the flow values, and other observable quantities, in general do not depend only on the nuclear EoS, as embodied in the single particle potential, but also on the in-medium nucleon-nucleon collision cross section and on the effective mass (i.e. the momentum dependence of the single particle potential). The extraction of meaningful information from the experiments requires a careful analysis and interpretation of the data. Other quantities which are related to the EoS are the rates of particle production, in particular \\(K^{+}\\) and \\(K^{-}\\) and their ratio. In fact strange particle production is intimately related to the density reached during the collision and therefore to the nuclear matter compressibility. However, in order to reach meaningful conclusions, it is necessary to have a reasonably good description of the behaviour of kaons in the nuclear medium at high density, which is not an easy task theoretically.
On the astrophysical side, as is well known, each EoS for asymmetric matter gives rise to a definite relationship between the mass and radius of neutron stars (NS). This is because ordinary NS are bound by gravity, and the solution of the Tolman-Oppenheimer-Volkoff (TOV) equation, based only on general relativity and the adopted EoS, provides the full density profile of the star. Unfortunately it is quite difficult to obtain from observation both the mass and the radius of a single NS, and up to now the accuracy, especially of the radius value, is not good enough to discriminate among different EoS. The quantity which has been given most attention is then the maximum mass of NS. The mass vs. radius plot has indeed a maximum value at the smaller radius,
Figure 28: Different EoS in comparison with the phenomenological constraint extracted by Danielewicz et al. (2002) (shaded area), where \\(\\rho_{0}=0.16\\) fm\\({}^{-3}\\). Full line: EoS from the BBG method with phenomenological TBF (Zuo et al. 2004). Dashed line : modified variational EoS of Heiselberg and Hjorth–Jensen (1999). Dotted line : variational EoS of Akmal et al. (1998). Open circle : EoS from the BBG method with “ab initio” TBF (Li 2006\\({}^{a}\\)). Dash–dotted line : EoS from Dirac–Brueckner method (van Dalen et al. 2005).
beyond which the star configuration is unstable towards collapse to a black hole. This maximum is characteristic of each EoS, and the observation of a NS with a mass larger than the predicted one would rule out the corresponding EoS. If one assumes that only nucleonic degrees of freedom are present inside the NS, then one can adopt the EoS discussed above. It turns out that the BHF EoS give a maximum mass close to two solar masses, while the DBHF and variational ones give a slightly larger value, around 2.2 - 2.3 solar masses. However, as already mentioned, the variational EoS of Akmal et al. (1998) becomes superluminal at high density, and actually the corrected one by Heiselberg and Hjorth-Jensen (1999) gives a maximum mass very close to 2 solar masses. Also the BHF EoS with the TBF by Zuo et al. (2002) gives a maximum mass larger than 2 solar masses (Zhou et al. 2004), but unfortunately this EoS also becomes superluminal already at relatively low density. From the astrophysical observations (mainly binary systems) the masses of NS were found, up to a few years ago, mostly concentrated around 1.5 solar masses, the most precise one being 1.44 (Hulse and Taylor 1975). These values look compatible with the theoretical predictions. However, the situation for the maximum mass is by far more complex. First of all it is likely that other degrees of freedom, besides the nucleonic ones, can appear inside a NS, in particular hyperonic matter. The BBG scheme has been extended to matter containing hyperons in several papers. If the most recent hyperon-nucleon and hyperon-hyperon interactions are used, the maximum mass of NS drops to values below the observational limit (Schulze et al. 2006). There are two possibilities to overcome this failure in reproducing the observational constraint. The hyperon-nucleon and hyperon-hyperon interactions are poorly known from laboratory experiments, which presently are able to provide only a few data points to be fitted. Different interactions, still compatible with phenomenology, could be able to produce a stiffer EoS at high density and consequently larger mass values. Another possibility is the appearance of other degrees of freedom; in particular the transition to quark matter could occur in the core of NS. This possibility has been studied extensively by many authors, and indeed it has been found that the onset of the deconfined phase is able to increase the maximum mass to values compatible with the observational limit and ranging from 1.5 to 1.8 solar masses. These results have been obtained within simple quark matter models, like the MIT bag model, the Color Dielectric Model and the Nambu-Jona Lasinio model, with the possible inclusion of color superconductivity (Drago et al. 2005). If perturbative-like corrections to the simple MIT model are introduced, masses up to about 2 solar masses can be obtained (Alford et al. 2005). In any case, all this clearly shows the great relevance of present-day standard observations on NS for our knowledge of the high density nuclear EoS. The astrophysical observations are able to rule out definite EoS or put constraints on them.
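The logic of the TOV-based mass limit can be illustrated in a few lines of code: the equation is integrated outwards for a range of central densities and the turning point of the mass curve is taken as the maximum mass. A simple \\(\\Gamma=2\\) polytrope in geometrized units replaces the microscopic EoS here, and its constant \\(K_{p}\\) is an arbitrary illustrative choice, so the resulting number carries no physical significance.

```python
import numpy as np

# Maximum mass from the Tolman-Oppenheimer-Volkoff equation for a toy
# polytrope P = K_p * eps**Gamma in geometrized units (G = c = 1, km).
# K_p and the scanned central densities are illustrative assumptions.
MSUN_KM = 1.4766                              # one solar mass in km
Gamma, K_p = 2.0, 300.0                       # K_p in km^2 for Gamma = 2

def tov_mass(eps_c, dr=1e-3):                 # dr in km
    r, m, P = 1e-6, 0.0, K_p*eps_c**Gamma
    while P > 1e-12:
        eps = (P/K_p)**(1.0/Gamma)
        dm = 4.0*np.pi*r**2*eps
        dP = -(eps + P)*(m + 4.0*np.pi*r**3*P)/(r*(r - 2.0*m))
        m, P, r = m + dm*dr, P + dP*dr, r + dr
    return m/MSUN_KM

eps_grid = np.linspace(2e-4, 4e-3, 25)        # central energy density, km^-2
masses = [tov_mass(ec) for ec in eps_grid]
i = int(np.argmax(masses))
print(f'max mass ~ {masses[i]:.2f} solar masses at eps_c ~ {eps_grid[i]:.1e} km^-2')
```

A stiffer EoS (larger pressure at given energy density) moves this turning point to larger masses, which is why the maximum mass discriminates among the microscopic EoS.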
As a final remark on this subject one has to mention the recent claims of the observation of NS with masses definitely larger than 2 solar masses (Nice et al. 2005, Ozel 2006). If confirmed, these observations would put serious constraints on the nuclear EoS and would point to an additional repulsion which should be present at high density, i.e. a larger stiffness of the quark matter EoS. It has to be stressed that the nuclear EoS appropriate to NS cannot be directly applied to heavy ion collisions. The NS matter is in beta equilibrium and the strange content is determined by chemical equilibrium, which cannot be established during the collision time of heavy ions. In fact the hyperon multiplicity in heavy ion collisions is much smaller than one and no strange matter can actually be formed. Furthermore, the asymmetry of NS matter is much larger than the values reachable in laboratory experiments. Of course a good microscopic theory must be able to connect the two different physical situations within the same many-body scheme, which is one of the main challenges of nuclear physics.
## 11 Symmetry energy above saturation
At sub-saturation density the symmetry energy of nuclear matter seems to be under control from the theoretical point of view, since the different microscopic calculations agree with each other and the results look only marginally dependent on the adopted nuclear interaction. The symmetry energies calculated within the BBG scheme (Baldo et al. 2004) for different NN interactions (TBF have a negligible effect) are in agreement with each other and are reasonably well reproduced by some of the most used phenomenological Skyrme forces. Variational or DBHF calculations give very similar results. The approximate agreement of phenomenological calculations with the microscopic ones does not hold for all Skyrme forces, as shown e.g. by Chen et al. (2006), and a wide spread of values is actually found below saturation. The microscopic symmetry energy \\(C_{sym}\\) below saturation density can be approximately described by
\\[C_{sym}\\,=\\,31.3\\,(\\rho/\\rho_{0})^{0.6}\\ \\ {\\rm MeV}\\,\\]
where \\(\\rho_{0}\\) is the saturation density. Notice that the exponent is close to the one for a free Fermi gas (of course the absolute values are quite different, by approximately a factor of 2).
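The comparison can be checked directly. In the sketch below the quoted fit is evaluated against the standard non-relativistic kinetic symmetry energy of a free Fermi gas, \\(k_{F}^{2}/6M\\); the sampled densities are arbitrary.

```python
import numpy as np

# Compare the quoted fit with the kinetic symmetry energy of a free,
# non-relativistic Fermi gas; exponent 0.6 vs 2/3 and the rough factor
# of 2 in magnitude are the points made in the text.
HBARC, M, rho0 = 197.327, 938.9, 0.17

def c_sym_fit(rho):
    return 31.3*(rho/rho0)**0.6               # MeV

def c_sym_fg(rho):                            # free gas: eps_F/3 = kF^2/(6M)
    kF = (1.5*np.pi**2*rho)**(1.0/3.0)*HBARC  # MeV
    return kF**2/(6.0*M)

for u in (0.5, 1.0, 2.0):
    rho = u*rho0
    print(f'u = {u:.1f}: fit {c_sym_fit(rho):5.1f} MeV, free gas {c_sym_fg(rho):5.1f} MeV')
```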
The symmetry energy at densities above saturation can be studied with heavy ion reactions in central or semi-central collisions, where nuclear matter can be compressed. Particle emission and production are among the processes which have been widely used for this purpose. Generally speaking the signals coming from these studies are weak, because several competing effects are very often present at the same time and they largely cancel out among each other. In the paper by Ferini et al. (2006) the ratios between the \\(K^{+}\\) and \\(K^{0}\\) rates and between the \\(\\pi^{-}\\) and \\(\\pi^{+}\\) rates have been studied through simulations of \\(Au+Au\\) central collisions in the energy range 0.8 - 1.8 GeV. These ratios seem to be dependent on the strength of the isovector part of the single particle potentials, but the dependence is not so strong, due to the compensation between symmetry potential effects and threshold effects. In any case it has to be stressed that the behaviour of \\(K\\) mesons, or even \\(\\pi\\) mesons, in nuclear matter is a complex many-body problem, which complicates the interpretation of the experimental data.
In this respect one has to notice that it was suggested by Li et al. (1997) that in NS the onset of a kaon condensate could be possible. This can happen due to the steep increase inside the star of the electron chemical potential, which can finally equal the in-medium mass of the \\(K^{-}\\). Since this condensate produces a substantial softening of the EoS, the NS maximum mass turns out to be limited to about 1.5 solar masses. The possibility of kaon condensation was re-examined recently by Li et al. (2006\\({}^{a}\\)). In any case, this value looks in contradiction with the latest observational data and shows once more the great value of the astrophysical studies on NS for our knowledge of dense nuclear matter.
Another process in NS which is sensitive to symmetry energy is cooling. The main mechanism of cooling is the direct Urca (DU) process
\\[n\\to p+e^{-}+\\overline{\\nu}_{e}\\hskip 28.452756pt;\\hskip 28.452756ptp+e^{-}\\to n+\\nu_{e}\\]
where neutrinos and antineutrinos escape from the star, cooling the object on a time scale of the order of a million years. Since the chemical potentials of neutrons and protons are quite different because of the large asymmetry of the NS matter, the conservation of energy and momentum forbids these reactions when the percentage of protons is below 14% (when muons are also included). The percentage of protons is directly determined by the symmetry energy, and therefore the density at which the threshold for DU occurs is directly determined by the density dependence of the symmetry energy. At densities above saturation this threshold can be different for different EoS. In some cases, as for the EoS of Akmal et al. (1998), it is practically absent up to almost the central density of NS, even for the largest masses. In this case other processes, like the indirect Urca process, are the dominant cooling mechanism; they are however much less efficient than the DU process. Models of NS which do not include the DU process are only marginally successful in reproducing cooling data on NS (Yakovlev and Pethick 2004). If the DU threshold is at too \"low\" a density, the cooling process can be too fast, even with the inclusion of nuclear matter pairing (Klahn 2006), which hinders the DU process. This is what occurs for the EoS derived from the DBHF method, whose symmetry energy rises steeply with density. The EoS from BHF, with the inclusion of phenomenological TBF, give a threshold density for the DU process intermediate between these two cases and seem to be compatible with cooling data. For EoS with a similar behaviour of the symmetry energy the scenario of NS cooling involves slow cooling for low masses and fast cooling for the higher masses. Of course a more detailed description of NS cooling requires the many-body treatment of the different processes which can contribute (Blaschke et al. 2004). Finally, the possible onset of quark matter could again change the whole cooling scenario, which would then have to be reconsidered, but the above general considerations can still be applied.
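The quoted threshold follows from simple Fermi-surface kinematics and is instructive to reproduce: the momentum triangle inequality \\(k_{F}^{n}\\leq k_{F}^{p}+k_{F}^{e}\\), charge neutrality and \\(\\mu_{e}=\\mu_{\\mu}\\) fix the critical proton fraction. The sketch below assumes free Fermi seas and ultrarelativistic electrons; the sampled densities are arbitrary.

```python
import numpy as np
from scipy.optimize import brentq

# Direct Urca threshold proton fraction from k_Fn <= k_Fp + k_Fe, with
# charge neutrality n_p = n_e + n_mu and beta equilibrium mu_e = mu_mu.
# Free Fermi seas and ultrarelativistic electrons are assumed.
HBARC, MMU = 197.327, 105.7                   # MeV fm, muon mass (MeV)

def kf(n):                                    # Fermi momentum, fm^-1
    return (3.0*np.pi**2*n)**(1.0/3.0)

def du_gap(x, rho):
    n_p = x*rho
    def charge(n_e):                          # split charge between e and mu
        mu_e = kf(n_e)*HBARC
        n_mu = ((mu_e**2 - MMU**2)**1.5/(3.0*np.pi**2*HBARC**3)
                if mu_e > MMU else 0.0)
        return n_e + n_mu - n_p
    n_e = brentq(charge, 1e-12, n_p)
    return kf((1.0 - x)*rho) - kf(n_p) - kf(n_e)   # <= 0: DU allowed

for rho in (0.34, 0.51):                      # about 2 and 3 rho_0
    x_thr = brentq(lambda x: du_gap(x, rho), 0.01, 0.3)
    print(f'rho = {rho:.2f} fm^-3 : direct Urca threshold x_p = {x_thr:.3f}')
```

Without muons the same kinematics gives the well known value \\(x_{p}=1/9\\approx 11\\%\\); the muons raise it towards the 14% quoted in the text.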
## 12 Conclusions
In this topical review we have presented the microscopic many-body theories, developed along the years, of the nuclear Equation of State, where only nucleonic degrees of freedom are considered. The results of the different approaches have been critically compared, both at the formal and the numerical level, with special emphasis on the high baryon density regime. The non-relativistic Bethe-Brueckner-Goldstone and variational methods are in fair agreement up to 5-6 times the saturation density. When three-body forces are introduced, as required by phenomenology, some discrepancy appears for symmetric nuclear matter above 0.6 fm\\({}^{-3}\\). The dependence of the results on the adopted realistic two-body forces and on the choice of the three-body forces has been analyzed in detail. It is found that a very precise reproduction of the data on three- and four-body nuclear systems, as well as of the nuclear matter saturation point, is too demanding for present-day nuclear force models. In particular, if a very accurate description of few-body systems is achieved by a suitable off-shell adjustment of the two-body forces, i.e. by introducing a non-local component, then the saturation point cannot be reproduced with a reasonable precision. However, local forces with phenomenological three-body forces are able to give an approximate reproduction both of the properties of few-body systems and of the nuclear matter saturation point, with a discrepancy on the binding energy per particle below 200-300 keV.
The relativistic Dirac-Brueckner approach, as applied to nuclear matter, gives an Equation of State above saturation which is stiffer than the non-relativistic approaches. Some ambiguities related to the three-dimensional reduction of the fully relativistic two-body scattering matrix in the medium have still to be resolved.
We have then briefly reviewed the observational data on neutron stars and the experimental results on heavy ion collisions at intermediate and relativistic energies that could constrain the nuclear Equation of State at high baryon density. The EoS of the relativistic Dirac-Brueckner approach seems to present some discrepancies in comparison with the constraints coming from heavy ion collisions and neutron star data. The microscopic non-relativistic EoS turn out to be compatible with the phenomenological constraints available up to now. It looks likely that future developments in astrophysical observations and in laboratory experiments on heavy ion collisions will further constrain the nuclear EoS and give further hints on our knowledge of the fundamental processes which determine the behaviour of nuclear matter at high baryon density.
## References
* [1] Akmal A, Pandharipande V R and Ravenhall D G 1998, Phys. Rev. **C58** 1804.
* [2] Alford M, Braby M, Paris M and Reddy S 2005, ApJ **629** 969.
* [3] Baldo M 1999 _Nuclear Methods and the Nuclear Equation of State_,Chapter 1, International Review of Nuclear Physics vol 8 (World Scientific) Baldo M Ed.
* [4] Baldo M, Giansiracusa G, Lombardo U and Song H Q 2000, Phys. Lett. **B 473** 1.
* [5] Baldo M, Fiasconaro A, Giansiracusa G, Lombardo U and Song H Q 2001, Phys. Rev. **C 65** 017303.
* [6] Baldo M, Maieron C, Schuck P and Vinas X 2004, Nucl. Phys. **A 736** 241.
* [7] Baldo M and Maieron C 2005, Phys. Rev. **C 72** 034005.
* [8] Bethe H A, Brandow B H and Petschek A G, 1962, _Phys. Rev._**129** 225.
* [9] Blaschke D, Grigorian H and Voskresensky D N 2004, Astron. Astrophys. **424** 979.
* [10] Brockmann R and Machleidt R 1999, _Nuclear Methods and the Nuclear Equation of State_, Chapter 2, International Review of Nuclear Physics vol 8 (World Scientific) Baldo M Ed.
* [11] Brown G E, Weise W, Baym G and Speth J 1987, _Comm. Nucl. Part. Phys._
* [12] Ozel F 2006, Nature **441** 1115 ; astro-ph/0605106.
* [13] Newton R G 1966, _Scattering Theory of Waves and Particles_, McGraw-Hill.
* [14] Pandharipande V R and Wiringa R B 1979, Rev. Mod. Phys. **51** 821.
* [15] Rajaraman R and Bethe H A 1967, Rev. Mod. Phys. **39** 745.
* [16] Schulze H-J, Polls A, Ramos A and Vidana I 2006, Phys. Rev. **C 73** 058801.
* [17] Stoks V G J, Klomp R A M, Terheggen C P F and de Swart J J 1994, Phys. Rev. **C 49** 2950.
* [18] Ter Haar B and Malfliet R 1987, Phys. Rep. **149** 207.
* [19] van Dalen E N E, Fuchs Ch and Faessler A 2005, Phys. Rev. **C 72** 065803.
* [20] Wiringa R B, Stoks V G J and Schiavilla R 1995, Phys. Rev. **C 51** 38.
* [21] Wiringa R B, Smith R A and Ainsworth T L 1984, Phys. Rev. **C 29** 1207.
* [22] Yakovlev D G and Pethick C J 2004, Ann. Rev. Astron. Astrophys. **42** 169.
* [23] Zhou X R, Burgio F G, Lombardo U, Schulze H-J and Zuo W 2004, Phys. Rev. **C 69** 018801.
* [24] Zuo W, Lejeune A, Lombardo U and Mathiot J-F 2002, Nucl. Phys. **A 706** 418 ; EPJA **14** 469.
arxiv-format/0703053v1.md | # Extraction of Cartographic Objects on High Resolution Satellite Images for Object Model Generation
Guray Erus, Nicolas Lomenie
_SIP-CRIP5 Laboratory, Universite de Paris 5_
_{egur,lomenie}@math-info.univ-paris5.fr_
## 1 Introduction
With the spread of very high-resolution satellite images, more sophisticated image processing systems are required for the automatic extraction of information. In the frame of a research project of CNES1 to develop tools and algorithms to exploit images acquired by the new generation Pleiades satellites, a database of \"cartographic object images\" has been prepared. This database will be used to detect the target objects in a global scene, on a very large satellite image. For this task, low-level pixel based classification methods are not very successful, mainly because they disregard the structural features of the target objects. Man-made geographical objects have well-defined (but quite variable), mostly geometrical structures. In low-resolution, the objects disappear and become part of the texture. However, in high resolution, the shape features and the spatial relations between the objects are perceivable and exploitable.
Footnote 1: French National Space Agency
As a first module, a detection method that uses very basic structural and spatial features of the objects has been developed [1]. This method generates an object model from the given examples. As input, the system uses sample images of objects manually segmented by an expert. The images are transformed to Attributed Relational Graphs (ARGs). A model representing the common features of the objects is constructed applying graph-matching algorithms [2].
In order to generate the model from the initial satellite images, the preliminary task is the extraction of the objects from the sample images (as done by the manual segmentation). The satellite images of the same object in two different spectral bands and resolutions are provided (Fig 1): A panchromatic image (\\(I_{p}\\)) with resolution of 2.5 meters, and a multispectral (\\(I_{u}\\)) image with resolution of 10 meters.
This paper describes a hybrid segmentation approach that uses the two images of different resolutions, and that combines region-based and edge-based methods by a \"marker-controlled watershed method using edges\". The originality of this method lies in the fusion of complementary information from different sources and methods to obtain a good segmentation.
The ultimate objective is to combine the two modules and to integrate them in an interactive object detection software based on query by example. The object model learned from the examples will be used asa semantic filter (together with a radiometric filter) in the detection of the candidate regions.
## 2 Model Generation
The segmented images are first split into primitives either using simple geometric shapes (rectangles and circles) or using the skeleton of the objects. They are then transformed into ARGs. The features of primitives are stored in the vertices of the graph and the features of connections in the edges. The vertex attributes are the type of the object and the edge attributes are the type and direction of the connection.
From the ARGs belonging to a class of object, the prototypes, the most common representations of each class, are detected using an exact matching algorithm. The model is obtained by finding the Maximal Common Subgraph (_MaxCSg_) and the Minimal Common Supergraph (_MinCSg_) from the prototypes of an object. The main steps of the algorithm are given in figure 2.
To evaluate the quality of a model, the metric proposed by Bunke [3] to calculate the edit distance between two graphs is adapted to the distance between an ARG and the model.
Figure 3 presents the models generated for the bridge and round-about objects. The models are simple and quite similar to manually generated models. They represent the geometrical and spatial characteristics of the target objects.
The manually segmented images are not always available. In order to use the method on a real world system, the target objects should be first extracted from the background. Consequently, our work is concentrated on the extraction of target objects from the satellite images.
## 3 Extraction of Target Objects
Extraction of objects on \\(I_{p}\\) images using naive methods does not give a satisfactory result due to the imprecision of the boundaries separating cartographic objects from the background. On the other hand, while the objects can be well differentiated from the background on \\(I_{n}\\) images, their shape is too coarse. The use of both images makes it possible to overrun these limitations.
The steps of the method are given in the following sections.
### Segmentation of the \\(I_{n}\\) Image.
The first step is to clip the central part of the \\(I_{n}\\) image and to magnify it by a factor of 4, so that its resolution becomes the same as the resolution of the \\(I_{s}\\) image. A linear interpolation is applied to magnify the image.
The \\(I_{n}\\) image consists of three channels I\\({}_{1}\\), I\\({}_{2}\\), I\\({}_{3}\\) with spectral ranges 0.50-0.59 \\(\\upmu\\)m, 0.61-0.69 \\(\\upmu\\)m and 0.78-0.89 \\(\\upmu\\)m. The target objects are consisting of pieces of possibly different types of roads in a specific spatial arrangement. We generated the linear combination of the three channels, that discriminates better the roads from other objects, fixing the coefficients empirically. A deeper analysis in parameter space combined with prior knowledge on reflectance values of spectral bands may result in a much better discrimination. The resulting image I\\({}_{i}\\) is obtained by:
\\[I_{f}=(I_{1}+I_{2})\\ *\\ 0.3\\ -\\ I_{3}\\]
On \\(I_{f}\\) a hysteresis threshold is applied to extract the region R\\({}_{M}\\) containing the target object. The threshold value is calculated by finding the most frequent gray value in a 5x5 central window on all \\(I_{n}\\) images. The method is very successful in detecting the true positives (Fig. 4). There are some false positives due to the incorrect threshold value for some of the images, but these are acceptable considering that the result will be refined in the next steps.
### Matching the Mask.
It is necessary to search for the exact location of the mask on the \\(I_{s}\\) image. There exist many sophisticated image registration techniques [4]. In our case, we don't
Figure 3: Bridge and round-about models: using rectangles and circles, using line segments and circles
Figure 2: Model generation
deal with the transformations and deformations that the image has been subject to, and the difference between centers is limited to at most 10 pixels in horizontal and in vertical in both directions. A matching method moving the center of the mask in a 20x20 window located on the center of the \\(I_{s}\\) image would be sufficient to visit all \"candidate\" points.
For each candidate point \\(p_{i}\\) a matching score should be calculated. The mask, in the correct place, should contain the whole target object. The most pertinent information that indicates the location of the object is given by the edges. A correctly placed mask that contains an object should also contain its edges. Besides that, the edges in the exterior part of an object would remain out of the mask if the mask is not well placed. Consequently, the score is calculated using the number of edge points contained in a region.
_Fig. 4. Mask on Im image and on Ip image after matching_
Sub-pixel precision edges are obtained using Canny edge detection algorithm [5]. In order to eliminate noisy short edge segments, the edges are smoothed and edges with closer extremities are merged. Finally short edges are eliminated. Before calculating the score, a dilation is applied on the mask so that it would be slightly bigger than the object. In case of equal score, the location that gives the smallest variance score for the masked region is selected. The mask center is shifted to the point with the highest score (Fig. 4).
### Extraction of the object.
The region covered by the mask is an approximation of the object shape obtained from a lower resolution image. The mask should be refined in order to extract the object.
Region-based methods fail to locate the object boundaries well. It is also difficult to select the regions belonging to the object. Edge-based methods give us a good localization of the separation lines between the object and the background. However, the extracted edges are not continuous, and consequently they don't give us closed regions.
It is a very common approach to integrate region growing and edge detection methods, as Pavlidis et al. [6] that uses the edge intensities to eliminate the boundaries in an over- segmented image, or Le Moigne et al. [7] that uses the edge features to determine the stopping criterion of the region growing algorithm.
In his multi-scale segmentation algorithm, [8] proposed to increase the intensities of the straight edge segments obtained from a different source, on the gradient image before applying the watershed. In that way these segments are preserved in the final watersheds.
The method proposed in this paper is a marker-controlled watershed segmentation that uses the edge information as in [8].
Watershed method produce an important over-segmentation. A solution is to apply a fusion algorithm. Since we look for a unique object, a very appropriate alternative is to use markers to limit the number of resultant regions. The main difficulty is to detect the markers. Usually, this is done through user interaction. In our case, a mask covering the object is already obtained. It can be assumed that the skeleton of the mask should match with the skeleton of the object, consequently it is a good marker for the object. Similarly the external boundaries of the mask can be used to mark the background.
The steps of the final algorithm are as follows:
1. _Detect:_ \\(S_{\\text{m}}=\\) _skeleton of the mask M._ \\(B_{\\text{m}}=\\) _the external boundary of M after dilation._ \\(Ig=\\) _gradient of Is_ \\(Es=\\) _edges of Is (after previously mentioned post processing)_
2. _Set Ig(Es) to max(Ig) (Pixels belonging to the edges will have the highest gradient values)_
3. _Modify Ig so that it has minima only at Sm and Bm._
4. _Apply the watershed._
The algorithm gives a unique region as the object around the skeleton bounded by the detected edges.
### Experimental Results
We tested our system on 20 bridge and 20 round-about objects.
_Table 1. The cumulative results_We evaluated the results by manual inspection in the end of each step and by comparing them with images segmented by an expert at the end of the whole process. Table 1 gives a brief overview of the results at the end of each step.
The results of the first step are very successful for the bridges. In almost all of the images the object is partially or totally detected. For the round-abouts there are some bad segmentations mainly due to two reasons: the small size of the object in some images makes the segmentation very difficult in the low resolution image and buildings very near to the round-abouts are confused with the roads. In the second step, the rate of exact match is very high. The incorrect matches occur mainly when the mask size is large. After eliminating the errors from the previous steps, the results of the extraction phase are very successful.
_Figure 5. The extracted objects_
A qualitative analysis seems us more appropriate for the final step. The segmentation follows exactly the edge lines and connects them according to the maximum values of the gradient values in disconnected regions. This behavior is conform to what we aimed using a hybrid approach.. However, the result is highly dependent on the skeleton obtained from the mask, and when there are multiple edges it may follow the incorrect edges (Fig. 5). Defining a weight on the length and the position of the edge line may help to solve this problem.
## 4 Conclusions
This study is a first approach to a difficult problem, and we proposed a system that can be refined with several improvements. The extraction module gives promising results. However, the results should be improved in order to apply the model generation directly on them.
Detection of circles and line segments may help correcting the irregularities of the final segmentation. For the round-about images, the early detection of the central circle using Hough transformation may be used to improve the results of the first two steps. Another alternative is to implement deformable models.
We can propose several refinements and ameliorations for the model generation. Considering the numerical attributes, using a fuzzy modelisation of the symbolic concepts and integrating methods issued from qualitative spatial reasoning as in [9] may enhance the results of our system.
Another important perspective is to iterate the segmentation process using the generated model so that the high-level knowledge would be used in low-level processing.
## 5 Acknowledgements
We would like to thank French Space Agency - CNES, Toulouse, and especially Gilbert Pauc and Jordi Inglada for their support. We would also like to thank Dr. Z. Hamrouni and Cultural Service of the French Embassy in Ankara.
## 10 References
[1] Erus G., Lomenie N., \"Automatic Learning of Structural Models of Cartographic Objects\", IAPR - Workshop on Graph-based Representations in Pattern Recognition, 2005, Poitiers.
[2] Cordella L. P., Foggia P., Sansone C., M. Vento: Learning structural shape descriptions from examples. Pattern Recognition Letters. 23(2002) 1427--1437
[3] Bunke H.,Shearer K.: A Graph distance metric based on the Maximal Common Subgraph. Pattern Recognition Letters 19 (1998) 255--259
[4] Brown,L.:A Survey of Image Registration Techniques, ACM Computing Surveys. 25-376, 1992
[5] Canny, J.: A Computational Approach to Edge Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol 8, No. 6, Nov 1986.
[6] Pavlidis T. Liow Y.: Integrating region growing and edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(3):225-233 Mar 1990
[7] Le Moigne J., Tilton J.C.: Refining image segmentation by integration of edge and region data. IEEE Transactions on Geoscience and Remote Sensing, 33-3 May 1995.
[8] Bloch, I., Colliot, O., Camara, O., Geraud, T., Fusion of spatial relationships for guiding recognition, example of brain structure recognition in 3D MRI, PRL(26), No. 4, March 2005, pp. 449-457.
[9] Guigues L. \"Modeles multi-echelles pour la segmentation d'images\" Ph. D. Thesis, 2003 | The aim of this study is to detect man-made cartographic objects in high-resolution satellite images. New generation satellites offer a sub-metric spatial resolution, in which it is possible (and necessary) to develop methods at object level rather than at pixel level, and to exploit structural features of objects. With this aim, a method to generate structural object models from manually segmented images has been developed. To generate the model from non-segmented images, extraction of the objects from the sample images is required. A hybrid method of extraction (both in terms of input sources and segmentation algorithms) is proposed: A region based segmentation is applied on a 10 meter resolution multi-spectral image. The result is used as marker in a \"marker-controlled watershed method using edges\" on a 2.5 meter resolution panchromatic image. Very promising results have been obtained even on images where the limits of the target objects are not apparent. | Write a summary of the passage below. |
arxiv-format/0703649v1.md | # Equation of State for supernova explosion simulations
D.P.Menezes
Depto de Fisica - CFM - Universidade Federal de Santa Catarina Florianopolis - SC - CP. 476 - CEP 88.040 - 900 - Brazil
C. Providencia
Centro de Fisica Teorica - Dep. de Fisica - Universidade de Coimbra - P-3004 - 516 Coimbra - Portugal
## I Introduction
The observed pulsars, also commonly known as neutron stars, are believed to be the remnants of type II supernova explosions. These type II supernovae appear at the end of the evolution of very massive stars. The core of these stars collapses to a density around several times nuclear saturation density. A rebound takes place and drives a shockwave which expells most of the original mass of the star. The simulation of supernova explosions and the conditions for them to take place have been subject of investigation for the last 30 years. Depending on certain thermodynamical conditions present in the equations of state (EoS), the supernova explosion simulation is successful or not [1]. The EoS built for nuclear astrophysics purposes depends on a series of thermodynamic properties which are obtained for certain temperatures, densities and matter composition. Hence, an efficient EoS which is reasonably accurate is mandatory for a supernova explosion simulation to be successful.
In order to obtain an equation of state (EoS) for low and high density matter suitable to astropysical applications, the relativistic non linear Walecka model (NLWM) [2; 3] is used. For matter to be neutral, electrons are also included. For sufficiently high densities the formation of hyperons is energetically favored. Normally, the appearance of the strange baryons softens the EoS. Our formalism is described next with the inclusion of the whole baryonic octet for the sake of completeness but, in a first step towards a complete desciption of a supernova explosion, only protons and neutrons are considered. Next we incorporate strangeness but restrict ourselves to the inclusion of \\(\\Sigma^{-}\\). Convergence problems are well known to exist at low temperatures and below certain densities. Appropriate approximations are then utilized. Blackbody radiation is considered and, whenever convenient, electrons and positrons are described separately. Future prospects for obtaining more sophisticated EoS are discussed.
## II Hadronic matter equation of state
A common extension of the NLWM considers the inclusion of the whole baryonic octet (\\(n\\), \\(p\\), \\(\\Lambda\\), \\(\\Sigma^{+}\\), \\(\\Sigma^{0}\\), \\(\\Sigma^{-}\\), \\(\\Xi^{-}\\), \\(\\Xi^{0}\\)) in the place of the nucleonic sector. In this work we present the complete formalism, but numerical calculations were performed with nucleons only.
The lagrangian density of the NLWM reads:
\\[{\\cal L}={\\cal L}_{B}+{\\cal L}_{mesons}+{\\cal L}_{leptons}, \\tag{1}\\]
where
\\[{\\cal L}_{B}=\\sum_{B}\\tilde{\\psi}_{B}\\left[\\gamma_{\\mu}\\left(i\\partial^{\\mu} -g_{vB}V^{\\mu}-g_{\\rho B}{\\bf t}\\cdot{\\bf b}^{\\mu}\\right)-\\left(M_{B}-g_{sB} \\phi\\right)\\right]\\psi_{B},\\]
with \\(\\sum_{B}\\) extending over the chosen baryons B,
\\[g_{sB}=x_{sB}\\ g_{s},\\ \\ g_{vB}=x_{vB}\\ g_{v},\\ \\ g_{\\rho B}=x_{\\rho B}\\ g_{\\rho}\\]
and \\(x_{sB}\\), \\(x_{vB}\\) and \\(x_{\\rho B}\\) are equal to 1 for the nucleons and acquire different values in different parametrizations for the other baryons,
\\[{\\cal L}_{mesons}=\\frac{1}{2}(\\partial_{\\mu}\\phi\\partial^{\\mu}\\phi-m_{s}^{2} \\phi^{2})-\\frac{1}{3!}\\kappa\\phi^{3}-\\frac{1}{4!}\\lambda\\phi^{4}-\\frac{1}{4} \\Omega_{\\mu\
u}\\Omega^{\\mu\
u}+\\frac{1}{2}m_{v}^{2}V_{\\mu}V^{\\mu}\\]\\[-\\frac{1}{4}{\\bf B}_{\\mu\
u}\\cdot{\\bf B}^{\\mu\
u}+\\frac{1}{2}m_{\\rho}^{2}{\\bf b}_{ \\mu}\\cdot{\\bf b}^{\\mu}, \\tag{2}\\]
where \\(\\Omega_{\\mu\
u}=\\partial_{\\mu}V_{\
u}-\\partial_{\
u}V_{\\mu}\\), \\({\\bf B}_{\\mu\
u}=\\partial_{\\mu}{\\bf b}_{\
u}-\\partial_{\
u}{\\bf b}_{\\mu}-g_{ \\rho}({\\bf b}_{\\mu}\\times{\\bf b}_{\
u})\\) and \\({\\bf t}\\) is the isospin operator.
In the above lagrangian, neither pions nor kaons are included because they vanish in the mean field approximation which is used in the present work and we do not consider the possible contribution of pion and kaon condensates. The leptonic sector is included as a free fermi gas which does not interact with the hadrons. Its lagrangian density reads:
\\[{\\cal L}_{leptons}=\\sum_{l}\\bar{\\psi}_{l}\\left(i\\gamma_{\\mu}\\partial^{\\mu}-m_{ l}\\right)\\psi_{l}. \\tag{3}\\]
In the present work only electrons (and positrons are considered). In the mean field approximation (MFA) (see [4; 5], for instance), the meson equations of motion read:
\\[\\phi_{0}=-\\frac{\\kappa}{2m_{s}^{2}}\\phi_{0}^{2}-\\frac{\\lambda}{6m_{s}^{2}} \\phi_{0}^{3}+\\sum_{B}\\frac{g_{s}}{m_{s}^{2}}x_{sB}\\ \\rho_{sB}, \\tag{4}\\]
\\[V_{0}=\\sum_{B}\\frac{g_{v}}{m_{v}^{2}}x_{vB}\\ \\rho_{B}, \\tag{5}\\]
\\[b_{0}=\\sum_{B}\\frac{g_{\\rho}}{m_{\\rho}^{2}}x_{\\rho B}\\ t_{3B}\\ \\rho_{B}, \\tag{6}\\]
with
\\[\\rho_{B}=2\\int\\frac{{\\rm d}^{3}p}{(2\\pi)^{3}}(f_{B+}-f_{B-}),\\quad\\rho=\\sum_{ B}\\rho_{B}, \\tag{7}\\]
\\[\\rho_{sB}=\\frac{1}{\\pi^{2}}\\int p^{2}{\\rm d}p\\frac{M_{B}^{*}}{\\sqrt{p^{2}+{M_ {B}^{*}}^{2}}}(f_{B+}+f_{B-}),\\]
with \\(M_{B}^{*}=M_{B}-g_{sB}\\ \\phi\\), \\(B\\pm\\) stands respectively for baryons and anti-baryons, \\(t_{3B}\\) is the third component of the baryon isospin, \\(E^{*}({\\bf p})=\\sqrt{{\\bf p}^{2}+{M^{*}}^{2}}\\) and
\\[f_{B\\pm}=1/\\{1+\\exp[(E^{*}({\\bf p})\\mp\
u_{B})/T]\\}\\, \\tag{8}\\]
where the effective chemical potential is
\\[\
u_{B}=\\mu_{B}-g_{vB}V_{0}-g_{\\rho B}\\ t_{3B}\\ b_{0}. \\tag{9}\\]
Within the MFA, the meson fields are taken as classical fields whilst the baryon fields remain quantum [2]. On the other hand, the Dirac equation, which is the equation of motion for the baryons is not solved directly but instead used in the calculation of the densities appearing in the meson equations of motion. The system has then to be solved self-consistently.
At \\(T=0\\), the distribution functions for baryons are replaced by step functions. In this case equation (7) becomes simply \\(\\rho_{B}=k_{FB}^{3}/3\\pi^{2}.\\) The baryonic energy density in the mean field approximation reads:
\\[{\\cal E}_{B} = 2\\sum_{B}\\int\\frac{{\\rm d}^{3}p}{(2\\pi)^{3}}\\sqrt{{\\bf p}^{2}+{M^ {*}}^{2}}\\left(f_{B+}+f_{B-}\\right)+ \\tag{10}\\] \\[\\frac{m_{s}^{2}}{2}\\phi_{0}^{2}+\\frac{\\kappa}{6}\\phi_{0}^{3}+ \\frac{\\lambda}{24}\\phi_{0}^{4}+\\frac{m_{v}^{2}}{2}V_{0}^{2}+\\frac{\\xi g_{v}^{ 4}}{8}V_{0}^{4}+\\frac{m_{\\rho}^{2}}{2}b_{0}^{2}\\]
and the related pressure becomes
\\[P_{B} = \\frac{1}{3\\pi^{2}}\\sum_{B}\\int{\\rm d}p\\frac{{\\bf p}^{4}}{\\sqrt{ {\\bf p}^{2}+{M^{*}}^{2}}}\\left(f_{B+}+f_{B-}\\right) \\tag{11}\\] \\[-\\frac{m_{s}^{2}}{2}\\phi_{0}^{2}-\\frac{\\kappa\\phi_{0}^{3}}{6}- \\frac{\\lambda\\phi_{0}^{4}}{24}+\\frac{m_{v}^{2}}{2}V_{0}^{2}+\\frac{\\xi g_{v}^{ 4}V_{0}^{4}}{24}+\\frac{m_{\\rho}^{2}}{2}b_{0}^{2}.\\]The entropy of the baryons are taken as
\\[{\\cal S}_{B} = -2\\sum_{B}\\int\\frac{{\\rm d}^{3}p}{(2\\pi)^{3}}\\left[f_{B+}log(f_{B+}) +(1-f_{B+})log(1-f_{B+})\\right. \\tag{12}\\] \\[+\\left.f_{B-}log(f_{B-})+(1-f_{B-})log(1-f_{B-})\\right]\\]
and hence the free energy reads
\\[{\\cal F}_{B}={\\cal E}_{B}-T{\\cal S}_{B}. \\tag{13}\\]
Notice again that the above expressions were obtained for finite temperature, but they can be easily modified for \\(T=0\\). Whenever T=0, no anti-particles are present.
At very low T (\\(-0.4\\leq log(T)<-0.1\\) MeV) there are well known convergence problems due to the distribution functions and in this case we use the Sommerfeld approximation for the baryons [6]. The effective chemical potentials, in particular, read
\\[\
u_{i}=\\epsilon_{Fi}-\\frac{\\pi^{2}}{6}\\,T^{2}\\,\\frac{\\left(k_{Fi}^{2}+ \\epsilon_{Fi}^{2}\\right)}{k_{Fi}\\epsilon_{Fi}},\\quad i=p,n. \\tag{14}\\]
For the net electron density we have
\\[\\rho_{e}=2\\int\\frac{{\\rm d}^{3}p}{(2\\pi)^{3}}(f_{e^{-}}-f_{e^{+}}), \\tag{15}\\]
where the distribution functions for the particles (\\(e^{-}\\)) and antiparticles (\\(e^{+}\\)) are given by
\\[f_{e^{\\mp}}=1/(1+\\exp[(\\epsilon\\mp\\mu_{e})/T])\\;, \\tag{16}\\]
with \\(\\mu_{e}\\) as the chemical potential. In order to ensure charge neutrality, electron and proton densities have to be equal, i.e.,
\\[\\rho_{e}=\\rho_{p}. \\tag{17}\\]
Next we always distinguish between electrons (\\(e^{-}\\)) and positrons (\\(e^{+}\\)) and when both particles and antiparticles are considered we refer to the related quantity with the index \\(e\\). At \\(T=0\\), the distribution functions for the leptons are also replaced by step functions and no positrons are left. In this case equation (15) becomes simply \\(\\rho_{e}=k_{Fe}^{3}/3\\pi^{2}\\). The thermodynamic quantities read
\\[{\\cal E}_{e}=2\\int\\frac{d^{3}p}{\\left(2\\pi\\right)^{3}}\\sqrt{{\\bf p}^{2}+m_{e}^ {2}}(f_{e^{-}}+f_{e^{+}}), \\tag{18}\\]
\\[{\\cal E}_{e^{-}}=2\\int\\frac{d^{3}p}{\\left(2\\pi\\right)^{3}}\\sqrt{{\\bf p}^{2}+m_ {e}^{2}}\\;f_{e^{-}},\\quad{\\cal E}_{e^{+}}=2\\int\\frac{d^{3}p}{\\left(2\\pi\\right) ^{3}}\\sqrt{{\\bf p}^{2}+m_{e}^{2}}\\;f_{e^{+}}, \\tag{19}\\]
\\[P_{e}=\\frac{1}{3\\pi^{2}}\\int\\frac{{\\bf p}^{4}dp}{\\sqrt{{\\bf p}^{2}+m_{e}^{2}} }(f_{e^{-}}+f_{e^{+}}), \\tag{20}\\]
\\[P_{e^{-}}=\\frac{1}{3\\pi^{2}}\\int\\frac{{\\bf p}^{4}dp}{\\sqrt{{\\bf p}^{2}+m_{e}^ {2}}}\\;f_{e^{-}},\\quad P_{e^{+}}=P_{e}-P_{e^{-}} \\tag{21}\\]
\\[{\\cal S}_{e}=\\frac{{\\cal E}_{e}+P_{e}-\\mu_{e}\\rho_{e}}{T},\\quad{\\cal S}_{e^{-} }=\\frac{{\\cal E}_{e^{-}}+P_{e^{-}}-\\mu_{e}\\rho_{e^{-}}}{T},\\quad{\\cal S}_{e^{+ }}=\\frac{{\\cal E}_{e^{+}}+P_{e^{+}}+\\mu_{e}\\rho_{e^{+}}}{T}, \\tag{22}\\]
\\[{\\cal F}_{e}={\\cal E}_{e}-T{\\cal S}_{e},\\quad{\\cal F}_{e^{-}}={\\cal E}_{e^{-} }-T{\\cal S}_{e^{-}},\\quad{\\cal F}_{e^{+}}={\\cal E}_{e^{+}}-T{\\cal S}_{e^{+}}, \\tag{23}\\]The particle fraction is defined as \\(y_{i}=\\rho_{i}/\\rho\\), where \\(i=p,n,e^{-},e^{+}\\), and \\(\\rho\\) is the total baryonic density.
At very low densities a Boltzman distribution for relativistic electrons and positrons is necessary [7]. The low density limit depends on the temperature and is numerically chosen such that eqs. (15) and (17) are equal within a \\(10^{-6}\\) precision. If the difference is larger than this limit, eq. (17) is chosen and the corresponding chemical potentials are
\\[\
u_{e}=m_{e}+log\\left[\\frac{\\rho_{e}}{g}\\left(\\frac{2\\pi}{Tm_{e}}\\right)^{3/2 }\\right], \\tag{24}\\]
with \\(g=2\\) defined as the spin multiplicity, Moreover,
\\[\\rho_{e}=(e^{\\mu_{e}/T}-e^{-\\mu_{e}/T})\\frac{I_{1}}{\\pi^{2}}, \\tag{25}\\]
or analogously,
\\[\\mu_{e}=T\\log[\\frac{z}{2}+\\sqrt{\\frac{z^{2}}{4}+1}],\\qquad z=\\pi^{2}\\rho_{e}/ I_{1}. \\tag{26}\\]
The energy density and pressure become
\\[{\\cal E}_{e}=(e^{\\mu_{e}/T}-e^{-\\mu_{e}/T})\\frac{I_{2}}{\\pi^{2}}, \\tag{27}\\]
\\[P_{e}=(e^{\\mu_{e}/T}-e^{-\\mu_{e}/T})\\frac{(I_{2}-m_{e}^{2}I_{0})}{3\\pi^{2}}, \\tag{28}\\]
where \\(\\beta=1/T\\), \\(I_{0}=\\frac{m_{e}}{\\beta}K_{1}(m_{e}\\beta)\\), \\(I_{1}=-\\frac{dI_{0}}{d\\beta}=\\frac{m_{e}}{\\beta^{2}}K_{1}(y)-\\frac{m_{e}^{2}} {\\beta}\\frac{dK_{1}}{dy}\\), \\(I_{2}=-\\frac{dI_{1}}{d\\beta}=\\frac{2m_{e}}{\\beta^{3}}K_{1}(y)-2\\frac{m_{e}^{2 }}{\\beta^{2}}\\frac{dK_{1}}{dy}+\\frac{m_{e}^{3}}{\\beta}\\frac{d^{2}K_{1}}{dy^{2}}\\), with \\(y=m_{e}\\beta\\), \\(K_{i}\\) are modified Bessel functions and \\(K_{\
u}^{\\prime}(x)=-\\frac{1}{2}\\left(K_{\
u-1}(x)+K_{\
u+1}(x)\\right)\\).
The photons are taken into account via blackbody radiation and the main expressions are
\\[P_{\\gamma}=\\frac{\\pi^{2}T^{4}}{45},\\quad{\\cal S}_{\\gamma}=\\frac{4P_{\\gamma}}{ T},\\quad{\\cal E}_{\\gamma}=3P_{\\gamma},\\quad{\\cal F}_{\\gamma}={\\cal E}_{\\gamma}-T{ \\cal S}_{\\gamma}. \\tag{29}\\]
For the hadron phase we have used the GM3 parametrization proposed by Glendenning and Moszkowski [8], corresponding to an effective mass \\(M^{*}=0.78\\,M\\) and incompressibility \\(K=240\\) MeV at the saturation density \\(\\rho_{0}=0.153\\) fm\\({}^{-3}\\). The coupling constants are \\(\\left(\\frac{g_{s}}{m_{s}}\\right)^{2}=9.927\\), \\(\\left(\\frac{g_{v}}{m_{v}}\\right)^{2}=4.82\\), \\(\\left(\\frac{g_{e}}{m_{\\rho}}\\right)^{2}=4.79\\), \\(\\kappa=0.017318gs^{3}\\), \\(\\lambda=-0.014526gs^{4}\\).
In our codes the inputs are the temperature, proton fraction and baryonic density. The grids for these quantities are \\(-0.4\\leq log(T)\\leq 2\\) (MeV) with mesh intervals of 0.1,
\\(0\\leq y_{p}\\leq 0.6\\) with mesh intervals of 0.02,
\\(3\\leq log(P)\\leq 15.7\\) (g/cm\\({}^{3}\\)).
In the output we have \\(y_{p}\\), \\(y_{p}\\), \\(y_{e}\\)-, \\(y_{e^{+}}\\), \\(y_{e}\\),
\\(\\mu_{p}-M\\) (MeV), \\(\\mu_{n}-M\\) (MeV), \\(\\mu_{e}-m_{e}\\) (MeV), \\(-\\mu_{e}+m_{e}\\) (MeV),
\\({\\cal E}_{B}\\) (erg/g), \\({\\cal E}_{e^{-}}\\) (erg/g), \\({\\cal E}_{e^{+}}\\) (erg/g), \\({\\cal E}_{\\gamma}\\) (erg/g), \\({\\cal E}_{B}+{\\cal E}_{e}+{\\cal E}_{\\gamma}\\) (erg/g),
\\({\\cal S}_{B}\\) (\\(k_{B}\\)/baryon), \\({\\cal S}_{e^{-}}\\) (\\(k_{B}\\)/baryon), \\({\\cal S}_{e^{+}}\\) (\\(k_{B}\\)/baryon), \\({\\cal S}_{\\gamma}\\) (\\(k_{B}\\)/baryon), \\({\\cal S}_{B}+{\\cal S}_{e}+{\\cal S}_{\\gamma}\\) (\\(k_{B}\\)/baryon),
\\(P_{B}\\) (dyne/cm\\({}^{2}\\)), \\(P_{e^{-}}\\) (dyne/cm\\({}^{2}\\)), \\(P_{e^{+}}\\) (dyne/cm\\({}^{2}\\)), \\(P_{\\gamma}\\) (dyne/cm\\({}^{2}\\)), \\(P_{B}+P_{e}+P_{\\gamma}\\) (dyne/cm\\({}^{2}\\)),
\\({\\cal F}_{B}+{\\cal F}_{e}+{\\cal F}_{\\gamma}\\) (MeV/fm\\({}^{3}\\)).
## III Threshold density for matter with strangeness
To include strangeness in the EoS, \\(\\Sigma^{-}\\) was first chosen because in \\(\\beta\\)- equilibrium matter at zero temperature and with the GM3 parametrization, its onset appears at lower densities than the onset of the least massive hyperon, the \\(\\Lambda\\). Depending on the parametrization chosen for the NLWM and on the hyperon-meson coupling constants, this trend may change at higher temperatures and hence, in a future work \\(\\Sigma^{-}\\) and \\(\\Lambda\\) should be included simultaneously.
In compact stars, stellar matter is in chemical equilibrium, which means that
\\[\\mu_{\\Sigma^{0}}=\\mu_{\\Xi^{0}}=\\mu_{\\Lambda}=\\mu_{n},\\quad\\mu_{\\Sigma^{-}}=\\mu _{\\Xi^{-}}=\\mu_{n}+\\mu_{e},\\quad\\mu_{\\Sigma^{+}}=\\mu_{p}=\\mu_{n}-\\mu_{e}.\\]In an explosive enviroment like the one existing in a supernova, chemical equilibrium is not supposed to be enforced. However, we consider that the time during which the supernova explosion occurs is much longer than the characteristic time of the weak interaction in such a way that the strangeness fraction is expected to be finite.
In order to build an EoS containing strangeness and appropriate for a supernova simulation we define for each energy density, temperature and proton fraction a threshold density above which a given fraction of strangeness, \\(y_{s}\\), is allowed to exist. We determine the threshold density from the condition of \\(\\beta\\)-equilibrium for the \\(\\Sigma^{-}\\), which is imposed through the two independent chemical potentials (\\(\\mu_{n}\\) and \\(\\mu_{e}\\)). In this case, at \\(T=0\\), the corresponding effective chemical potential and density are
\\[\
u_{\\Sigma^{-}}=\\sqrt{k_{F\\Sigma}^{2}+{M^{*}}_{\\Sigma}^{2}}=\\mu_{\\Sigma^{-}}- g_{v\\Sigma}V_{0}+g_{\\rho\\Sigma}b_{0},\\]
and
\\[\\rho_{\\Sigma}=\\frac{k_{F\\Sigma}^{2}}{3\\pi^{2}}.\\]
If the condition
\\[\\frac{\\rho_{\\Sigma}}{\\rho}\\geq y_{s},\\]
is satisfied, the appearence of the strangeness fraction \\(y_{s}\\) in the EoS is allowed. For \\(y_{s}>0\\), we define
\\[\\rho=\\rho_{n}+\\rho_{p}+\\rho_{\\Sigma}\\]
with
\\[\\rho_{p}=y_{p}(1-y_{s})\\rho,\\qquad\\rho_{n}=(1-y_{p})(1-y_{s})\\rho,\\qquad\\rho_{ \\Sigma}=y_{s}\\rho.\\]
For charge neutrality,
\\[\\rho_{e}=\\rho_{p}-\\rho_{\\Sigma^{-}} \\tag{30}\\]
is required.
In order to fix the meson-hyperon coupling constants we have used the prescription given in [8; 9], where the hyperon coupling constants are constrained by the binding of the \\(\\Lambda\\) hyperon in nuclear matter, hypernuclear levels and neutron star masses (\\(x_{s\\Sigma}=0.7\\) and \\(x_{v\\Sigma}=x_{\\rho\\Sigma}=0.783\\)) and assumed that the couplings to the \\(\\Sigma\\) are equal to those of the \\(\\Lambda\\) hyperon.
## IV Results and Future Prospects
A comprehensive test of the thermodynamic accuracy and consistency of our EoS, as described in [10], mainly when strangeness is introduced, should be performed.
A non-homogeneous phase known as pasta phase should be considered at low densities. This non-homogeneous configuration made out of spheres, rods, bubbles or other more exotic structures, have been extensively used recently [12; 13; 14]. These strucutres may change the neutrino opacity in supernova matter and influence neutron star quakes and pulsar glitches. We can obtain the pasta phase by building the binodal section, and therefore obtaining the chemical potentials and densities of the gas and liquid phase in equilibrium. A very crude approximation would be to forget Coulomb interaction and take zero thickness nuclei. We can consider the matter made of liquid dropplets in a gas introducing two parameters: the radius of the Wigner-Seitz cell and the radius of the nucleus (equal for protons and neutrons). One of the parameters would be fixed imposing a given particle density and the other by minimizing the free energy. These results can be improved by including the Coulomb contribution and the surface energy by hand. A Thomas-Fermi calculation can then be used to obtain the pasta phase with all fields introduced in a consistent way and the surface energy calculated from the derivatives of the fields.
\\(\\alpha\\)-particles can be easily incorporated in the EoS as proposed in [11]. Once the pasta phase and the \\(\\alpha\\)-particles are included, the EoS should then be compared with reference [15]. We are already aware of some important differences. The parametrization used in [15], known as TM1 [16] reproduces ground state properties of stable and unstable nuclei. Nevertheless, this parametrization has proven not to be adequate in the description of neutron star matter because it breaks down, giving rise to negative baryon effective masses at densities exisiting inside a neutron star (approximately 6 times the nuclear saturation density) when hyperons are incorporated into the EoS [17]. For this reason, we usually choose one of the parametrizations introduced by Glendenning and collaborators [9], which give a higher nucleon effective mass at the nuclear matter saturation density and, for this reason, avoids the problem of the baryon negative masses. Moreover, according to [15], the EoS with inhomogeneities has a critical temperature \\(T\\simeq 15\\) MeV above which matter is uniform. This number certainly depends on the choice of the parameters. Based on our recent works, we would expect a smaller value for the critical temperature since for nuclear matter with no electrons (no Coulomb interaction and surface tension) the critical temperature occurs for symmetric matter just above 15 MeV. The high value obtained in [15] may be due to the way the density distributions are parametrized which give rise to very stiff surfaces for the droplets. In [13] a critical temperature of \\(\\sim 5\\) MeV was obtained for \\(y_{p}=0.3\\) matter and \\(\\sim 6\\) MeV was obtained for \\(y_{p}=0.5\\). One of our recent studies on the dynamical instabilities of npe matter also predicts lower critical temperatures, more according to the results of [13]. We do not know if the differences on the EoS due to the use of different parametrizations is more important or of the order of the magnitude of the changes included due to the explicit inclusion of a non-homogeneous phase. This should be studied.
In Fig. 1 we compare our results for the free energy obtained at three different temperatures and three different proton fractions with the results of [15]. One can see that results deviate sligtly at higher densities. In Fig. 2 we plot the pressure for the same temperatures and proton fractions as in Fig 1. Again the results are very similar. In Fig. 3 we plot, once more, for the same temperatures and proton fractions, the entropy. The differences are more pronounced. While at higher temperatures (50 MeV) the curves are very similar for all proton fraction, for lower temperatures the curves are identical only for neutron matter (very low proton fraction). One should notice, however, that the trends of the curves are the same.
Finally, we comment on the definition of the internal energy: it is equal to the nucleon mass for zero density at T=0 MeV. For finite temperature this is no longer true because of the presence of nucleons and antinucleons. We have defined the internal energy as the energy density per number density in erg/g and in [15] the internal energy is given by the energy density per number density minus the atomic mass unit. As one can see in Fig 4, both results are in accordance once the same definition is used. A more clear comparison is done in Fig. 5 where the internal energy for homogeneous matter within TM1 is also shown, and compared with the internal energy obtained with the GM3 parametrization and the EoS of [15], also with TM1 but with the non-homogeneous phase included.
Finally, as a second step in a more refined EoS with strangeness the \\(\\Lambda\\) hyperons and later the whole octect and muons should be included.
###### Acknowledgements.
This work was partially supported by CNPq(Brazil), CAPES(Brazil)/GRICES (Portugal) under project 100/03 and FEDER/FCT (Portugal) under the projects POCTI/FP/63419/2005 and POCTI/FP/63918/2005.
## References
* (1) J. Cooperstein, H.A. Bethe and G.E. Brown, Nucl. Phys. **A 429**, 527 (1984).
* (2) B.D. Serot and J.D. Walecka, Adv. Nucl. Phys. **16**, 1 (1986).
* (3) J. Boguta and A. R. Bodmer, Nucl. Phys. **A292**, 413 (1977).
* (4) D.P. Menezes and C. Providencia, Phys. Rev. **C 68**, 035804 (2003).
* (5) S.S. Avancini and D.P. Menezes, Phys. Rev. C 74, 015201 (2006).
* (6) N. Ashcroft, N.D. Mermim, Solid State Physics, Saunders College Publishing, Orlando, 1976
* (7) L. D. Landau and E. M. Lifshitz, Statistical Physics, Pergamon Press, 1959.
* (8) N. K. Glendenning and S. Moszkowski, Phys. Rev. Lett. **67**, 2414 (1991).
* (9) N. K. Glendenning, Compact Stars, Springer-Verlag, New-York, 2000.
* (10) F.X. Timmes and D. Arnett, Astrophys. J. Suppl. **125**, 277 (1999).
* (11) J.M. Lattimer and F.D. Swesty, Nucl. Phys. **535**, 331 (1991).
* (12) D. G. Ravenhall, C. J. Pethick, and J. R. Wilson, Phys. Rev. Lett. **50**, 2066 (1983); M. Hashimoto, H. Seki, and M. Yamada, Prog. Theor. Phys.**71**, 320 (1984).
* (13) G. Watanabe, K. Sato, K. Yasuoka, and T. Ebisuzaki, Phys. Rev. C 69, 055805 (2004); G. Watanabe, T. Maruyama, K. Sato, K. Yasuoka, and T. Ebisuzaki, Phys. Rev. Lett. 94, 031101 (2005).
* (14) T. Maruyama, T. Tatsumi, D. N. Voskresensky, T. Tanigawa, and S. Chiba, Phys. Rev. C 72, 015802 (2005).
* (15) H. Shen, H. Toki, K. Oyamatsu, K. Sumiyoshi, Nucl. Phys. **A 637**, 435 (1998); H. Shen, H. Toki, K. Oyamatsu, K. Sumiyoshi, _User Notes for Relativistic EoS Tables_.
* (16) K. Sumiyoshi, H. Kuwabara, H. Toki, Nucl. Phys. **A 581**, 725 (1995).
* (17) D.P. Menezes and C. Providencia, Phys. Rev. **C 68**, 035804 (2003); A.M.S. Santos and D.P. Menezes, Phys. Rev. **C 69**, 045803 (2004).
Figure 1: Free energy (MeV/fm\\({}^{3}\\)) as function of the energy density (g/cm\\({}^{3}\\)) for different temperatures (MeV).
Figure 2: Pressure (MeV/fm\\({}^{3}\\)) as function of the energy density (g/cm\\({}^{3}\\)) for different temperatures (MeV).
Figure 3: Entropy per baryon as function of the energy density (g/cm\\({}^{3}\\)) for different temperatures (MeV).
Figure 4: Internal energy of symmetric matter for different temperatures.
Figure 5: Internal energy of symmetric matter for different temperatures for homogeneous matter within GM3 and TM1 parametrizations, and the EoS of [15]. | In this work we present a detailed explanation of the construction of an appropriate equation of state (EoS) for nuclear astrophysics. We use a relativistic model in order to obtain an EoS for neutrally charged matter that extends from very low to high densities, from zero temperature to 100 MeV with proton fractions ranging from 0 (no protons) to 0.6 (asymmetric matter with proton excess). For the achievement of complete convergence, the Sommerfeld approximation is used at low temperatures and the Boltzman distribution for relativistic particles is used in the calculation of the electron properties at very low densities. Photons are also incorporated as blackbody radiation. An extension of this EoS is also presented with the inclusion of strangeness by taking into account the \\(\\Sigma^{-}\\) hyperon only. Strangeness fractions range from 0.02 to 0.3. | Give a concise overview of the text below. |
A. Gozar
[email protected]
G. Logvenov
V. Y. Butko
I. Bozovic
Brookhaven National Laboratory, Upton, New York 11973-5000, USA
pacs: 68.03.Hj, 68.47.Gh, 68.49.Sf, 81.15.Hi

_Introduction_ - The plethora of remarkable electrical and magnetic properties of transition metal oxides has made them both a focus of basic research and very appealing candidates for integration in electronic devices. For the latter, it is important to reproducibly synthesize and characterize atomically perfect surfaces and interfaces. However, this goal is difficult to attain in complex oxides because of the complicated phase diagrams and high sensitivity to growth conditions.[1] This explains the large disparity between what is known about the structure of the very top atomic layers in these materials compared to widely used semiconductor or metal surfaces.
In this work we provide a new route to circumvent these problems. This is done by combining the power of atomic layer-by-layer molecular beam epitaxy (ALL-MBE)[2], enabling production of films with perfect surfaces, with the extreme surface sensitivity of the low energy Time-of-Flight Scattering and Recoil Spectroscopy (TOF-SARS)[3; 4] and the Mass Spectroscopy of Recoiled Ions (MSRI)[5] techniques. Angle-Resolved (AR) TOF-SARS can, in principle, determine inter-atomic spacings with a resolution approaching 0.01 A, comparable to lateral values obtained in surface X-ray crystallography and even better in the direction perpendicular to the surface.[4] This comparison should also be judged from the perspective that the former requires only a table-top experimental setup, as opposed to very intense synchrotron light. However, in order to reach such accuracy one needs samples with very smooth surfaces. For this reason, TOF-SARS and MSRI data in oxides have been primarily used for monitoring surface composition rather than the structure.[6] The exceptions are a very few materials, like Al\({}_{2}\)O\({}_{3}\) or SrTiO\({}_{3}\), which are commercially available and commonly used as single crystal substrates for film growth.[7; 8]
Ba\\({}_{1-x}\\)K\\({}_{x}\\)BiO\\({}_{3}\\) is a family of superconductors with the maximum T\\({}_{c}\\simeq\\) 32 K (at x = 0.4) being the highest in an oxide material without copper.[9; 10] BaBiO\\({}_{3}\\), the parent compound, is insulating and non-magnetic. These interesting properties are believed to arise due to a charge-density-wave instability[11] which leads to the lowering of the crystal symmetry from the simple cubic perovskite structure, see Fig. 1. The driving force of this transition and the persistence of the charge order in superconducting Ba\\({}_{1-x}\\)K\\({}_{x}\\)BiO\\({}_{3}\\) are still a matter of debate.[11; 12; 13] This is largely due to the difficulty in obtaining high quality single crystals from these materials. A recent study brought to the forefront the important problem of dimensionality in Ba\\({}_{1-x}\\)K\\({}_{x}\\)BiO\\({}_{3}\\) suggesting that this compound has in fact a layered structure, in analogy to the high T\\({}_{c}\\) superconducting cuprates.[14] In the context of superconducting electronics, the interest in undoped and K doped BaBiO\\({}_{3}\\) stems from succesful fabrication of superconductor-insulator-superconductor tunnel junctions, a task that has bee quite ellusive with either cuprates or MgB\\({}_{2}\\) superconductors. For these reasons it is critically important to understand and control the surface properties in this family of compounds.
Real space crystallography using TOF-SARS is based on the concepts of shadowing and blocking cones.[3; 15] TOF-SARS is sensitive to both ions and neutrals, so it does not depend on charge exchange processes at the surface. The drawback is that these spectra display broad 'tails' at high times (see Fig. 2) associated with multiple scattering events that are difficult to analyze quantitatively. This drawback is eliminated by MSRI, which achieves 'time focussing' according to \(t=t_{0}+k(M/e)^{1/2}\), i.e. the flight time of the ions is only a function of their mass to charge ratio. The broad continua seen in TOF-SARS are turned into very sharp features, allowing isotopic mass resolution[5], see Fig. 3. MSRI is thus easier to interpret and one is tempted to use AR MSRI for surface structure analysis. However, because MSRI detects only ions, one should worry about possible anisotropic neutralization effects, which makes a _quantitative_ interpretation problematic.[16] In fact, this aspect is a long-standing problem in ion-based mass spectrometry, affecting both compositional and structural studies.
Here we report on the successful and reproducible synthesis of large-area single crystal thin films of BaBiO\({}_{3}\) using ALL-MBE. This opens the path to improved basic experiments, including high resolution surface crystallography in oxides based on ion scattering. Next we present results of the surface analysis of a BaBiO\({}_{3}\) film using AR TOF-SARS and MSRI. We show that atomically smooth surfaces lead to a high sensitivity of both types of spectra with respect to as small as a one-degree variation in the azimuth angle. Comparison between the AR TOF-SARS and MSRI data shows that the latter can be a powerful tool for quantitative surface structure analysis. To the best of our knowledge this result has not been reported before. The angular dependence of the spectra, along with numerical simulations, allows us to unambiguously determine that the BaBiO\({}_{3}\) film terminates in a BiO\({}_{2}\) layer.
The ion scattering data were taken using a recently built Ionwerks TOF system, based on principles described in Refs.[5; 17] We used a K\\({}^{+}\\) ion source tuned to provide a monochromatic beam of 10 keV. Microchannel plate detectors were mounted at \\(\\theta=27^{0}\\) and \\(37^{0}\\) total scattering angles. The incident angle was \\(\\alpha=15^{0}\\). Time resolved spectra were achieved by pulsing the source beam at a 20 kHz over a 0.5\\(\\times\\)2 mm aperture. The typical pulse width was 12 ns. This, together with the \\(I\\approx 0.1\\mu\\)A value of the ion current on the aperture, allows us to estimate an ion dose of about 3\\(\\times 10^{11}\\) ions/cm\\({}^{2}\\) per spectrum. The surface density is about \\(10^{15}\\) atoms/cm\\({}^{2}\\) indicating that the technique was not invasive for the duration of the experiment. More important, measurements from pristine regions of the 1 cm\\({}^{2}\\) sample during and after the experiment ensured that the surface was neither damaged nor charged. The TOF-SARS and MSRI spectra were collected at T \\(\\approx 600^{0}\\) C in ozone atmosphere at a pressure \\(p\\simeq\\) 5\\(\\times 10^{-6}\\) Torr. In the following \\(K_{S}(X)\\) denotes K\\({}^{+}\\) ions undergoing single scattering events from the element \\(X\\) on the surface while the notation \\(X_{R}\\) stands for particles (\\(X=Ba,Bi\\) or \\(O\\)) recoiled from the BaBiO\\({}_{3}\\) surface. The numerical calculations were performed using the Scattering and Recoil Imaging Code (SARIC) which is a classical trajectory simulation program based on the binary collision approximation.[18]
_Surface structure analysis of BaBiO\\({}_{3}\\)_ - Fig. 2a illustrates the dependence of the TOF-SARS spectra on the azimuth angle \\(\\Phi\\), defined as the angle between the scattering plane and the [100] direction of the cubic structure. One minute long scans taken with \\(\\Phi\\) varied in \\(1^{0}\\) increments between 3.5\\({}^{0}\\) and 10.5\\({}^{0}\\) are shown. The continuum above \\(t\\geq 9.9\\)\\(\\mu\\)s is due to K multiple scattering. Based on the predictions of elastic binary collision the two sharp features in Fig. 2a can be immediately assigned to single scattering events of K ions. The peak at \\(t=9.75\\) ns corresponds to \\(K_{S}(Bi)\\) and the one at \\(t\\simeq 9.8\\) ns to \\(K_{S}(Ba)\\). Dramatic changes are seen in the behavior of the higher time \\(K_{S}(Ba)\\) peak as \\(\\Phi\\) is varied.
Surface roughness typically smears out the structure in AR scattering or recoil features. The strong sensitivity of the spectra in Fig. 1 to small variations of the azimuthal angle is a consequence (and a direct proof) of high surface quality. The angle independent intensity of the \\(K_{S}(Bi)\\) peak along with the strong variation in the \\(K_{S}(Ba)\\) feature suggest that the film terminates with BiO\\({}_{2}\\) planes which shadow the subsurface BaO layers. We show below that this is fully supported by more detailed analysis including numerical simulations.
Information about the surface arrangement and dynamics can be obtained by studying the details of the spectral shapes. For example the inset of Fig. 2a, which shows a zoomed in region around the \\(K_{S}(Bi)\\) peak at \\(\\Phi=45^{0}\\), reveals a shoulder on the low time side. Indeed,
Figure 1: (a) The cubic perovskite structure of undistorted BaBiO\\({}_{3}\\). (1) and (2) correspond to K trajectories as described in the text. (b) A 50 \\(\\times\\) 50 \\(\\mu\\)m AFM image of the BaBiO\\({}_{3}\\) film. (c) \\(\\omega-2\\theta\\) scan of the BaBiO\\({}_{3}\\) film; STO labels Bragg peaks from the SrTiO\\({}_{3}\\) substrate. The inset shows X-ray reflectance intensity oscillations at grazing incidence.
the data can be well fitted by two Gaussian peaks which correspond to trajectories involving single and double scattering events, denoted by (1) and (2) in Fig. 1a. This assignment to events involving only K and Bi atoms along the [110] direction is based on the elastic binary collision model, which predicts a difference of 33 ns between these two trajectories, in good agreement with the experimental value \(\delta t=30\) ns. We do not observe the low-time feature for \(\Phi=0^{0}\); this is understood as the effect of the intervening O atom along the [100] direction. Since the intensity of peak (2) depends strongly on the atomic arrangement at the surface, angular dependencies of the relative intensity of these two peaks could be used to get information about the symmetry and vibration amplitudes of the top-layer atoms. Shown in Fig. 2b is the \(Bi_{R}\) peak, which is found around \(t\simeq 18\)\(\mu\)s.
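The \(\simeq 33\) ns splitting can also be estimated from the same kinematics. The sketch below leaves the length of the outgoing flight leg (the only one affected by the energy difference) as a free parameter, since it is not quoted in the text:

```python
import math

# Single (one 27 deg) versus double (two 13.5 deg) scattering off Bi:
# the double event costs less energy, so that K arrives earlier.
amu, e = 1.6605e-27, 1.602e-19
m_K, E0 = 39.0 * amu, 10e3 * e
A = 209.0 / 39.0                         # Bi-to-K mass ratio

def kin(theta_deg):
    th = math.radians(theta_deg)
    return ((math.cos(th) + math.sqrt(A**2 - math.sin(th)**2))
            / (1.0 + A))**2

v = lambda E: math.sqrt(2.0 * E / m_K)
dt = 1.0 / v(kin(27.0) * E0) - 1.0 / v(kin(13.5)**2 * E0)
print(f"splitting ~ {dt * 1e9:.0f} ns per metre of outgoing path")
# -> ~46 ns/m; an outgoing leg of roughly 0.7 m gives the 33 ns above
```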
We now turn to the question of whether AR MSRI can be a quantitative probe for high resolution surface studies. It is natural to ask (a) if one can see structure in the AR MSRI data and (b) whether such dependence, if present, provides quantitative information about the surface. The latter problem is related to the fact that it is hard to quantify the ionic fraction yield, which, moreover, could itself be an anisotropic function of the orientation of the crystallographic axes.[16]
The answer to the first question is given in Fig. 3. The main panel shows an MSRI spectrum taken at \(\Phi=0^{0}\) and the inset shows the Ba isotopes region for three azimuths. Clearly, the intensities of the corresponding peaks vary substantially when this angle is changed. Note also the sharpness of the peaks, which allows for easy separation of atomic isotopes. The mass resolution, \(m/\Delta m\approx 380\), is about one order of magnitude better than that obtained in typical TOF-SARS spectra.
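The time-focussing relation quoted earlier, \(t=t_{0}+k(M/e)^{1/2}\), makes this isotope separation transparent; in the illustration below the instrument constants \(t_{0}\) and \(k\) are arbitrary values assumed purely for demonstration:

```python
import math

# MSRI time focussing: ion flight time depends only on M/q, so every
# isotope collapses onto one sharp peak. t0 and k are assumed values.
t0, k = 1.0e-6, 1.0e-6                   # s and s/sqrt(amu), illustrative

def t_msri(M, q=1.0):
    return t0 + k * math.sqrt(M / q)

for M in (134, 135, 136, 137, 138):      # major stable Ba isotopes
    print(f"Ba-{M}: t = {t_msri(M) * 1e6:.4f} us")
# Since t - t0 scales as sqrt(M), neighbouring masses are split by the
# fraction dM/(2M); resolving 137Ba from 138Ba thus needs m/dm of only
# ~138, comfortably within the m/dm ~ 380 achieved here.
```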
The second question is addressed in the top panel of Fig. 4, where two data sets are compared. One data set corresponds to the \(\Phi\) dependence of the intensities of the MSRI \(Ba_{R}\) peaks derived from the spectra shown in the inset of Fig. 3. The second data set refers to the \(Ba_{R}\) peak from the TOF-SARS spectra taken with the detector at the same total scattering angle \(\theta=37^{0}\). The latter data set was taken with the time focussing analyser not biased, making it sensitive to both ions and neutral particles. The two dependencies are essentially identical. Equally good agreement is also observed if the \(Bi_{R}\) feature is considered instead of \(Ba_{R}\). We conclude that anisotropic neutralization effects are not important, which proves that AR MSRI can be used as a quantitative probe for surface analysis. The insulating nature of the film and the use of an alkali ion source beam could be responsible for this effect.[16] Note also that the steep decrease in the \(Ba_{R}\) signal between \(\Phi=5^{0}\) and \(10^{0}\) seen in Fig. 4 is consistent with the disappearance of the \(K_{S}(Ba)\) peak at \(t\simeq 9.8\mu\)s in the TOF-SARS data at \(\theta=27^{0}\) from Fig. 2a.
We now address the problem of the film surface termination. The spectra shown in the bottom panel of Fig. 4 provide the answer to this question. The experimental points correspond to the \(\Phi\) dependence of the integrated intensity of the \(Bi_{R}\) peak shown in Fig. 2b. For every data point we subtracted the background due to the high-time tail associated with K multiple scattering (as seen in Fig. 2a). The results of two SARIC simulations are also shown in Fig. 4. The solid and dashed lines correspond to assumed BiO\({}_{2}\) and BaO terminations, respectively. The simulation based on BiO\({}_{2}\) termination reproduces well the main features of the experiment, i.e. the two peaks around \(20^{0}\) and \(33^{0}\). In contrast, the lower angle feature is absent if BaO is assumed to be the topmost layer. These numerical simulations clearly show that the film terminates in a BiO\({}_{2}\) surface.
One advantage of real space structure analysis is the intuitive picture it immediately provides: the dips in the azimuth scans generally correspond to low-index crystallographic directions in the material. In Fig. 4b the minima in the experimental scan found at \(\Phi=0^{0},27^{0}\) and \(45^{0}\) are associated with the [100], [210] and [110] directions of the cubic structure, respectively; see Fig. 1.
Figure 2: (Color online) Time-of-Flight K scattering data from BaBiO\({}_{3}\) recorded at \(\theta=27^{0}\) total scattering angle. (a) The spectra are taken for azimuthal angles \(\Phi\) from \(3.5^{0}\) to \(10.5^{0}\). The arrow indicates the direction of increasing angle. The peaks at \(t=9.74\)\(\mu\)s and \(t\approx 9.8\)\(\mu\)s correspond to \(K_{S}(Bi)\) and \(K_{S}(Ba)\) respectively. Inset: zoomed-in area around the K\({}_{S}(Bi)\) peak for \(\Phi=45^{0}\). The line through the data points is a fit with two Gaussian peaks. These two peaks, denoted by (1) and (2), correspond to the K trajectories shown in Fig. 1a. (b) The \(Bi_{R}\) peak at higher times for \(\Phi=0^{0}\), \(15^{0}\) and \(21^{0}\).
Figure 3: (Color online) The main panel displays an MSRI spectrum taken at \(\Phi=0^{0}\) azimuth. The inset shows a zoomed-in area of the Ba isotopes region from the main panel, for three values of \(\Phi\): \(0^{0},\;5^{0}\) and \(20^{0}\).
Because these effects depend strongly on the interatomic distances as well as on the type of atoms at the surface, the magnitude of these minima can also be explained qualitatively. The severe shadowing and blocking due to both Bi and O atoms lying along the [100] azimuth cause the absence of the \(Bi_{R}\) signal for \(\Phi=0^{0}\). Although oxygen atoms contribute to this effect, due to their lower mass they cannot completely shadow or block the incoming K ions or the recoiled Bi along the [210] direction. As a result only a shallow minimum is seen around \(\Phi=27^{0}\). This is not the case for Bi atoms, which are responsible for the more pronounced dip at \(\Phi=45^{0}\) in spite of the larger interatomic separation along the [110] direction. Similarly, the absence of a peak at \(\Phi\simeq 20^{0}\) in the simulated \(Bi_{R}\) intensity for the assumed BaO termination (dashed line in Fig. 4b) can be easily understood in terms of Ba shadowing effects on Bi atoms along the [141] direction and the value \(\alpha=15^{0}\) of the incidence angle. A detailed analysis of the lattice constants and of possible surface relaxation, based on numerical calculations of the shadowing and blocking cones, is the purpose of future work.
_Conclusions - Achieving atomically smooth surfaces proves to be essential for the use of AR TOF-SARS and MSRI in structure and interface analysis of complex oxides. This is a stepping stone for future characterization of artificially layered superconducting compounds like Ba\({}_{1-x}\)K\({}_{x}\)BiO\({}_{3}\) or the cuprates. The quantitative agreement between TOF-SARS and MSRI spectra of a BaBiO\({}_{3}\) film shows that AR MSRI can be a powerful tool for high resolution surface analysis. Data and simulations allowed us to clearly identify that the BaBiO\({}_{3}\) film terminates in a BiO\({}_{2}\) layer._
_Acknowledgements - This work was supported by DOE grant DE-AC-02-98CH1886. We thank W. J. Rabalais for providing us with the SARIC simulation code and J. A. Schultz for useful discussions._
## References
* (1) I. Bozovic, IEEE Trans. Appl. Supercond. **11**, 2686 (2001)
* (2) I. Bozovic, G. Logvenov, I. Belca, B. Narimbetov, and I. Sveklo, Phys. Rev. Lett. **89**, 107001 (2002).
* (3) J. W. Rabalais, _Principles and Applications of Ion Scattering Spectrometry_, Wiley-Interscience, 2003.
* (4) V. Bykov, L. Houssiau, and J. W. Rabalais, J. Phys. Chem. **104**, 6340 (2000).
* (5) V. S. Smentkowski, A. R. Krauss, D. M. Gruen, J. C. Holecek, and J. A. Schultz, J. Vac. Sci. Technol. A **17**, 2634 (1999).
* (6) O. Auciello, J. Appl. Phys. **100**, 051614 (2006) and references therein.
* (7) J. Ahn, and J. W. Rabalais, Surf. Sci. **388**, 121 (1997).
* (8) P. A. W. van der Heide, Q. D. Jiang, Y. S. Kim, and J. W. Rabalais, Surf. Sci. **473**, 59 (2001).
* (9) L. F. Mattheiss, E. M. Gyorgy, and D. W. Johnson, Jr., Phys. Rev. B **37**, R3745 (1988).
* (10) R. J. Cava, B. Batlogg, J. J. Krajewski, R. Farrow, L. W. Rupp, A. E. White, K. Short, W. F. Peck, and T. Kometani, Nature **332**, 814 (1988).
* (11) S. Pei, J. D. Jorgensen, B. Dabrovski, D. G. Hinks, D. R. Richards, A. W. Mitchell, J. M. Newsam, S. K. Sinha, D. Vaknin, and A. J. Jacobson, Phys. Rev. B **41**, 4126 (1990).
* (12) T. Thonhauser, and K. M. Rabe, Phys. Rev. B **73**, 212106 (2006).
* (13) A. V. Puchkov, T. Timusk, M. A. Karlow, S. L. Cooper, P. D. Han, and D. A. Payne, Phys. Rev. B **54**, 6686 (1996).
* (14) L. A. Klinkova, M. Uchida, Y. Matsui, V. I. Nikolaichik, and N. V. Barkovskii, Phys. Rev. B **67**, R140501 (2003).
* (15) M. Aono, Y. Hou, C. Oshima, and Y. Ishizawa, Phys. Rev. Lett. **49**, 567 (1982).
* (16) H. Niehus, W. Heiland, and E. Taglauer, Surf. Sci. Rep. **17**, 213 (1993).
* (17) A. R. Krauss, Y. Lin, O. Auciello, G. J. Lamich, D. M. Gruen, J. A. Schultz, and R. P. H. Chang, J. Vac. Sci. Technol. A **12**, 1943 (1994); J. A. Schultz, S. Ulrich, L. Woolverton, W. Burton, K. Waters, C. Kleina, and H. Wollnik, Nucl. Instrum. Methods Phys. Res. B **118**, 758 (1996).
* (18) V. Bykov, C. Kim, M. M. Sung, K. J. Boyd, S. S. Todorov, and J. W. Rabalais, Nucl. Instrum. Methods Phys. Res. B **114**, 371 (1996).
Figure 4: (Color online) Top: azimuth dependence of the integrated intensity of the Ba recoil peak from two sets of measurements taken at \(\theta=37^{0}\) scattering angle: MSRI (squares) and TOF-SARS spectra (circles). The curves are matched at \(\Phi=5^{0}\). Bottom: \(\Phi\) dependence of the experimental integrated intensity of the \(Bi_{R}\) peak from the TOF-SARS data of Fig. 2b. The solid and dashed lines correspond to SARIC simulations assuming BiO\({}_{2}\) and BaO film terminations, respectively. The data in this panel are matched at \(\Phi=45^{0}\).
arxiv-format/0705_0304v2.md | **Cybergeo : Revue europeenne de geographie, N'295, 06 decembre 2004**
**Prospective modelling of land cover: the case of a Mediterranean mountain area**
(Prospective modelling of georeferenced data by crossed GIS and statistical approaches applied to land cover in Mediterranean mountain areas)
Martin Paegelow1, Nathalie Villa2, Laurence Cornez3, Frederic Ferraty2, Louis Ferre2, Pascal Sarda2
Footnote 1: [email protected] - GEODE, UMR 5602 CNRS, 5 allees Antonio Machado, 31058 Toulouse cedex 9
Footnote 2: GRIMM, équipe d’accueil 3686, 5 allees Antonio Machado, 31058 Toulouse cedex 9
Footnote 3: Intern at GEODE, UMR 5602 CNRS
**Abstract:**
_The authors implement three prospective modelling methods applied to high resolution georeferenced land cover data in a Mediterranean mountain environment: a GIS approach, a generalized linear model and neural networks. The models are validated by predicting land cover at the latest known date. In the context of the spatio-temporal dynamics of open systems, the results obtained are encouraging and comparable. Correct prediction scores are around 73%. The analysis of the results focuses in particular on geographic location, the land cover categories concerned and the distance to reality of the residuals. Crossing the three models highlights their degree of convergence and a relative similarity between the results of the two statistical approaches compared with the supervised GIS model. Work in progress concerns applying the models to other sites and identifying their respective strengths in order to develop an integrated model._
**Key words:** _forecasting, GIS, modelling, neural network, generalized linear model_
## 1 Problem statement and objectives
The object of our research is the modelling of environmental dynamics in complex, open systems. In this setting, the variable under study, as well as the environmental variables likely to explain its evolution in space and time, carries a share of uncertainty or randomness, which de facto rules out a deterministic approach. We therefore use a stochastic (probabilistic) approach that takes dependence in time (memory effect) and in space into account: the probabilistic tools used are notably the multinomial distribution and Markov chain analysis. Our approach also calls on fuzzy logic and on a cellular automaton. All these methods are implemented in three different forecasting models in order to compare their respective performance. More precisely, the aim is to compare a combined geomatic model of prospective simulation, which can be implemented using the functions of commercially available GIS packages, with two statistical models whose implementation, which takes longer, is external to the GIS. In return, the interest of the two statistical approaches lies in their automatic character, whereas the GIS model requires expert thematic analysis.
Our choice of statistical modelling fell on two classical approaches, one based on maximum likelihood (generalized linear model) and the other using a neural network. These two methods are close from the modelling point of view and differ essentially in the algorithms used to implement them.
One of the current challenges of geomatics research is the prospective modelling of high resolution geographic data. Proven geostatistical methods for spatial interpolation and extrapolation have existed for several decades and are implemented in many commercial geomatics packages. By contrast, tools for temporal modelling and decision support were only recently implemented in GIS, and they should be regarded as interesting experimental algorithms rather than operational techniques.
Since the 1990s, the social demand for decision support and modelling tools capable of assisting various tasks of environmental management (in particular risk prevention) and regional planning has grown strongly.
This article presents the first results of a comparative study of three prospective modelling methods applied to land cover in mountain anthropo-systems of southern Europe. We regard land cover as a relevant indicator, available at high resolution, of a combination of human activities deployed in space (to which land cover reacts with a certain inertia) and of natural factors. Mediterranean mountains are undergoing a profound socio-economic restructuring which shows, among other things, in spectacular landscape changes. In the Garrotxes (Pyrénées Orientales, our study area) this reorganization began at the end of the first half of the 19th century with the decline of the traditional agro-pastoral system, triggering the rural exodus.
A georeferenced database embodies the knowledge of past and present land cover dynamics as well as of potentially explanatory environmental factors. It feeds three prospective modelling methods: one, supervised, uses algorithms implemented in GIS software; the two others (generalized linear model and neural network) can be described as unsupervised insofar as the rules of spatio-temporal behaviour are detected automatically by the tool. The main objective is the implementation and optimization of each of the three approaches on the same data set. A critical interpretation of the results, in particular of the prediction residuals, makes it possible to identify their respective advantages and drawbacks. On the basis of this comparative analysis, we can consider building an optimized model integrating the strengths of each method, as well as the conditions for transposing it and the limits to its generalization.
In order to validate and optimize the models, they are first applied to predicting land cover at the latest known date before prospective scenarios are proposed. This study is carried out within a cooperation between three research teams working on two sites1: the Garrotxes, whose first results are presented here, and the Alta Alpujarra Granadina (Andalusia, Spain; work in progress).
Footnote 1: GEODE UMR 5602 CNRS, Groupe SMASH - EA 3686 GRIMM, UTM and Instituto de Desarrollo Regional - Universidad de Granada
## 2 Study area and database
### The Garrotxes
The Garrotxes, located in the Pyrénées Orientales department at the north-western end of the Conflent, form a geographical unit made up of five municipalities and covering 8,570 ha. The right bank of this catchment is granitic, with fairly heavy geomorphological forms, where almost all the former cultivation terraces and the forests of mountain pine (Pinus uncinata) and Scots pine (Pinus sylvestris) are located; it is an area with very fast vegetation dynamics. The left bank of the Cabrils, the collecting stream flowing into the Têt at Olette, is a wide south-west facing slope on schist bedrock, with contact metamorphism in the lowest areas, occupied by mostly ligneous heathland (dominated by Genista purgans and, to a lesser extent, Calluna vulgaris) and heavily overgrown at the lowest altitudes by holm oaks (Quercus ilex). A peculiarity of the Garrotxes is their isolation: the catchment, away from the main roads, is bounded to the north by the Madres massif (2,469 m), to the west (Cami Ramader) and to the south (Puig de la Tossa, Serrat del Cortal) by ranges culminating between 1,600 and 2,000 m, and to the east by the crest (Lloumet) of the south-facing slope joining the Madres. The Cabrils valley shows a progressive degradation of the Mediterranean climate; the Mediterranean influence reaches into the heart of the Pyrénées Orientales along the Têt valley, thus tempering the harshness of the mountain climate.
Once a model of traditional agro-pastoral organization, the area has seen agriculture almost disappear, while pastoral activity, long in decline, is showing signs of revival following a profound reorganization initiated during the 1980s ([10], [11]). The demographic maximum at the beginning of the 19th century translated into the exploitation of all the mobilizable traditional mountain resources (agriculture, livestock, forestry). Thus in 1826 (Napoleonic cadastre) a quarter of the total area was cultivated. Agriculture relied almost exclusively on cultivation terraces (feixes). Demographic decline (from 1,832 inhabitants in 1830 to 90 in 1999) and the conversion of cultivation terraces into pastures, scrub and forest went hand in hand.
Among the external agents considered responsible for the decline of this local society, weakly integrated into the national economy, one can cite, besides the processes of industrialization and the agricultural development of the plains during the 19th century, an increased interannual variability of precipitation observed in the mid-19th century ([14]), which may have contributed to the collapse of a system pushed to its limits by anthropogenic pressure on the environment. Two specific events, the arrival of the railway at Olette (1911) and the First World War, accelerated the rural exodus. It is therefore likely that the near future will play out in terms of pastoral management, or the lack of it, materializing through various means of halting, or even reversing, scrub encroachment and the spontaneous reforestation of pastoral areas (shepherds guiding the herds, fencing, controlled burning). The creation of pastoral groups (GP) and of land and pastoral associations (AFP) from the 1980s onwards has indeed led to a revival of pastoral activity, with sheep partly replaced by cattle and horses. Signs of economic reconversion are recent (1990s) but of limited scope (opening of a hikers' lodge in Sansa, attempts at development through green tourism), despite the imminent creation of the Regional Natural Park of the Catalan Pyrenees.
Figure 1: Location of the Garrotxes within the Pyrénées Orientales department
### The database and the evolution of land cover
The georeferenced database consists of a series of land cover maps together with information layers representing environmental and social factors. Depending on the intended processing, the maps exist either in raster mode (pixel resolution of about 18 m) or in vector mode. The main processing steps for the modelling thus rely on raster logic (spatial analysis), while the vector mode offers more flexibility for attribute queries.
The land cover maps share the same spatial resolution but differ in origin and legend. The first map is based on the Napoleonic cadastre (1826), a source that distinguishes between forests, pastoral areas, meadows, cultivated fields and built-up areas (villages). The first available aerial photograph campaign (1942) makes it possible to document the scrub category (heavily overgrown heathland containing groups of trees or a large number of isolated trees), the missing link in 1826 between dense tree formations and shrub formations (ligneous heath). The 1962 map keeps the same legend, while the scale and quality of the more recent, also panchromatic, aerial campaigns (1980 and 1989) make it easier to distinguish coniferous forests from deciduous woodland, and likewise allow a better discrimination of the pastoral areas: ligneous heath (notably dominated by Genista purgans) and grass heath. The most recent land cover map (2000) is based on field observations and is therefore much more detailed (20 categories).
Owing to the nature of the sources, the land cover classification, organized as three nested nomenclatures, is mainly physiognomic.
Figure 2: Land cover changes between 1826 and 2000 (Garrotxes)
The evolution of land cover (cf. Figure 2) is classical. Abandoned cultivated land was first used for grazing before scrub encroachment, often leading to reconquest by the forest. The georeferenced database contains a number of information layers related to land cover:
* Layers derived from the DEM: elevation map, slope map, aspect map;
* Accessibility layers computed from the road network and the location of the villages (grouped settlement): an accessibility index (cost-distance analysis) for each date;
* Layers related to pastoral management: pastoral units (UP), land and pastoral associations (AFP), grazing pressure;
* Layers reflecting the special status of certain zones: state forests and the military zone;
* Administrative boundaries;
* Hydrographic network.
Purely attribute data (population census, general agricultural census, ...) are known, subject to their degree of confidentiality, but they contribute little knowledge given that their spatial unit of reference (the municipality) is incompatible with a high resolution analysis.
## 3 Methodology and implementation
The aim is to build three predictive models of land cover (a categorical variable) at high resolution and to calibrate them through a test on the latest known date, using the same database, in order to compare their respective performance. The first approach relies on techniques available in GIS and can be called supervised insofar as the geographer's analysis leads to the implementation of the rules needed to compute the probability maps. Although the two other, statistical, approaches are also supervised, the role of the mathematician, little familiar with the subject matter, consists in optimizing the algorithm rather than feeding into it the conclusions of his own analysis of the spatio-temporal behaviour of the environment under consideration.
### Supervised GIS approach
The model we present is meant to be simple in two respects: simple in the collection of input data (restricted to a few easily available data, cf. section 2.2) and simple in its software implementation (using algorithms implemented in a commercial GIS package). This combined geomatic model:
* Calls on fuzzy logic to adjust the environmental data in the multi-criteria evaluation.
* Uses Markov chains with memory (two initial dates).
* Remedies the limits of the Markov analysis by resorting to a multi-criteria evaluation that optimizes the spatial allocation of the Markov probabilities (taking the friction of space into account), through the construction of a knowledge base and of inference rules (environmental variables) relating to the learning (calibration) phase of the model.
* Uses a simple cellular automaton to favour the emergence of probability zones of realistic areal extent.
Implemented with the Idrisi 32 package, the modelling breaks down into three phases:
* The construction of the knowledge base of the spatio-temporal dynamics of land cover by multi-criteria evaluation (MCE) of the environmental variables. For each land cover type, the original environmental variables are transformed, by statistical processing and by fuzzy logic, into probability-of-occurrence layers for each land cover category. These resulting probability layers, based on the learning period (1980 to 1989), are used for the spatial allocation of the transition probabilities.
* The computation of the Markov transition probabilities (learning period up to the latest known date).
* The spatial allocation of the Markov transition probabilities: this final step uses the category-level results of the MCE. These are integrated, by multi-objective evaluation (MOE), into a single simulated land cover map, which is processed by a cellular automaton (CA) based on a spatial contiguity filter.
The model is calibrated by modelling the state of land cover at the latest known date (2000) on the basis of the information from a learning period comprising the two preceding dates (1980 and 1989). Although we have deeper historical knowledge, current socio-economic conditions obviously do not apply to the land cover states of the 19th century or of the 20th century up to the 1960s.
#### 3.1.1 Building a knowledge base of land cover dynamics
Knowledge of recent dynamics is essential for grasping future evolution and modelling it. By knowledge we mean statistically significant measurements of the spatial and temporal behaviour of land cover in relation to environmental criteria considered to explain part of its variability. In a multi-criteria evaluation (MCE), one distinguishes between binary criteria, the constraints, and criteria whose suitability varies in space, the factors. Boolean constraints mask certain zones (occurrence possible or not); they may apply to all land cover forms (exclusion of built-up areas) or be specific to certain formations (altitudinal limit of conifers). The factors express graded knowledge for the objective in question (one land cover form); they can be weighted and their degree of trade-off (compensation) can be set.
For each land cover category, the knowledge of its spatio-temporal behaviour comes from a diachronic analysis of the dynamics and of geographic friction, comparing the theoretical distribution (homogeneous space) with the actual one (probability levels 99% and 99.9%). Geographic friction is expressed by the mapped environmental factors (altitude, slope, aspect, accessibility, proximity to entities of the same kind, special management status of certain zones and probability of change), available at the same spatial resolution.
The factors are standardized by manual recoding or by the use of fuzzy functions, and they are weighted using Saaty's matrix ([10]), which returns the eigenvector of each factor. The MCE approach developed ([1]) includes ordered weights (ordered weighted averaging, OWA), allowing the level of risk and of trade-off between factors to be chosen. These ordered weights rank the suitabilities in increasing order and lead to a pixel-specific ranking. We opted for a conservative approach (low risk and limited trade-off), as expressed in Figure 3.
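A minimal sketch of these two ingredients, fuzzy standardization and Saaty eigenvector weighting, is given below. The smoothstep-style membership function and the three-factor comparison matrix are illustrative assumptions; Idrisi's actual FUZZY and WEIGHT modules are not reproduced here:

```python
import numpy as np

def fuzzy_membership(x, a, b):
    """Monotone fuzzy rescaling of a raw factor (slope, accessibility, ...)
    to a [0, 1] suitability score, rising from 0 at a to 1 at b."""
    z = np.clip((x - a) / (b - a), 0.0, 1.0)
    return z * z * (3.0 - 2.0 * z)          # smooth S-shaped curve

def saaty_weights(P):
    """Priority vector of a pairwise comparison matrix: the principal
    eigenvector, normalised to sum to 1 (Saaty's eigenvector method)."""
    vals, vecs = np.linalg.eig(P)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

# Hypothetical comparison of three factors (slope vs altitude vs accessibility).
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(saaty_weights(P))   # factor weights, e.g. roughly [0.65, 0.23, 0.12]
```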
#### 3.1.2 Computing the transition probabilities
The predictive computation of land cover is performed by Markov chain analysis, a discrete process with discrete time steps whose values at the predicted date depend on the values at earlier dates. The prediction is expressed as an estimate of the transition probabilities. The test consists of predicting land cover in 2000 (the latest known date) on the basis of 1980 and 1989. The result takes the form of a matrix encoding the probabilities of change of each land cover category as well as the number of pixels affected between the last learning date and the projected date. The function also computes a conditional probability map for each land cover category, giving the per-pixel Markov probability of that category at the projected date. A stochastic integration of the per-category maps into a single one is possible, but it yields a result fairly far from reality (a noisy image), because this purely statistical procedure takes into account neither the knowledge rules established by MCE nor spatial continuity.
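The core of this step, the estimation of the transition matrix from two co-registered rasters, can be sketched as follows; the handling of unequal time intervals (9-year learning period versus 11-year projection) performed internally by Idrisi is not reproduced here:

```python
import numpy as np

def transition_matrix(map_t0, map_t1, n_classes):
    """Markov transition probabilities between two co-registered land cover
    rasters with integer-coded classes 0..n_classes-1:
    P[i, j] = Prob(class j at t1 | class i at t0)."""
    counts = np.zeros((n_classes, n_classes))
    np.add.at(counts, (map_t0.ravel(), map_t1.ravel()), 1)
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Toy example: two 5x5 maps with 3 classes.
rng = np.random.default_rng(0)
m1980, m1989 = rng.integers(0, 3, (5, 5)), rng.integers(0, 3, (5, 5))
P = transition_matrix(m1980, m1989, 3)
```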
#### 3.1.3 Spatial allocation of the predicted probabilities
Figure 3: Decision space and the chosen MCE-OWA approach
The spatial allocation of the Markov probabilities integrates the knowledge of the probable distribution of land cover (MCE), a multi-objective evaluation (MOE) accounting for the competing objectives (each land cover category being one objective) and a cellular automaton based on a spatial contiguity filter. The function used in Idrisi is CA_Markov, whose algorithm is iterative in order to account for the time distances between the two learning dates and between the last learning date and the projection date. Its output is a prospective map of probable land cover. Figure 4 summarizes the main GIS modelling steps ([10], [12]).
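The allocation logic can be illustrated by the greedy sketch below: each category receives its Markov pixel budget, pixels being granted to the category for which their suitability, boosted by a simple 3x3 contiguity filter, is highest. This is an assumption-laden simplification of the CA_Markov procedure, not its actual algorithm:

```python
import numpy as np

def allocate(markov_areas, suitability):
    """markov_areas[c]: pixel budget of class c from the Markov analysis;
    suitability: (n_classes, h, w) MCE suitability maps. Returns a class map."""
    n_c, h, w = suitability.shape
    pad = np.pad(suitability, ((0, 0), (1, 1), (1, 1)), mode="edge")
    contig = sum(pad[:, i:i+h, j:j+w] for i in range(3) for j in range(3)) / 9.0
    score = suitability * contig            # contiguity-boosted suitability
    out = -np.ones((h, w), dtype=int)
    order = np.dstack(np.unravel_index(np.argsort(-score, axis=None),
                                       score.shape))[0]
    budget = np.asarray(markov_areas).copy()
    for c, y, x in order:                   # best-scoring claims are served first
        if out[y, x] < 0 and budget[c] > 0:
            out[y, x] = c
            budget[c] -= 1
    return out
```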
### Neural network approach
The idea of using neural networks was mainly motivated by their remarkable adaptability and flexibility across a very large number of problems, notably when these have nonlinear aspects or when the explanatory variables are strongly correlated; both aspects are present in the prediction task we address. Neural networks have recently become very popular and have competed very favourably with classical statistical methods. They are found in particular in time series prediction (cf. [2], [13] and [14]). The setting in which we work is even broader, since we deal here with a spatio-temporal process to which explanatory variables are added, as detailed below.
#### 3.2.1 Multilayer neural networks (perceptrons)
Figure 4: GIS modelling of land cover: overview of the main functions and their sequence
We worked with a particular class of neural networks, multilayer networks or perceptrons. These were the first to undergo a major development; their creation stems from the first attempts at modelling the basic principles governing the functioning of the brain, even though their field of application has since broadened considerably, notably to statistical data processing (for more details, see [13]). In a layered network, the number of layers is set by the user but must include at least an input layer and an output layer; the other layers, whose number varies, are called hidden layers. For their remarkable approximation properties, we chose to use a network with one hidden layer, whose general architecture is that of Figure 5.
Let us describe the functioning of this network in a little more detail. At the input, the values of the neurons are those of the explanatory variables of the model; each of these numerical values is multiplied by a number of weights and then summed and transformed by a link function at the neurons of the hidden layer. Finally, the numerical values of the hidden layer neurons are in turn multiplied by weights, and their sum gives the value of the output neurons, which model the response variable. The weights, usually denoted w, are chosen during a so-called learning phase on a training data set so as to minimize the quadratic error on that set. In short, one-hidden-layer neural networks are the functions of the form:
\\[\\Psi_{w}(x)=\\sum_{i=1}^{q}w_{i}^{(2)}g\\left(x^{T}w_{i}^{(1)}+w_{i,0}^{(1)}\\right)\\]
ou \\(x\\) est le vecteur des variables explicatives du modele, \\(q_{2}\\) le nombre de neurones sur la couche cachee, \\(g\\) la fonction de lien de la couche cachee (typiquement \\(g\\) est la sigmoide \\(g:x\\rightarrow\\frac{1}{1+e^{-x}}\\)), \\(w^{(1)}\\) sont les poids entre la couche d'entree et la couche cachee et \\(w^{(2)}\\) les poids entre la couche cachee et la couche de sortie. L'interet de ce type de reseau est explique par le resultat suivant ([Hor93]): les reseaux de neurones a une couche cachee permett d'approcher, avec la precision souhaitee, n'importe quelle fonction continue (ou d'autres fonctions qui ne sont pas necessairement continues): c'est ce que l'on appelle la capacite d'approximateur universel et c'est aussi ce qui leur permet de s'appliquer avec une grande efficacite a un grand nombre de modeles.
Figure 5: Architecture of a network with one hidden layer
#### 3.2.2 Model for the Garrotxes data
In the Garrotxes example, several factors were retained as potentially influencing the land cover of a given pixel at date \(t\):
* 1) que l'on exprime sous forme disjonctive (par exemple, si l'on dispose de 8 categories d'occupation du sol, la premiere sera codee sous la forme d'un vecteur a 8 coordonnees: (1 0 0 0 0 0 0 0), la seconde: (0 1 0 0 0 0 0 0), etc);
* _for the spatial aspect_: the frequency of each land cover type in the neighbourhood of the considered pixel at the previous date (\(t-1\)). This raises the question of the choice of neighbourhood (size and shape): for the shape, several options are available, from the simplest (square or star-shaped neighbourhoods) to more sophisticated ones (a neighbourhood following the slope, to better capture the morphological influence of the terrain). As for the size of the neighbourhood, the question is up to what distance a pixel is likely to influence the pixel considered. To respect the spatial structure of the map, the influence of a pixel is weighted by a decreasing function of its distance to the pixel considered (cf. Figure 6; see also the sketch after this list);
* environmental variables (slope, altitude, ...).
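The distance-weighted neighbourhood frequencies mentioned above can be computed as in the sketch below. The 1/d weighting and the reading of the size parameter as a radius are assumptions, the text being explicit on neither point:

```python
import numpy as np

def neighbour_frequencies(land, i, j, n_classes, r=3):
    """Frequency of each land cover class in the (2r+1)x(2r+1) window around
    pixel (i, j), each neighbour weighted by a decreasing function (here 1/d)
    of its distance to the central pixel."""
    h, w = land.shape
    freq = np.zeros(n_classes)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            if di == 0 and dj == 0:
                continue
            y, x = i + di, j + dj
            if 0 <= y < h and 0 <= x < w:
                freq[land[y, x]] += 1.0 / np.hypot(di, dj)
    s = freq.sum()
    return freq / s if s > 0 else freq
```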
#### 3.2.3 Implementation
After an exploratory phase, which allowed us to identify the relevant variables and various parameters of the model, such as the shape of the neighbourhood (which, for simplicity, we chose square), its size (finally set to 3; cf. Figure 6) and the optimal number of neurons in the hidden layer, the chosen architecture has 19 input neurons:
* _for the temporal aspect_: 7 neurons for the disjunctive coding of the pixel value at the previous date (\(t-1\)) (built-up areas, which are constant, were removed from the study);
* _for the spatial aspect_: 8 neurons for the frequency of the various land cover types in the neighbourhood (frequency weighted by a decreasing function of the distance);
* 4 neurons for the environmental variables slope, altitude, aspect and distance to infrastructure (previously centred and scaled).
Figure 6: An example of neighbourhood
The network also has 8 neurons in its hidden layer and 7 output neurons, each estimating the probability that the pixel belongs to one land cover type (excluding built-up areas).
For the learning phase, we used as training data the 1980 map with the 1989 map as the target, and the 1989 map with the 2000 map as the target. After identifying large zones in which land cover was stable, we kept only the pixels with at least one neighbour carrying a different land cover value (thus fully exploiting the spatial structure of the data): these pixels are referred to below as frontier pixels. The other pixels were considered stable over a 10-year interval.
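A sketch of this frontier pixel selection, assuming 8-neighbour connectivity (the text does not state which connectivity was used):

```python
import numpy as np

def frontier_mask(land):
    """Boolean mask of pixels having at least one of their 8 neighbours with a
    different land cover class; only these pixels feed the learning phase."""
    h, w = land.shape
    pad = np.pad(land, 1, mode="edge")
    diff = np.zeros((h, w), dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                diff |= pad[1 + di:1 + di + h, 1 + dj:1 + dj + w] != land
    return diff
```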
From the computational point of view, the programs were written with the Matlab package (cf. [1]) and are available on request.
### Generalized linear model approach
The _logistic regression model_ is a generalized linear (and thus parametric) model in which the response variable is categorical, and which yields a prediction of that response from a set of information carried by explanatory variables. When the response has more than two levels, one speaks of _multiple logistic regression_ or of _polychotomous regression_ ([11]). More recent developments of this model are due to [11]. This type of logistic model is particularly well suited to our problem, since the aim is to predict, for each pixel of the map, a land cover type (a response variable with 8 levels coded by an integer \(\nu\), \(\nu=1,2,\ldots,8\)). The specificity of our study comes from the fact that, besides the topographic nature of the pixel (slope, altitude, ...), the chosen model must account for a spatial effect (the state of the vegetation around the pixel) and a temporal effect (the evolution of the land cover type of the pixel and of its surroundings). In this sense the logistic regression model has to be adapted to our setting, one of the most important issues being the choice of the shape and size of the neighbourhood taken into account by the model.
In general terms, the logistic regression model expresses, as a function of a number of parameters, the probability that the land cover type of a pixel at time \(t\) (that is, the response variable) equals one of the 8 land cover categories. The unknown parameters of the model must therefore be estimated, and then the posterior probabilities of each land cover type given the values of the various explanatory variables. A _Bayesian_-type rule is then used, assigning to a given pixel at time \(t\) the land cover category with the highest _posterior_ probability.
#### 3.3.1 Spatio-temporal multiple logistic regression
Index by \(i=1,2,\ldots,N\) the pixels of the land cover map and let \(\mathcal{I}_{i}\) denote the set of information available about pixel \(i\). Formally, the multiple logistic regression model we adopt can be written in the following general form:
\[\log\left(\frac{Prob(\mathrm{pixel}_{i}=\nu|\mathcal{I}_{i})}{Prob(\mathrm{pixel}_{i}=8|\mathcal{I}_{i})}\right)=\alpha_{\nu}+\gamma_{\nu,\mathcal{I}_{i}},\ i=1,\ldots,N, \tag{1}\]
ou \\(\\alpha_{\
u}\\) est un parametre associe au type d'occupation du sol \\(\
u\\) que l'on souhaite predire pour le \\(i^{\\mbox{\\small{\\em eme}}}\\) pixel et \\(\\gamma_{\
u,\\mathcal{I}_{i}}\\) un ensemble de parametres lies a \\(\
u\\) ainsi qu'aux informations concernant toujours ce pixel numero \\(i\\). Ainsi, le nombre total de parametres mis en jeu dans ce modele depend uniquement du nombre de types d'occupation du sol et du nombre de variables explicatives. Dans l'expression (1), \\(Prob(\\mathrm{pixel}_{i}=\
u|\\mathcal{I}_{i})\\) represente la probabilite que l'occupation du sol du pixel \\(i\\) soit du type \\(\
u\\) lorsque les variables explicatives prennent les valeurs decrites par l'ensemble \\(\\mathcal{I}_{i}\\). Notons que l'expression (1) modelise le rapport (son logarithme) de la probabilite qu'un pixel prenne la modalite v sur la probabilite que ce pixel prenne la modalite coidee 8 ce qui permet d'integrer la contrainte que la somme des huit probabilites est egale a 1. Dans l'expression (1), nous devons integrer l'_effect temporel_ : celui-ci est pris en compte en faisant dependre le type d'occupation du sol du pixel \\(i\\) du temps \\(t\\) c'est-a-dire en posant pixel\\({}_{i}=\\mathrm{pixel}_{i}(t)\\). Par ailleurs l'information (ou plus exactement une partie de cette information) depend du temps \\(t-1:\\mathcal{I}_{i}=\\mathcal{I}_{i}(t-1)\\). L'idee consiste donc a calculer la probabilite qu'un pixel prenne un type d'occupation du sol \\(\
u\\) a l'instant \\(t\\) en fonction de l'information que l'on possede sur ce meme pixel a l'instant precedent \\(t-1\\) ; on repete cette procedure pour tous les pixels de la carte. Connaissant les cartes 1980 \\((t-1)\\) et 1989 \\((t)\\), on peut estimer l'ensemble des parametres de notre modele de sorte a ajuster au mieux la carte 1989. Il suffit alors d'incrementer le temps dans notre modele pour predire la carte a l'instant futur \\(t+1\\) (2000) a partir de la carte observee a l'instant \\(t\\) (1989). Quant a l'_effect spatial_, il est pris en compte de la meme facon que dans l'approche par reseau de neurones. Il est en effet naturel de penser que l'evolution de l'occupation du sol du pixel \\(i\\) depend de celle des pixels environnants. Pour cela on considere un voisinage carre \\(V_{i}\\) autour du pixel numero \\(i\\) que l'on souhaite predire et on extrait comme information du voisinage \\(V_{i}\\) le nombre de pixels prenant le type numero 1 d'occupation du sol, le type numero 2 d'occupation du sol, Cette facon de proceder revient a supposer une _invariance isotrope_, c'est-a-dire que le type d'occupation du sol autour du pixel \\(i\\) ne depend pas de la direction. Dans la mise en oeuvre de la methode, nous avons privivegie la simplicite de la forme du voisinage (carre). Lors de developpements ulterieurs, on pourrait envisager d'autres formes que le carre (etoile, rectangle, ) et varier s'il en resulte un gain ou pas en terme de prediction. On peut egalement envisager une modelisation privivegiant certaines directions c'est-a-dire rompant avec l'hypothese d'invariance isotrope. Notons cependant qu'il en resulterait un modele avec un plus grand nombre de parametres et que de ce point de vue on doit egalement composer avec la capacite a bien estimer un modele qui serait trop complexe.
Combining the temporal and the spatial effect, we are finally led to consider the following model
\[\log\left(\frac{Prob(\mathrm{pixel}_{i}(t)=\nu|\mathcal{I}_{i}(t-1))}{Prob(\mathrm{pixel}_{i}(t)=8|\mathcal{I}_{i}(t-1))}\right)=\alpha_{\nu}+\gamma_{\nu,\mathcal{I}_{i}(t-1)},\]
ou \\(\\mathcal{I}_{i}(t-1)\\) englobe l'information extraite du voisinage \\(V_{i}(t-1)\\), c'est-a-diretient compte de l'occupation du sol observee autour du pixel numero \\(i\\) a l'instant \\(t-1\\). Enfin, \\(\\mathcal{I}_{i}(t-1)\\) comprend egalement l'information issue des variables telles que la pente ou l'altitude decrites plus haut.
#### 3.3.2 Implementation
From a practical point of view, the implementation comprises two steps: an estimation step and a validation step.
_Estimation step_: The parameters of the model (\(\alpha_{\nu}\) and those contained in \(\gamma_{\nu,\mathcal{I}_{i}(t-1)}\)) are estimated. The estimation procedure is based on the maximization of the _penalized likelihood_, a criterion well known in statistics for the stability of the solutions it provides. The optimization algorithm used is of _Newton-Raphson_ type. Note that the penalization introduces a new parameter, called the penalty parameter and denoted \(\epsilon\), which has to be chosen. As stated above, the 1980 and 1989 maps are used to estimate the parameters, for various neighbourhood sizes and values of \(\epsilon\).
_Validation step_: The aim is to determine the optimal neighbourhood size and penalty parameter, in the sense that these choices yield a predicted 2000 map as close as possible to the observed one. More precisely, using the previous step, several predictions of the 2000 map can be built, each corresponding to different neighbourhood sizes and values of \(\epsilon\). Comparing the maps predicted for 2000 with the actual 2000 map, we identify the map with the smallest number of mispredicted pixels; the corresponding neighbourhood size and penalty parameter are considered optimal.
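Schematically, this validation is a grid search; `fit` and `predict` below are placeholders for the penalized-likelihood routines, which are not shown:

```python
import numpy as np

def select_model(fit, predict, map80, map89, map00, sizes, epsilons):
    """Return the (neighbourhood size, penalty) pair whose 1989 -> 2000
    prediction has the smallest share of mispredicted pixels."""
    best = (None, None, np.inf)
    for r in sizes:
        for eps in epsilons:
            model = fit(map80, map89, neighbourhood=r, penalty=eps)
            err = np.mean(predict(model, map89) != map00)
            if err < best[2]:
                best = (r, eps, err)
    return best
```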
## 4 Results and interpretation
The three approaches were tested by modelling land cover at the latest known date (2000).
The overall results (percentage of area) of this test are close to reality (cf. Figure 7 and Table 1).
However, the spatial distribution of these per-category area totals must be analysed at high resolution (pixel side about 18 m). Table 2 compares the residuals by category as a percentage of the actual area of each category, apart from crops, whose pixel count tends to zero. The first lesson of this table is the relative agreement of the results of the three approaches. One also notes that modelling categories covering a large area (coniferous forest, broom heath) is easier than modelling land cover categories of small extent. Thus, whatever the model, less than one pixel in two was correctly predicted for the scrub category. Among the categories of small area, there are nevertheless large differences according to their spatio-temporal stability: meadows being more stable than grass heath, their prediction rate is better.
The overall prediction rates of the three methods are very close: 72.8% (GIS), 74.3% (neural networks) and 72.8% (generalized linear model).
\begin{tabular}{|l|c|c|c|c|c|c|} \hline Land cover in 2000 & Coniferous forest & Deciduous forest & Scrub & Broom heath & Grass heath & Meadows \\ \hline Area (\%) & 40.9 & 11.7 & 15.1 & 21.6 & 5.7 & 4.8 \\ \hline Residuals (\%): GIS model (27.2\%) & 11.42 & 55.28 & 51.92 & 17.13 & 54.39 & 30.35 \\ \hline Neural networks (25.7\%) & 10.60 & 45.84 & 54.54 & 16.23 & 59.38 & 19.26 \\ \hline Generalized linear model (27.2\%) & 11.88 & 51.65 & 57.07 & 14.35 & 59.24 & 25.57 \\ \hline \end{tabular}
Tab. 2 - Percentage of residuals by land cover category and modelling approach
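The per-category residual rates reported in Table 2, as well as the overall rates quoted above, follow from a direct comparison of predicted and observed maps; a minimal sketch:

```python
import numpy as np

def residuals_by_class(pred, obs, n_classes):
    """Overall error rate and, for each observed class, the share of its
    area that is mispredicted (the quantities reported in Table 2)."""
    overall = np.mean(pred != obs)
    per_class = np.array([np.mean(pred[obs == c] != c) if np.any(obs == c)
                          else np.nan for c in range(n_classes)])
    return overall, per_class
```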
Models are not meant to predict reality, but they can help us better understand complex environmental and social spatio-temporal changes. In this sense, the interpretation of the model results must take the limits of the models into account. Modelling land cover means simulating what reality could be, a reasoned and quantifiable scenario in a decision support context.
Nevertheless, a careful interpretation of the results should allow us to improve the model and, consequently, the prediction rate. In this respect, the analysis focuses above all on the residuals.
The most widespread land cover category (coniferous forest) obtains a very good prediction score, whereas scrub, which is fairly present in the area, obtains a very poor one (more than half mispredicted). Several remarks help to explain these phenomena and suggest strategies for improving the prediction. First of all, scrub is naturally the most dynamic category in a territory characterized by a balance between forest and pastoral areas governed notably by pastoral management. Scrub is also the category most subject to random effects: a forest fire, a clear-cut or the abandonment of grazing can turn a parcel into scrub within 10 years; these are completely uncontrollable phenomena.
\begin{tabular}{|l|c|c|c|} \hline Prediction gap & GIS & Neural networks & Generalized linear model \\ \hline 1 category & 12.9 & 12.5 & 13.0 \\ \hline 2 categories & 9.1 & 8.5 & 9.2 \\ \hline 3 categories & 3.2 & 2.9 & 3.1 \\ \hline 4 or 5 categories & 1.9 & 1.8 & 1.9 \\ \hline Total residuals & 27.2 & 25.7 & 27.2 \\ \hline \end{tabular}
Tab. 3 - Analysis of the model residuals by the category gap between reality and the model (data as a percentage of total area)
Although land cover is described qualitatively, its categories range from closed formations (coniferous forest, deciduous forest) to open ones (crops). These "landscape" ranks make it possible to quantify the prediction error, expressed as a number of categories (cf. Table 3). Thus, whatever the modelling method, for about half of the mispredicted pixels the prediction error is only one category at a high spatial resolution. The number of residuals decreases strongly as the gap between reality and projection increases.
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline & 3 models & \multicolumn{3}{c|}{2 models} & \multicolumn{3}{c|}{1 model} & None \\ \hline Correctly predicted by: & & NN + GLM & GIS + GLM & GIS + NN & GIS & NN & GLM & \\ \hline Coniferous forest & 85.35 & 1.65 & 0.40 & 0.68 & 0.96 & 1.75 & 0.76 & 8.53 \\ \hline Deciduous forest & 46.26 & 0.90 & 0.57 & 3.11 & 4.98 & 3.93 & 0.50 & 39.75 \\ \hline Scrub & 32.38 & 5.92 & 2.64 & 2.50 & 7.75 & 3.89 & 1.03 & 43.89 \\ \hline Broom heath & 76.98 & 4.99 & 2.45 & 0.74 & 2.82 & 0.70 & 0.93 & 10.39 \\ \hline Grass heath & 26.30 & 7.63 & 3.27 & 2.24 & 7.71 & 2.91 & 2.32 & 47.62 \\ \hline Meadows & 59.14 & 11.64 & 1.66 & 4.70 & 1.06 & 5.26 & 1.99 & 14.55 \\ \hline \textbf{Total} & \textbf{66.49} & \textbf{3.75} & \textbf{1.42} & \textbf{1.54} & \textbf{3.23} & \textbf{2.33} & \textbf{0.92} & \textbf{20.32} \\ \hline \end{tabular}
Tab. 4 - Cross-tabulation of the correct prediction scores of the three models with land cover in 2000. Data in % of total area. NN = neural network model; GLM = generalized linear model; GIS = GIS model
Another interesting aspect is the strong agreement between the three models (cf. Figure 8 and Table 4). Thus 66.5% of the total area is correctly predicted by all three models, and 20.3% by none.
Another lesson from crossing the pixels correctly predicted by the different models is the similarity of the results of the two statistical models. Overall, the areas correctly predicted by two models out of three are most often predicted by the neural networks and the generalized linear model (3.75%). The combined prediction rate of the GIS method with either statistical model is markedly lower. The relative distinctiveness of the GIS model is corroborated by the areas correctly predicted by only one of the approaches (3.23%). A category-level analysis underlines this observation. Thus, for the areas correctly predicted by only two models, five land cover categories out of six are predicted by the statistical approaches. Conversely, the areas correctly predicted by a single model are most often predicted by the GIS model (four land cover categories out of six). Moreover, the GIS model stands out in that its prediction rate is better for zones affected by a change of state, either between the last date of the learning phase and the simulated date, or during the learning phase. The two statistical methods, on the contrary, give results slightly closer to observation in the stable sectors. This specific behaviour on the margins is explained by the different procedures for the spatial allocation of the temporal transitions, the GIS model relying on a geographical analysis of the friction of space by multi-criteria evaluation. These are leads to pursue with a view to integrating the three models into one.
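The cross-tabulation behind Table 4 can be obtained by counting, for each pixel, how many of the three models predict it correctly; a minimal sketch:

```python
import numpy as np

def agreement_breakdown(obs, pred_gis, pred_nn, pred_glm):
    """Share of the total area correctly predicted by 3, 2, 1 or 0 models."""
    hits = sum((p == obs).astype(int) for p in (pred_gis, pred_nn, pred_glm))
    return {k: float(np.mean(hits == k)) for k in (3, 2, 1, 0)}
```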
The results also bring out certain limits of the models:
* The modelled variable is described by a limited number of categories. This deliberate choice is motivated by the quality and origin of the source data, in order to minimize interpretation errors. The drawback is a certain variability within each category which the models do not capture. Thus the three model maps (Figure 8) show a large zone predicted as scrub in the south-eastern sector which is actually occupied by deciduous woodland. During the learning period this zone was photo-interpreted as scrub in 1980 and in 1989. However, the scrub then densified and its floristic composition changed in favour of Quercus ilex, which formed a dominant tree stratum by 2000, when the zone was classified as deciduous forest.
* The learning period is scant. The models rely on only two known dates. This raises problems regarding the data source to be used. Here we deal with high resolution data: aerial photograph campaigns so widely spaced in time that using older campaigns seems problematic to us, given how much the socio-economic context has evolved.
* The share of the spatio-temporal variability of land cover explained by the environmental variables is uneven across land cover categories.
* Finally, every model is affected by random noise. Random events such as forest fires, windthrow or reforestation programmes are hard to foresee.
To this are added limits specific to each model. For the GIS model, two limiting factors should be mentioned. Zones that are stable during the learning period are predicted stable by the Markov chain analysis. Moreover, the MCE and MOE procedures and the cellular automaton only manage the spatial distribution of the probability scores computed by the Markov chain analysis. This restriction also applies, in part, to the statistical approaches.
## 5 Perspectives
The results presented here are the first within a research project covering three study sites. The same models will be applied to the Montagne de Lure in Haute Provence and to the Alta Alpujarra Granadina, which forms the western part of the southern slope of the Sierra Nevada (Spain).
This cross-comparison of several approaches applied to several sites should, on the one hand, limit the singularity effect specific to each study area and, on the other hand, eventually give us indications for proposing an integrated model derived from the three methods currently implemented in parallel. At the same time, the models remain evolving, in the sense that the interpretation of the first results reorients the next steps. In this respect, it seems interesting to us to address the problem of the internal variability of each land cover category through a semi-quantitative approach: ordered data on the cover rate of the tree stratum are added to the qualitative categories described.
Finally, the approaches implemented without "specialist" (GIS-type) intervention give results as good as the "supervised" method. Combining purely mathematical methods with expert guidance could iron out the imperfections of the models. This avenue remains to be explored in order to progress towards an integrated model for the prospective simulation of land cover.
## References
* [BD98] M. Beale and H. Demuth. _Neural network toolbox user's guide_. The MathWorks Inc., version 3 edition, 1998.
* [Bis95] C.M. Bishop. _Neural Networks for Pattern Recognition_. Oxford University Press, New York, 1995.
* [DN69] E. Davalo and P. Naim. _Des reseaux de neurones_. Ed. Eyrolles, 1969.
* [EKTW93] J.R. Eastman, P.A.K. Kyem, J. Toledano, and W. Jin. _Explorations in Geographic Information Systems Technology, Volume 4: GIS and Decision Making_. UNITAR, Geneva, 1993.
* [Hor93] K. Hornik. Some new results on neural network approximation. _Neural Networks_, 6(8):1069-1072, 1993.
* [HS89] D. Hosmer and S. Lemeshow. _Applied Logistic Regression_. Wiley, New York, 1989.
* [KBS97] C. Kooperberg, S. Bose, and C.J. Stone. Polychotomous regression. _Journal of the American Statistical Association_, 92:117-127, 1997.
* [LW01] T.L. Lai and S. Wong. Stochastic neural networks with applications to nonlinear time series. _Journal of the American Statistical Association_, 96(455):968-981, 2001.
* [MP04] J.P. Metailie and M. Paegelow. _La dynamique du pin a crochet (Pinus uncinata Ram.) dans l'Est des Pyrenees Francaises : le retour de la foret en montagne pastorale et metallurgique_. Ed. Casa de Velazquez, 2004. Forthcoming.
* [Pae03] M. Paegelow. Prospective modelling with GIS of land cover in Mediterranean mountain regions. In _6th AGILE Conference on GIScience, 24-26 April 2003_, Lyon, 2003.
* [PCO03] M. Paegelow and M.T. Camacho Olmedo. _Le processus d'abandon des cultures et la dynamique de reconquete vegetale en milieu montagnard mediterraneen : l'exemple des Garrotxes (P.O., France) et de la Alta Alpujarra Granadina (Sierra Nevada, Espagne)_, volume 16. Sud Ouest Europeen, 2003.
* [PCOMT04] M. Paegelow, M.T. Camacho Olmedo, and J. Menor Toribio. Modelizacion prospectiva del paisaje mediante sistemas de informacion geografica. _GEOFOCUS_, 3:22-24, 2004.
* [PM00] U. Parlitz and C. Merkwirth. Nonlinear prediction of spatio-temporal time series. In _ESANN'2000 proceedings_, pages 317-322, Bruges, 2000.
* [Saa77] T.L. Saaty. A scaling method for priorities in hierarchical structures. _Journal of Mathematical Psychology_, 15:234-281, 1977.
* The Lure mountain (France). In _IAG Working Group on Geoarchaeology; colloque international "Dynamiques environnementales et histoire en domaines mediterraneens"_, pages 143-149, 2003.
Figure 7: Results of the land cover models for 2000 and the actual land cover
Figure 8: Correct predictions cross-compared between the 3 models
VARIOUS APPROACHES FOR PREDICTING LAND COVER IN MOUNTAIN AREAS
Nathalie Villa\\({}^{1,\\rm a}\\), Martin Paegelow\\({}^{2}\\), Maria T. Camacho Olmedo\\({}^{3}\\), Laurence Cornez\\({}^{4}\\), Frederic Ferraty\\({}^{1}\\), Louis Ferre\\({}^{1}\\) and Pascal Sarda\\({}^{1}\\).
\\({}^{1}\\) GRIMM, Equipe d'accueil 3686, Universite Toulouse Le Mirail, France
\\({}^{2}\\) GEODE UMR 5602 CNRS, Universite Toulouse Le Mirail, France
\\({}^{3}\\) Instituto de desarrollo regional, Universidad de Granada, Spain
\\({}^{4}\\) ONERA, Toulouse, France
\\({}^{\\rm a}\\) Corresponding author e-mail: [email protected]
Key Words: polychotomous regression modelling; multilayer perceptron; classification; prediction; comparison.
1. PREDICTING LAND COVER
From the sketch maps made by geographers or from the analysis of satellite images or aerial photographs, we can build land cover maps for a given country which can be rather precise: the studied area is then cut into several squared pixels whose sides are about 20 meters long and whose land cover is known on various dates. The type of land cover can be chosen from a pre-determined list: coniferous forests, deciduous forests, scrubs, etc.
Here, we are not interested in making such maps (for satellite data analysis, see (Cardot _et al._, 2003)). Our purpose is to construct a simulated land cover map at a given future date, by the use of land cover maps at older dates and of other environmental variables; from a geographical point of view, prospective simulations are of great interest to help the local administrations develop these mountain areas. The idea is then to compare different approaches in order to assess their ability to be generalized to various mountain areas.
For a given pixel, determined by its spatial coordinates, latitude (\\(i\\)) and longitude (\\(j\\)), the value of the land cover on date \\(t\\), \\(c_{i,j}(t)\\), is a categorical random variable depending on several variables:
* the land cover of this pixel on previous dates: \(c_{i,j}(t-1),\ldots,c_{i,j}(t-T)\) (_time series of length \(T\)_);
* the land covers of the neighbouring pixels on previous dates: \(V_{i,j}(t-1),\ldots,V_{i,j}(t-T)\), where \(V_{i,j}(t-\tau)\) is a set of values of land cover on date \(t-\tau\) for the pixels in a neighbourhood of the pixel \((i,j)\) (_vectorial time series_);
* some environmental variables: for example, the elevation, the aspect, the proximity of roads and villages, \\(\\ldots\\): \\(Y_{i,j}^{1},\\ldots,Y_{i,j}^{p}\\).
We face here a problem of classification in which the predictors are both qualitative and quantitative and are also highly dependent (spatial time process). To address this question, we propose to use and compare two well-known statistical approaches with the empirical geographic method (namely the GIS, Geographic Information System). The first of these methods is a generalized linear model in which we estimate the parameters of the model by maximizing a log-likelihood type criterion. The second one uses a supervised multilayer perceptron. By confronting these various approaches, we expect to provide ideas on how to improve the GIS approach.
A comparison of these two approaches was done on two small areas: the "Garrotxes" ("Pyrenees Orientales", south west of France) and the "Alta Alpujarra Granadina" (Sierra Nevada, Spain), where several surveys of the land cover were done at various dates. We confronted the various scenarios constructed with the real maps.
In the following, we describe the data more precisely (section 2) and present the two approaches (section 3). Then we present how we applied these methodologies on these data sets (section 4) and finally, we compare the results obtained by analyzing the advantages and the limits of the models (section 5).
2. DESCRIPTION OF THE DATA SETS
The areas under study stand in the "Pyrenees" mountains for the Garrotxes and in the Sierra Nevada for the "Alta Alpujarra". A big drift from the land has led to the desertion of the land under cultivation and the recovery of the fields by scrubs and forests. There is almost no human action on these areas. The aridity of the climate explains a much slower dynamic in the Spanish area than in the Garrotxes: we count 3 times fewer pixels changing in the Alta Alpujarra than in the Garrotxes. On the contrary, the French area is considered, at least from a geographical point of view, as a dynamic area, and it is then more difficult to predict its land cover.
We are given quantitative and qualitative information through maps divided into pixels: about 241 000 pixels for the French area and 560 000 for the Spanish one (which is much bigger). For each pixel, we know:
* a categorical variable which is the land cover at different dates: 3 dates (1980, 1990 and 2000) were available for the Garrotxes and 4 dates (1957, 1974, 1987 and 2001) for the Alta Alpujarra. As the land cover evolution is very slow in the Sierra Nevada (less than 25% of the pixels had changed their value between 1957 and 2001), these dates were considered as equidistant, according to the geographers' opinion. This categorical variable was taken from a list of several choices (8 for the Garrotxes and 9 for the Alta Alpujarra) which are of classical use in geography. These data were used to make maps of the studied area (see Figure 1);
* several environmental variables; some of them are of numeric type (the elevation, the slope, the aspect, the distance to roads and villages, etc.) and others are of categorical type (forest and pasture management: governmental or not; ground geological type, etc.). The environmental variables were not the same for the Garrotxes and the Alta Alpujarra (see Figure 2 for examples of environmental variables); all these environmental variables kept the same value at all dates.
3. PRESENTATION OF THE TWO APPROACHES
Geographers usually estimate the land cover evolution by an empirical method which allows them to introduce some expert knowledge. The so-called GIS (Geographic Information System) approach is time expensive and necessitates precise knowledge of the geographic constraints of the area under study. Roughly speaking, the method consists of two steps: first, one computes time transition probabilities for each land cover type whereas, in a second step, one uses spatial constraints (introduced by an expert) for "smoothing" the maps obtained at the first step (see (Paegelow _et al._, 2004) or (Paegelow and Camacho Olmedo, 2005) for further details on GIS for these data sets). In order to propose automatic alternatives to the GIS, which can take into account in the same model the spatio-temporal nature of the problem, two approaches have been adapted to estimate the evolution of the land cover: the first one, polychotomous regression modelling, is a generalized linear approach based on the maximum log-likelihood method. The second one, the multilayer perceptron, is a popular method which has recently proved its great efficiency in solving various types of problems.
The idea is to confront a parametric linear model with a nonparametric one to provide a collection of automatic statistical methods for geographers. They both have concurrent advantages that have to be taken into account when choosing one of them: the polychotomous regression modelling is faster to train than multilayer perceptrons, especially in high dimensional spaces, and does not suffer from the existence of local minima. On the contrary, multilayer perceptrons can provide nonlinear solutions and are then more flexible than the linear modelling; moreover, both methods are easy to implement, even for non-statisticians, through pre-made software (for example, the "Neural Network" Toolbox for neural networks with Matlab).
3.1. THE MODEL
Let us now describe the statistical setting more formally. We denote by \(X_{i,j}(t)\) the vector of variables that could explain the value of the land cover for a given pixel \((i,j)\) on date \(t\). We suppose that the time dependence is of order \(1\); then, \(X_{i,j}(t)\) contains:
* _for the time series:_ the value of the land cover for the pixel \\((i,j)\\) at the previous time \\(t-1\\);
* _for the spatial aspect:_ the frequency of each type of land cover in the neighbourhood of pixel \((i,j)\) on the previous date. Then, the shape and the size of the neighbourhood had to be chosen. For the shape, we had many choices: the simplest ones were a square neighbourhood or a star-shaped neighbourhood around the pixel \((i,j)\); the most sophisticated could use the slope to better take into account the morphological influences of the land. For the size of the neighbourhood, we had to find at which distance a pixel could influence the land use of pixel \((i,j)\). Moreover, for the multilayer perceptrons, in order to respect the spatial aspect of the problem, we weighted the influence of a pixel by a decreasing function of its distance to the pixel \((i,j)\) (see Figure 3 and the sketch after this list).
* _environmental variables_ (slope, elevation, etc.).
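As an illustration of this feature construction, the short Python sketch below computes distance-weighted land-cover frequencies over a square neighbourhood; the exponential decay weight, the function name and the toy map are our own illustrative choices, since the text only specifies a decreasing function of the distance.

```python
import numpy as np

def neighbourhood_frequencies(cover, i, j, radius, n_types, decay=1.0):
    """Weighted frequency of each land-cover type around pixel (i, j).

    `cover` is a 2-D integer array of land-cover codes in {0, ..., n_types-1};
    the weight of a neighbour decreases with its Euclidean distance to (i, j)
    (here an exponential decay, which is only one possible choice)."""
    freq = np.zeros(n_types)
    total = 0.0
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            if di == 0 and dj == 0:
                continue
            ii, jj = i + di, j + dj
            if 0 <= ii < cover.shape[0] and 0 <= jj < cover.shape[1]:
                w = np.exp(-decay * np.hypot(di, dj))  # decreasing weight
                freq[cover[ii, jj]] += w
                total += w
    return freq / total if total > 0 else freq

# Toy example: a 5x5 map with 3 land-cover types, square neighbourhood of radius 1.
cover = np.random.randint(0, 3, size=(5, 5))
print(neighbourhood_frequencies(cover, 2, 2, radius=1, n_types=3))
```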
Let us repeat that \(c_{i,j}(t)\) is the land cover for a given pixel on date \(t\). We denote by \(\mathcal{C}_{1},\ldots,\mathcal{C}_{K}\) the different types of land cover. Then, for every \(k=1,\ldots,K\), we try to estimate the probability \(P(c_{i,j}(t)=\mathcal{C}_{k}|X_{i,j}(t))\) that the pixel \((i,j)\) has a land cover equal to \(\mathcal{C}_{k}\) given the vector \(X_{i,j}(t)\); thus, the model is of the following form:
\\[P(c_{i,j}(t)=\\mathcal{C}_{k}|X_{i,j}(t))=f_{k}(X_{i,j}(t)). \\tag{1}\\]
Once a model was chosen through \(f_{k}\), these probabilities were estimated by way of a multilayer perceptron or a generalized linear model and we predicted the type of land cover, \(c_{i,j}(t)\), by the rule of maximum:
\[\operatorname{argmax}_{k=1,\ldots,K}P(c_{i,j}(t)=\mathcal{C}_{k}|X_{i,j}(t)).\]
In both approaches, we estimated \\(f_{k}\\) thanks to a training sample. To that end, we have collected the values of the predictors and of the land cover for many pixels on various dates (see next section for more details); the observations are denoted by \\((X^{(1)},c^{(1)}),\\ldots,(X^{(N)},c^{(N)})\\).
The time and spatial aspects are taken into account jointly by both the polychotomous regression modelling and the multilayer perceptron, and the land cover prediction is performed in a single estimation procedure. This is not the case for the usual GIS approach, which is performed in two steps: it first estimates the land cover probability by modelling a time series and it then introduces a spatial smoothing with environmental constraints.
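The rule of maximum above is elementary to apply once the probabilities are estimated; a minimal sketch (with a hypothetical probability array) reads:

```python
import numpy as np

def predict_land_cover(prob):
    """Maximum-probability rule: `prob` has shape (n_pixels, K) and holds the
    estimated P(c = C_k | X) for each pixel; return the index of the winning
    land-cover type for every pixel."""
    return np.argmax(prob, axis=1)

# Example: three pixels, K = 4 land-cover types.
prob = np.array([[0.10, 0.60, 0.20, 0.10],
                 [0.40, 0.30, 0.20, 0.10],
                 [0.05, 0.15, 0.30, 0.50]])
print(predict_land_cover(prob))  # -> [1 0 3]
```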
3.2. POLYCHOTOMOUS REGRESSION MODELLING
When we wish to predict a categorical response given a random vector, a useful model is the _multiple logistic regression_ (or _polychotomous regression_) model (Hosmer and Lemeshow, 1989). A smooth version of this kind of method can be found in (Kooperberg _et al._, 1997). Applications of these statistical techniques to several situations, such as in medicine or for phoneme recognition, can be found in these two works. Their good behaviour on both theoretical and practical grounds has been emphasized. In our case, where the predictors are both categorical and scalar, we then have the derived model below.
Let us note, for \\(k=1,\\ldots,K\\)
\\[\\theta\\left(\\mathcal{C}_{k}|X_{i,j}(t)\\right)=\\log\\frac{P\\left(c_{i,j}(t)= \\mathcal{C}_{k}|X_{i,j}(t)\\right)}{P\\left(c_{i,j}(t)=\\mathcal{C}_{K}|X_{i,j}(t )\\right)}.\\]
Then, we get the following expression
\\[P\\left(c_{i,j}(t)=\\mathcal{C}_{k}|X_{i,j}(t)\\right)=\\frac{\\exp\\theta\\left( \\mathcal{C}_{k}|X_{i,j}(t)\\right)}{\\sum_{k^{\\prime}=1}^{K}\\exp\\theta\\left( \\mathcal{C}_{k^{\\prime}}|X_{i,j}(t)\\right)}. \\tag{2}\\]
Now, to estimate these conditional probabilities, we use the parametric approach to the polychotomous regression problem, that is the linear model
\\[\\theta\\left(\\mathcal{C}_{k}|X_{i,j}(t)\\right)=\\alpha_{k}+\\sum_{c\\in V_{i,j}(t -1)}\\sum_{l=1}^{K}\\beta_{kl}1\\!\\!1_{[c=\\mathcal{C}_{l}]}+\\sum_{r=1}^{p}\\gamma _{kr}Y_{i,j}^{r}, \\tag{3}\\]
where we recall that \\(V_{i,j}(t-1)\\) are the values of the land cover in the neighbourhood of the pixel \\((i,j)\\) on the previous date \\(t-1\\) and \\((Y_{i,j}^{r})_{r}\\) are the values of the environment variables. Let us call \\(\\delta=(\\alpha_{1},\\ldots,\\alpha_{K-1},\\beta_{1,1},\\ldots,\\beta_{1,K},\\beta_{2,1}\\),
\\(\\ldots,\\beta_{2,K},\\ldots,\\beta_{K-1,1},\\ldots,\\beta_{K-1,K},\\gamma_{1,1}, \\ldots,\\gamma_{1,K},\\ldots,\\gamma_{K-1,1},\\ldots,\\gamma_{K-1,p})\\), the parameters of the model to be estimated. We have to notice that since \\(\\theta\\left(\\mathcal{C}_{K}|X_{i,j}(t)\\right)=0\\), we have \\(\\alpha_{K}=0\\), \\(\\beta_{K,l}=0\\) for all \\(l=1,\\ldots,K\\), and \\(\\gamma_{K,r}=0\\) for all \\(r=1,\\ldots,p\\). We now have to estimate the vector of parameters \\(\\delta\\). For that end, we use a penalized likelihood estimator which is performed on the training sample. Let us write the penalized log-likelihood function for model (3). It is given by
\\[l_{\\varepsilon}(\\delta)\\ =\\ l(\\delta)-\\varepsilon\\sum_{n=1}^{N}\\sum_{k=1}^{K}u_{ nk}^{2}, \\tag{4}\\]where the log-likelihood function is
\\[l(\\delta)=\\log\\left(\\prod_{n=1}^{N}P_{\\delta}\\left(c^{(n)}|X^{(n)}\\right)\\right). \\tag{5}\\]
In this expression, \\(P_{\\delta}(c^{(n)}|X^{(n)})\\) is the value of the probability given by (2) and (3) for the observations \\((X^{(n)},c^{(n)})\\) and the value \\(\\delta\\) of the parameter.
In expression (4), \(\varepsilon\) is a penalization parameter and, for \(k=1,\ldots,K,\)
\[u_{nk}=\theta_{\delta}(\mathcal{C}_{k}|X^{(n)})-\frac{1}{K}\sum_{k^{\prime}=1}^{K}\theta_{\delta}(\mathcal{C}_{k^{\prime}}|X^{(n)}).\]
Our penalized likelihood estimator \(\widehat{\delta}_{\varepsilon}\) satisfies:
\\[\\widehat{\\delta}_{\\varepsilon}=\\text{argmax}_{\\delta\\in\\ \\mathbb{R}^{M}}l_{ \\varepsilon}(\\delta),\\]
where \\(M=K^{2}+(K-1)*p-1\\) denotes the number of parameters to be estimated.
As pointed out by (Kooperberg _et al._, 1997) in the context of smooth polychotomous regression, it is possible that, without the penalty term, the maximization of the log-likelihood function \(l(\delta)\) leads to infinite coefficients \(\beta_{k,l}\). In our model it may be the case, for example, when, for fixed \(k\), the value of the predictor is equal to zero for all \((i,j)\). Actually, this "pathological" case cannot really occur in practice, but for classes \(k\) with a small number of members, the value of the predictor is low and a numerical instability then appears when maximizing the log-likelihood. The form of the penalty, based on the difference between the value \(\theta_{\delta}(\mathcal{C}_{k}|X^{(n)})\) for class \(k\) and the mean over all the classes, thus has the aim of preventing this instability by forcing \(\theta_{\delta}(\mathcal{C}_{k}|X^{(n)})\) to be not too far from the mean. On the other hand, for reasonable values of \(\varepsilon\), we can expect that the penalty term does not affect the estimation of the parameters very much, while it guarantees numerical stability. Finally, numerical maximization of the penalized log-likelihood function is achieved by a Newton-Raphson algorithm.
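To fix ideas, the following sketch evaluates the penalised log-likelihood of Eqs. (4)-(5) for the linear model (3); the data layout and names are hypothetical, and the actual maximisation (left out here) is done by Newton-Raphson, or equivalently by any generic optimiser applied to the negative of this function.

```python
import numpy as np

def penalised_loglik(Theta, X, y, eps):
    """Penalised log-likelihood of the linear polychotomous model.

    Theta : (K-1, q+1) coefficients (intercept first); the K-th class is the
            reference class, with theta_K = 0 identically.
    X     : (N, q) numeric predictors (neighbourhood frequencies and
            environmental variables, already in numeric/disjunctive form).
    y     : (N,) class labels in {0, ..., K-1}.
    eps   : penalisation parameter of Eq. (4)."""
    N = X.shape[0]
    Xb = np.hstack([np.ones((N, 1)), X])                 # intercept column
    theta = np.hstack([Xb @ Theta.T, np.zeros((N, 1))])  # theta_K = 0
    tmax = theta.max(axis=1, keepdims=True)              # stabilised log-sum-exp
    logZ = (tmax + np.log(np.exp(theta - tmax).sum(axis=1, keepdims=True))).ravel()
    loglik = (theta[np.arange(N), y] - logZ).sum()       # Eq. (5)
    u = theta - theta.mean(axis=1, keepdims=True)        # u_nk of Eq. (4)
    return loglik - eps * (u ** 2).sum()

# Toy check: 20 pixels, 3 predictors, K = 3 classes, random coefficients.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(20, 3)), rng.integers(0, 3, size=20)
Theta = rng.normal(scale=0.1, size=(2, 4))
print(penalised_loglik(Theta, X, y, eps=0.1))
```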
3.3. MULTILAYER PERCEPTRON
Neural networks have a great adaptability to many statistical problems and especially to overcoming the difficulties of nonlinear problems, even if the predictors are highly correlated; thus it is not surprising to find them used in chronological series prediction ((Bishop, 1995), (Lai and Wong, 2001) and (Parlitz and Merkwirth, 2000)). The main interest of neural networks is their ability to approximate any function with the desired precision (universal approximation): see, for instance, (Hornik, 1991).
Here we propose to estimate, in model (1), the function \\(f_{k}\\) in the form of a multilayer perceptron with one hidden layer (see Figure 4), \\(\\psi\\), which is a function from \\(\\mathbb{R}^{q}\\) to \\(\\mathbb{R}\\) that can be written, for all \\(x\\) in \\(\\mathbb{R}^{q}\\), as
\\[\\psi_{w}(x)=\\sum_{i=1}^{q_{2}}w_{i}^{(2)}g\\left(\\langle x,w_{i}^{(1)}\\rangle+w _{i,0}^{(1)}\\right),\\]
where \\(q_{2}\\) in \\(\\mathbb{N}\\) is the number of neurons on the hidden layer, \\((w_{i}^{(1)})_{i=1,\\ldots,q_{2}}\\) (respectively \\((w_{i}^{(2)})_{i=1,\\ldots,q_{2}}\\), \\((w_{i,0}^{(1)})_{i=1,\\ldots,q_{2}}\\)) are in \\(\\mathbb{R}^{q}\\) (resp. \\(\\mathbb{R}\\)) and are called weights of the first layer (resp. weights of the second layer, bias) and where \\(g\\), the activation function, is a sigmoid; for example, \\(g(x)=\\frac{1}{1+e^{-x}}\\).
Then, the output of the multilayer perceptron is a smooth function (here it is infinitely differentiable) of its input. This property ensures that the neural network takes into account the spatial aspect of the data set, since two neighbouring pixels have "close" values for their predictor variables.
To determine the optimal value for weights \\(w=((w_{i}^{(1)})_{i},(w_{i}^{(2)})_{i},(w_{i,0}^{(1)})_{i})\\), we minimized, as it is usual, the quadratic error on the training sample: for all \\(k=1,\\ldots,K\\), we chose
\\[w_{opt}^{k}=\\operatorname*{argmin}_{w\\in\\mathbb{R}^{q_{2}(q+2)}}\\sum_{n=1}^{N} \\left[c_{k}^{(n)}-\\psi_{w}^{k}(X^{(n)})\\right]^{2}, \\tag{6}\\]
where \\(c^{(n)}\\) and the categorical data in \\(X^{(n)}\\) are written on a disjunctive form. This can be performed by classical numerical methods of the first or the second order (such as gradient descent or conjugate gradients, ) but faces local minima problems. We explain in section 4 how we overcome this difficulty. Finally, (White, 1989) gives many results that ensure the convergence of the optimal empirical parameters to the optimal theoretical parameters.
4. PRACTICAL APPLICATION TO THE DATA SETS
In order to compare the two approaches, we applied the same methodology: we first determined the optimal parameters for each approach (training step, see below) and then we used the first maps to predict the last one and compared the predictions to the real map (comparison step, see section 5).
As usual in statistical methods, there are two stages in the training step: the _estimation step_ and the _validation step_.
* The _estimation step_ consists in estimating the parameters of the models (either for the polychotomous regression or the neural network);
* The _validation step_ allows us to choose, for both methodologies, the best neighbourhood; for the polychotomous regression, the penalization parameter; and, for the neural network, the number of neurons on the hidden layer. Concerning the neighbourhood, we only considered square shapes, so choosing a neighbourhood is equivalent, in our procedure, to determining its size.
For the Sierra Nevada, we saw that large areas are constant, thus we only used the pixels for which at least one neighbour has a different land cover. These pixels are called "frontier pixels"; the others were considered as constant (see Figure 5). For the generalized linear model, we used all the frontier pixels of the 1957/1974 maps for the estimation set and the whole 1974/1987 maps for the validation set. We then constructed the estimated 2001 map from the 1987 one. For the multilayer perceptron, we reduced the training set size in order not to have huge computational times when minimizing the loss function. Then, estimation and validation data sets were chosen randomly among the frontier pixels of the 1957/1974 and 1974/1987 maps.
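A possible implementation of this frontier-pixel extraction is sketched below; note that np.roll wraps around the map borders, which a careful implementation would mask out.

```python
import numpy as np

def frontier_pixels(cover, order=1):
    """Boolean mask of 'frontier' pixels: pixels with at least one neighbour
    (within `order` cells) carrying a different land-cover code.
    Caveat: np.roll wraps at the borders; edge pixels should be masked in a
    production version."""
    mask = np.zeros_like(cover, dtype=bool)
    for di in range(-order, order + 1):
        for dj in range(-order, order + 1):
            if di == 0 and dj == 0:
                continue
            shifted = np.roll(np.roll(cover, di, axis=0), dj, axis=1)
            mask |= shifted != cover
    return mask

cover = np.array([[1, 1, 2],
                  [1, 1, 2],
                  [3, 3, 2]])
print(frontier_pixels(cover, order=1))
```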
For the Garrotxes data set, due to the fact that we only had 3 maps and far fewer pixels, we had to use the 1980/1990 maps for the estimation step (only their frontier pixels for the MLP) and the whole 1990/2000 maps for the validation step. This led to a biased estimate when constructing the 2000 map from the 1990 map but, as our purpose is to compare two models and not to assess the significance of the error rate, we do not consider this bias as important.
4.1. POLYCHOTOMOUS REGRESSION
* The _estimation step_ produces the estimated parameter vector \\(\\widehat{\\delta}_{\\varepsilon}\\) of the parameters \\(\\delta_{\\varepsilon}\\) of model (3) for given neighbourhood and penalization parameter \\(\\varepsilon\\). This step was repeated for various values concerning both neighbourhood and penalization parameter.
* _Validation step:_ Once given an estimated parameter vector \(\widehat{\delta}_{\varepsilon}=(\widehat{\alpha}_{1},\ldots,\widehat{\alpha}_{K-1},\widehat{\beta}_{1,1},\ldots,\widehat{\beta}_{1,K},\widehat{\beta}_{2,1},\ldots,\widehat{\beta}_{2,K},\ldots,\widehat{\beta}_{K-1,1},\ldots,\widehat{\beta}_{K-1,K}\), \(\widehat{\gamma}_{1,1},\ldots,\widehat{\gamma}_{1,p},\ldots,\widehat{\gamma}_{K-1,1},\ldots,\widehat{\gamma}_{K-1,p})\), the quantities \[\widehat{P}\left(c_{i,j}(t)=\mathcal{C}_{k}|X_{i,j}(t)\right)=\frac{\exp\widehat{\theta}\left(\mathcal{C}_{k}|X_{i,j}(t)\right)}{\sum_{k^{\prime}=1}^{K}\exp\widehat{\theta}\left(\mathcal{C}_{k^{\prime}}|X_{i,j}(t)\right)},\] were calculated, for all \(k=1,\ldots,K\), with \[\widehat{\theta}\left(\mathcal{C}_{k}|X_{i,j}(t)\right)=\widehat{\alpha}_{k}+\sum_{c\in V_{i,j}(t-1)}\sum_{l=1}^{K}\widehat{\beta}_{kl}1\!\!1_{[c=\mathcal{C}_{l}]}+\sum_{r=1}^{p}\widehat{\gamma}_{kr}Y_{i,j}^{r}.\] At each pixel \((i,j)\) of the predicted map on date \(t\), we affected the most probable vegetation type, namely the \(\mathcal{C}_{k}\) which maximizes \[\left\{\widehat{P}\left(c_{i,j}(t)=\mathcal{C}_{k}|X_{i,j}(t)\right)\right\}_{k=1,\ldots,K}.\] Programs were written in R (see (R Development Core Team, 2005)) and are available on request.
4.2. MULTILAYER PERCEPTRON
We used a neural network with one hidden layer having \\(q_{2}\\) neurons (where \\(q_{2}\\) is a parameter to be calibrated). The inputs of the neural network were:
* For the _time series_, the disjunctive form of the value of the pixel;
* For the _spatial aspect_, the weighted frequency of each type of land cover in the neighbourhood of the pixel;
* the environmental variables.
The output was the estimation of the probabilities (1).
The estimation was also made in two stages:
* The _estimation step_ produces the estimated weights as described in (6) for a given number of neurons \(q_{2}\) and a given neighbourhood. For this step, the neural network was trained with an early stopping procedure, which stops the optimization algorithm when the validation error (calculated on a part of the data set) starts to increase (see (Bishop, 1995)). This step was repeated for various values of both the neighbourhood and \(q_{2}\).
* _Validation step:_ once an estimation of the optimal weights was given, we chose \(q_{2}\) and the size of the neighbourhood, as for the previous model. Moreover, in order to escape the local minima during the training step, we trained the perceptrons many times for each value of the neighbourhood and of \(q_{2}\) with various training sets; the "best" perceptron was then chosen as the one minimizing the validation error over both the values of the parameters (size of the neighbourhood and \(q_{2}\)) and the results of the optimization procedure (see the sketch below).
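The whole selection loop can be summarised by the following sketch, in which train_fn and val_error_fn are placeholders standing in for the actual toolbox routines (they are not part of any real API):

```python
import numpy as np

def select_best_perceptron(train_fn, val_error_fn, sizes, q2_values, n_restarts=5):
    """Grid search over neighbourhood size and number of hidden neurons q2,
    with several random restarts per setting to escape local minima; the
    network with the smallest validation error is kept."""
    best = (np.inf, None)
    for size in sizes:
        for q2 in q2_values:
            for seed in range(n_restarts):
                net = train_fn(size, q2, seed)  # early-stopped training run
                err = val_error_fn(net)         # error on the validation set
                if err < best[0]:
                    best = (err, (size, q2, seed))
    return best

# Dummy stand-ins so the sketch runs as-is.
rng = np.random.default_rng(2)
dummy_train = lambda size, q2, seed: (size, q2, seed)
dummy_val = lambda net: rng.random() + 0.01 * net[1]
print(select_best_perceptron(dummy_train, dummy_val, [3, 5, 7], [4, 8]))
```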
Programs were made using Matlab (Neural Networks Toolbox, see (Beale and Demuth, 1998)) and are available on request.
5. COMPARISON AND DISCUSSION
The validation step led us to select the following parameters (Table 1):
[Table 1 about here.]
After the two models were trained, we built the predicted map for the year 2000 (Garrotxes data set) and 2001 (Alta Alpujarra data set). The performances of the two models were compared with a GIS approach.
For the Garrotxes data set, the results are summarized in Table 2 and the frequencies of errors for each land cover type were calculated on the pixels which really are of this land cover type. We focus on the 6 most frequent land cover types, since the number of agriculture pixels tends to zero. In Figure 6, we can see the three predictive maps given by our approaches and the GIS approach, which can be confronted with the real map.
[Table 2 about here.]
[Figure 6 about here.]
For the Alta Alpujarra data set, the results are summarized in Table 3 (land cover types covering under 5 % of the area have been omitted). Predicted maps and real maps are compared in Figure 7.
First of all, the predictive maps provided by the two statistical methods are coherent, smooth and close to reality. This can also be seen through the good error rates (about 25 % - 27 % for the Garrotxes data set and 9 % - 12 % for the Alta Alpujarra), which are clearly a good performance considering the poverty of the data (we only had 3 or 4 dates to train the models).
Furthermore, the striking fact is that the "automatic" statistical approaches did as well (Garrotxes data set) or even much better (Alta Alpujarra) than the guided GIS approach. This is an interesting point in order to help improve the classical geographical approach to predicting land cover, and to better understand the environmental changes in time and space. Moreover, the "automatic" statistical methods were much faster than the GIS, as they do not use any expert knowledge, which takes a long time to be modelled and needs to be redone for each area. On the contrary, the polychotomous regression modelling and the multilayer perceptron approaches did not lead to significant differences on these data sets. The first method was much faster to train and it was then quite attractive to use. However, we think that, from a general point of view, the greater flexibility of the multilayer perceptron could be useful for predicting land cover on other data sets where a parametric model could fail.
The main advantage of the automatic statistical approaches lies in the fact that they simultaneously take into account the spatio-temporal aspect of the problem and also the environmental variables. GIS works in two steps: it first predicts the number of pixels for each land cover type by a simple temporal model and then takes into account the spatial aspect and the environmental variables to allocate these pixels spatially. This could partially explain why GIS performed worse on the Alta Alpujarra data set, as coniferous reforestation used to be important in the 60's and has since been given up. This led the GIS to predict, in the 2001 map, many more coniferous reforestation pixels than in the real map: 18.8 % of the pixels were predicted in the coniferous reforestation type against 7.9 % for the multilayer perceptron, 9.6 % for the polychotomous regression modelling and 9.2 % for the real map. The GIS approach then had a much lower error rate on the coniferous reforestation land cover type but a bigger one for the other types.
Finally, looking further into the misclassification rates for the various land cover types, we can see that the most dynamic land cover types were harder to predict: this is the case, for instance, for the scrubs in the Garrotxes area, where they tended to grow fast and become deciduous forests; this is also the case, in the Alta Alpujarra, for the fallows and irrigated croplands, because agricultural lands were tending to be abandoned. These dynamics could be better predicted by adding pertinent information for these kinds of land cover types (the density of the scrubs, for example, can help to know whether or not they can become forests).
6. CONCLUSION
Finally, this work shows the great potential of the two statistical models for predictive prospection on geographical data. These models had performances as good as the GIS approach, and we can hope that a combination of the two points of view (statistics and GIS) can improve the land cover predictions: the empirical first step of the GIS could be improved by being replaced by one of these statistical approaches. This issue, which is of great interest for geographers, is still under study, as the GIS approach was performed through pre-made programs and thus has to be totally re-thought to that aim.
Another aspect that has to be worked on is the form of the data: for example, we underlined that an information on the density of the scrubs is needed to better understand their evolution. This could help geographers to better understand what is of interest for predicting the land cover evolution for their future studies.
7. ACKNOWLEDGEMENTS
The authors are grateful to the Ministerio de Ciencia y Tecnologia, which supports this research (Plan nacional de investigacion cientifica, Desarrollo e innovacion tecnologica, BIA 2003_01499).
The authors are also grateful to the anonymous referees for their detailed and constructive comments and suggestions which have substantially improved the manuscript.
BIBLIOGRAPHY
(1) Beale, M., Demuth, H. (1998). _Neural network toolbox user's guide_. Version 3. The MathWorks Inc.
(2) Bishop, C. (1995). _Neural Networks for Pattern Recognition_. New York: Oxford University Press.
(3) Cardot, H., Faivre, R., Goulard, M. (2003). Functional approaches for predicting land use with the temporal evolution of coarse resolution remote sensing data. _Journal of Applied Statistics_, 30: 1185-1199.
(4) Hornik, K. (1991). Approximation capabilities of multilayer feedforward networks. _Neural Networks_, 4(2): 251-257.
(5) Hosmer, D., Lemeshow, S. (1989). _Applied logistic regression_. New York: Wiley.
(6) Kooperberg C., Bose, S., Stone, J. (1997). Polychotomous regression. _Journal of the American Statistical Association_, 92: 117-127.
(7) Lai, T. and Wong, S. (2001). Stochastic neural networks with applications to nonlinear time series. _Journal of the American Statistical Association_, 96(455): 968-981.
(8) R Development Core Team (2005). _R: a language and environment for statistical computing_. Vienna, Austria: R Foundation for Statistical Computing.
(9) Paegelow, M., Camacho Olmedo M.T. (2005). Possibilities and limits of prospective GIS land cover modeling - a compared case study: Garrotxes (France) and Alta Alpujarra Granadina (Spain). _International Journal of Geographical Information Science_, 19(6): 697-722.
(10) Paegelow, M., Villa, N., Cornez, L., Ferraty, F., Ferre, L. and Sarda, P. (2004). Modelisations prospectives de l'occupation du sol. Le cas d'une montagne mediterraneenne. _Cybergeo_, 295.
(11) Parlitz, U., Merkwirth, C. (2000). Nonlinear prediction of spatio-temporal time series. In _ESANN'2000 proceedings_. Bruges, Belgium: 317-322.
(12) White, H. (1989). Learning in artificial neural network: a statistical perspective. _Neural Computation_, 1: 425-464.
Figure 1: Land cover for the Garrotxes (1980 - left) and for the Alta Alpujarra (1957 - right)
Figure 2: Examples of a numerical variable (elevation for the Garrotxes - left) and a categorical one (ground geological type for the Alta Alpujarra - right)
Figure 3: An example of neighbourhood
Figure 4: Multilayer perceptron with one hidden layer
Figure 5: Frontier pixels (order 4) for the 1957 map (Alta Alpujarra)
Figure 6: Predictive maps for the various approaches on date 2000 and real map (bottom right)
Figure 7: Predictive maps for the various approaches on date 2001 and real map (bottom right)
\begin{table}
\begin{tabular}{|l|c|c|}
\cline{2-3}
\multicolumn{1}{c|}{} & **Garrotxes** & **Alta Alpujarra** \\
\hline
**Poly. regression** & & \\
Size of neighbourhood & 9 & 1 \\
\(\varepsilon\) & 10 & 0.1 \\
\hline \hline
**ML perceptron** & & \\
Size of neighbourhood & 7 & 4 \\
\(q_{2}\) & 8 & 30 \\
Perceptron size & 19-8-7 & 35-30-9 \\
\hline
\end{tabular}
\end{table}
Table 1: Parameters selected by the validation step
\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
Land cover & Frequency & Poly. Regression & ML perceptron & GIS \\
types & in the area & error rate & error rate & error rate \\
\hline
Coniferous forests & 40.9 \% & 11.9 \% & 10.6 \% & 11.4 \% \\
Deciduous forests & 11.7 \% & 51.7 \% & 45.8 \% & 55.3 \% \\
Scrubs & 15.1 \% & 57.1 \% & 54.5 \% & 51.9 \% \\
Broom lands & 21.6 \% & 14.4 \% & 16.2 \% & 17.1 \% \\
Grass pastures & 5.7 \% & 59.2 \% & 59.4 \% & 54.4 \% \\
Grasslands & 4.8 \% & 25.6 \% & 19.3 \% & 30.4 \% \\
\hline
**Overall** & & **27.2 \%** & **25.7 \%** & **27.2 \%** \\
\hline
\end{tabular}
\end{table}
Table 2: Misclassification rates for the Garrotxes
\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
Land cover & Frequency & Poly. Regression & ML perceptron & GIS \\
types & in the area & error rate & error rate & error rate \\
\hline
Deciduous forests & 10.9 \% & 3.5 \% & 2.6 \% & 14.3 \% \\
Scrubs & 33.0 \% & 3.1 \% & 1.4 \% & 15.2 \% \\
Pasture & 20.8 \% & 0.6 \% & 0 \% & 12.5 \% \\
Coniferous refor. & 9.23 \% & 3.5 \% & 16.3 \% & 1.9 \% \\
Fallows & 18.8 \% & 32.5 \% & 41.4 \% & 46.8 \% \\
Irrigated cropland & 5.8 \% & 8.9 \% & 6.8 \% & 38.9 \% \\
\hline
**Overall** & & **9.0 \%** & **11.28 \%** & **21.1 \%** \\
\hline
\end{tabular}
\end{table}
Table 3: Misclassification rates for the Alta Alpujarra
# Natural Priors, CMSSM Fits and LHC Weather Forecasts
Benjamin C Allanach\\({}^{1}\\), Kyle Cranmer\\({}^{2}\\), Christopher G Lester\\({}^{3}\\) and Arne M Weber\\({}^{4}\\)
\\({}^{1}\\) DAMTP, CMS, Wilberforce Road, Cambridge CB3 0WA, UK
\({}^{2}\) Dept. of Physics, New York University, New York, USA
\\({}^{3}\\) Cavendish Laboratory, J.J. Thomson Avenue, Cambridge CB3 0HE, UK
\({}^{4}\) Max Planck Inst. für Phys., Föhringer Ring 6, D-80805 Munich, Germany
## 1 Introduction
The impending start-up of the LHC makes this a potentially exciting time for supersymmetric (SUSY) phenomenology. Anticipating the arrival of LHC data, a small industry has grown up aiming to forecast the LHC's likely discoveries. There are big differences between the nature of the questions answered by a forecast, and the questions that will be answered by the experiments themselves when they have acquired compelling data. A weather forecast predicting "severe rain in Cambridgeshire at the end of the week" should not be confused with a discovery of water. However, the forecast _is_ something which influences short-term flood plans and will set priorities within the list of "urgent repairs needed by flood defences".
LHC weather forecasts for sparticle masses or cross sections set priorities among signals needing to be investigated, or among expensive Monte Carlo background samples competing to be generated. Forecasts can influence the design parameters of future experiments and colliders. In advance of the LHC, we would like to have some idea of what luminosity will be required in order to detect and/or measure supersymmetry. There is also the question of which signatures are likely to be present.
In order to answer questions such as these, a programme of fits to simple SUSY models has proceeded in the literature [4, 5, 6, 7, 8]. The fits that we are interested in have made the universality assumption on soft SUSY breaking parameters: the scalar masses are set to be equal to \\(m_{0}\\), the trilinear scalar couplings are set to be \\(A_{0}\\) multiplied by the corresponding Yukawa couplings and all gaugino masses are set to be equal to \\(M_{1/2}\\). Such assumptions, when applied to the MSSM, are typically called mSUGRA or the constrained minimal supersymmetric standard model. The universality conditions are typically imposed at a gauge unification scale \\(M_{GUT}\\sim 2\\times 10^{16}\\) GeV. The universality conditions are quite strong, but allow phenomenological analysis of a varied subset of MSSM models. The universality assumption is not unmotivated since, for example, several string models [9] predict MSSM universality.
Until recently, CMSSM fits have relied upon fixed input parameters [1, 2, 3, 4, 5, 6, 7] in order to reduce the dimensionality of the CMSSM parameter space, rendering scans viable. Such analyses provide a good idea of what the relevant physical processes are in the various parts of parameter space. More recently, however, it has been realised that many-parameter scans are feasible if one utilises a Markov Chain Monte Carlo (MCMC) [6]. Such scans were used to perform a multi-dimensional Bayesian analysis of indirect constraints [10]. A particularly important constraint came from the relic density of dark matter \(\Omega_{DM}h^{2}\), assumed to consist solely of neutralinos, the lightest of which is the lightest supersymmetric particle (LSP). Under the assumption of a discrete symmetry such as \(R-\)parity, the LSP is stable and thus still present in the universe after being thermally produced in the big bang. The results of ref. [10] were confirmed by an independent study [11], which also examined the prospects of direct dark matter detection. Since then, a study of the \(\mu<0\) branch of the CMSSM was performed [12] and implications for Tevatron Higgs searches have been discussed [13].
It is inevitable that LHC forecasts will contain a large degree of uncertainty. This is unavoidable as, in the absence of LHC data, constraints are at best indirect and also few in number. Within a Bayesian framework, the components of the answer that are incontestable lie within a simple "likelihood" function, whereas the parts which parameterise our ignorance concerning the nature of the parameter space we are about to explore are rolled up into a prior. By separating components into these two domains, we have an efficient means of testing not only what the data is telling us about new physics, but also of warning us of the degree to which the data is (or isn't) compelling enough to disabuse us of any prior expectations we may hold.
In [10, 11], Bayesian statements were made about the posterior probability density of the CMSSM, after indirect data had been taken into account. The final result of a Bayesian analysis is the posterior probability density function (pdf), which in previous MCMC fits, was set to be
\\[p(m_{0},M_{1/2},A_{0},\\tan\\beta,s|\\text{data})=p(\\text{data}|m_{0},M_{1/2},A_ {0},\\tan\\beta,s)\\frac{p(m_{0},M_{1/2},A_{0},\\tan\\beta,s)}{p(\\text{data})} \\tag{1}\\]for certain Standard Model (SM) inputs \\(s\\) and ratio of the two MSSM Higgs vacuum expectation values \\(\\tan\\beta=v_{2}/v_{1}\\). The likelihood \\(p(\\mbox{data}|m_{0},M_{1/2},A_{0},\\tan\\beta,s)\\) is proportional to \\(e^{-\\chi^{2}/2}\\), where \\(\\chi^{2}\\) is the common statistical measure of disagreement between theoretical prediction and empirical measurement. The prior \\(p(m_{0},M_{1/2},\\)\\(A_{0},\\tan\\beta,s)\\) was taken somewhat arbitrarily to be flat (i.e. equal to a constant) within some ranges of the parameters, and zero outside those ranges. Eq. 1 has an implied measure for the input parameter. If, for example, we wish to extract the posterior pdf for \\(m_{0}\\), all other parameters are marginalised over
\\[p(m_{0}|\\mbox{data})=\\int dM_{1/2}\\ dA_{0}\\ d\\tan\\beta\\ ds\\ p(m_{0},M_{1/2},A_{0}, \\tan\\beta,s|\\mbox{data}). \\tag{2}\\]
Thus a flat prior in, say, \\(\\tan\\beta\\) also corresponds to a choice of measure in the marginalisation procedure: \\(\\int d\\tan\\beta\\). Before one has a variety of accurate direct data (coming, for instance, from the LHC), the results depend somewhat upon what prior pdf is assumed.
In all of the previous MCMC fits, Higgs potential parameters \\(\\mu\\) and \\(B\\) were traded for \\(M_{Z}\\) and \\(\\tan\\beta\\) using the electroweak symmetry breaking conditions, which are obtained by minimising the MSSM Higgs potential and obtaining the relations [16]:
\\[\\mu B = \\frac{\\sin 2\\beta}{2}(\\bar{m}_{H_{1}}^{2}+\\bar{m}_{H_{2}}^{2}+2\\mu^ {2}), \\tag{3}\\] \\[\\mu^{2} = \\frac{\\bar{m}_{H_{1}}^{2}-\\bar{m}_{H_{2}}^{2}\\tan^{2}\\beta}{\\tan ^{2}\\beta-1}-\\frac{M_{Z}^{2}}{2}. \\tag{4}\\]
Eqs. 3,4 were applied at a scale \(Q=\sqrt{m_{\tilde{t}_{1}}m_{\tilde{t}_{2}}}\), i.e. the geometrical average of the two stop masses1. \(|\mu|\) was set in order to obtain the empirically measured central value of \(M_{Z}\) in Eq. 4 and then Eq. 3 was solved for \(B\) for a given input value of \(\tan\beta\) and sign(\(\mu\)). The flat prior in \(\tan\beta\) in Eq. 1 does not reflect the fact that \(\tan\beta\) (as well as \(M_{Z}\)) is a derived quantity from the more fundamental parameters \(\mu\), \(B\). It also does not contain information about regions of fine-tuned parameter space, which we may consider to be less likely than regions which are less fine-tuned. Ref. [15] clearly illustrates that if one includes \(\mu\) as a fundamental MSSM parameter, LEP has ruled out the majority of the natural region of MSSM parameter space.
Footnote 1: Higgs potential loop corrections are taken into account by writing [16] \(\bar{m}_{H_{i}}^{2}\equiv m_{H_{i}}^{2}-t_{i}/v_{i}\), \(t_{i}\) being the tadpoles of Higgs \(i\) and \(v_{i}\) being its vacuum expectation value.
A conventional measure of fine-tuning [26] is
\\[f=\\max_{p}\\left[\\frac{d\\ln M_{Z}^{2}}{d\\ln p}\\right], \\tag{5}\\]
where the maximisation is over \(p\in\{m_{0},M_{1/2},A_{0},\mu,B\}\). Here, Eq. 4 is viewed as providing a prediction for \(M_{Z}\) given the other MSSM parameters. When the SUSY parameters are large, a cancellation between various terms in Eq. 4 must be present in order to give \(M_{Z}\) at the experimentally measured value. Eq. 5 is supposed to provide a measure of how sensitive this cancellation is to the initial parameters. In Ref. [14], a prior \(\propto 1/f\) was shown to produce fits that were not wildly different to those with a flat prior, but the discrepancy illustrated the level of uncertainty in the fits. The new (arguably less arbitrary) prior discussed in section 2 will be seen to lead to much larger differences.
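As an aside, Eq. 5 is straightforward to estimate by central finite differences in \(\log p\) once a spectrum calculator is available; in the sketch below, predict_MZ2 is a hypothetical stand-in for such a calculator, and the toy function is only there to make the code run.

```python
import numpy as np

def fine_tuning(predict_MZ2, params, h=1e-3):
    """Conventional fine-tuning measure of Eq. 5: the largest
    |d ln M_Z^2 / d ln p| over p in {m0, M12, A0, mu, B}, estimated by
    central finite differences.  `predict_MZ2` stands in for a spectrum
    calculator returning M_Z^2 from the high-scale parameters."""
    f = 0.0
    for key in params:
        up, down = dict(params), dict(params)
        up[key] *= 1.0 + h    # shifts log p by approximately +h
        down[key] *= 1.0 - h  # shifts log p by approximately -h
        deriv = (np.log(predict_MZ2(up)) - np.log(predict_MZ2(down))) / (2 * h)
        f = max(f, abs(deriv))
    return f

# Toy stand-in with an ad hoc dependence, only to make the sketch runnable.
toy = lambda p: abs(p["mu"] ** 2 - 0.5 * p["m0"] ** 2 + p["M12"] ** 2) + 1.0
print(fine_tuning(toy, {"m0": 100.0, "M12": 250.0, "A0": 100.0,
                        "mu": 350.0, "B": 50.0}))
```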
Here, we extend the existing literature in two main ways: firstly, we construct a natural prior in the more fundamental parameters \\(\\mu\\), \\(B\\), showing in passing that it can be seen to act as a check on fine-tuning. We display the MCMC fit results from such priors. Secondly, we present posterior pdfs for LHC supersymmetric (SUSY) production cross-sections. These have not been calculated before. We also present a comparison with a more frequentist statistics oriented fit, utilising the profile likelihood. The difference between the flat-priors Bayesian analysis and the profile likelihood contains information about volume effects in the marginalised dimensions of parameter space. We describe an extremely simple and effective way to extract profile likelihood information from the MCMC chains already obtained from the Bayesian analysis with flat priors.
In section 2 below, we derive the new, more natural form for the prior distributions mentioned above. In section 3, we describe our calculation of the likelihood. In section 4, we investigate the limits on parameter space and pdfs for sparticle masses resulting from the new, more natural priors. We go on to discuss what this prior-dependence means in terms of the "baseline SUSY production" for the LHC, and find out what it tells us about the "error-bars" which should be attached to this and earlier LHC forecasts. In section 5, we present our results in the profile likelihood format. In the following section 6 we present pdfs for total SUSY production cross-sections at the LHC. Section 7 contains a summary and conclusions. In Appendix A, we compare the fit results assuming the flat \(\tan\beta\) priors with a well-known result in the literature in order to find the cause of an apparent discrepancy.
## 2 Prior Distributions
We wish to start with a measure defined in terms of fundamental parameters \\(\\mu\\) and \\(B\\), hence
\\[p(\\mbox{all data}) = \\int d\\mu\\ dB\\ dA_{0}\\ dm_{0}\\ dM_{1/2}\\ ds\\left[p(m_{0},M_{1/2}, A_{0},\\mu,B,s)\\right. \\tag{1}\\] \\[\\left.p(\\mbox{all data}|m_{0},M_{1/2},A_{0},\\mu,B,s)\\right],\\]
where \\(p(\\mbox{all data}|m_{0},M_{1/2},A_{0},\\mu,B,s)\\) is the likelihood of the data with respect to the CMSSM and \\(p(m_{0},M_{1/2},A_{0},\\mu,B,s)\\) is the prior probability distribution for CMSSM and SM parameters. Of these two terms, the former is well defined, while the latter is open to a degree of interpretation due to the lack of pre-existing constraints on \\(m_{0}\\), \\(M_{1/2}\\), \\(A_{0}\\), \\(\\mu\\), and \\(B\\)2. We may approximately factorise the unambiguous likelihood into two independent pieces: one for \\(M_{Z}\\) and one for other data not including \\(M_{Z}\\), the latter defined to be \\(p({\\rm data}|m_{0},M_{1/2},A_{0},\\mu,B,s)\\)
Footnote 2: If an earlier experiment had already set clear constraints on \\(m_{0}\\), \\(M_{1/2}\\), \\(A_{0}\\), \\(\\mu\\), \\(B\\), then even the prior would be well defined, being the result of that previous experiment. As things stand, however, we don’t know anything about the likely values of these parameters, and so the prior must encode our ignorance/prejudice as best we can.
\\[p({\\rm all\\ data}|m_{0},M_{1/2},A_{0},\\mu,B,s) \\tag{2}\\] \\[\\approx p({\\rm data}|m_{0},M_{1/2},A_{0},\\mu,B,s)\\times p(M_{Z}|m_{0}, M_{1/2},A_{0},\\mu,B,s)\\] \\[\\approx p({\\rm data}|m_{0},M_{1/2},A_{0},\\mu,B,s)\\times\\delta(M_{Z}-M _{Z}^{\\rm cen}).\\]
In the last step we have approximated the \\(M_{Z}\\) likelihood by a delta function on the central empirical value \\(M_{Z}^{\\rm cen}\\) because its experimental uncertainties are so tiny. According to the Particle Data Group [17], the current world average measurement is \\(M_{Z}=91.1876\\pm 0.0021\\) GeV.
Using Eqs. 3,4 to calculate a Jacobian factor and substituting Eq. 2 into Eq. 1, we obtain
\\[p({\\rm all\\ data}) \\approx \\int d\\tan\\beta\\ dA_{0}\\ dm_{0}\\ dM_{1/2}\\left[r(B,\\mu,\\tan\\beta)\\right. \\tag{3}\\] \\[\\left.p({\\rm data}|m_{0},M_{1/2},A_{0},\\mu,B,s)p(m_{0},M_{1/2},A_{ 0},\\mu,B,s)\\right]_{M_{Z}=M_{Z}^{\\rm cen}},\\]
where the condition \\(M_{Z}=M_{Z}^{\\rm cen}\\) can be applied by using the constraints of Eqs. 3,4 with \\(M_{Z}=M_{Z}^{\\rm cen}\\). The Jacobian factor
\\[r(B,\\mu,\\tan\\beta)=M_{Z}\\left|\\frac{B}{\\mu\\tan\\beta}\\frac{\\tan^{2}\\beta-1}{ \\tan^{2}\\beta+1}\\right| \\tag{4}\\]
disfavours high values of \(\tan\beta\) and \(\mu/B\) and comes from our more natural initial parameterisation of the Higgs potential parameters in terms of \(\mu\), \(B\). We will refer below to \(r(B,\mu,\tan\beta)\) in Eq. 9 as the "REWSB prior". Note that, if we consider \(B\to\tilde{B}\equiv\mu B\) to be more fundamental than the parameter \(B\), one loses the factor of \(\mu\) in the denominator of \(r\) by sending \(\int dB\ d\mu\to\int d\tilde{B}\ d\mu\ \mu\). However, in the present paper we retain \(B\) as a fundamental parameter because of its appearance in many supergravity mediation models of SUSY breaking.
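For reference, the REWSB prior factor of Eq. 4 is trivial to evaluate; a short sketch (with the PDG central value of \(M_{Z}\) hard-wired, and all other names being our own illustrative choices) is:

```python
def rewsb_prior(B, mu, tan_beta, MZ=91.1876):
    """Jacobian factor r(B, mu, tan beta) of Eq. 4, arising from trading the
    fundamental Higgs-potential parameters (mu, B) for (M_Z, tan beta)."""
    return MZ * abs(B / (mu * tan_beta)
                    * (tan_beta ** 2 - 1) / (tan_beta ** 2 + 1))

# The factor penalises large tan(beta) and large mu/B:
for tb in (2.0, 10.0, 50.0):
    print(tb, rewsb_prior(B=50.0, mu=350.0, tan_beta=tb))
```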
It remains for us to define the prior, \\(p(m_{0},M_{1/2},A_{0},\\mu,B,s)\\), a measure on the parameter space. In our case, this prior must represent our degree of belief in each part of the space, in advance of the arrival of _any_ experimental data. There is no single \"right\" way of representing ignorance in a prior3, and so some subjectivity must enter into our choice. We must do our best to ensure that our prior is as \"even handed\" as possible. It must give approximately equal measures to regions of parameter space which seem equally plausible. \"Even handed\" need not mean \"flat\" however. A prior flat in \\(m_{0}\\) is not flat in \\(m_{0}^{2}\\) and very non-flat in \\(\\log m_{0}\\). We must do our best to identify the important (and unimportant) characteristics of each parameter. If the absolute value of a parameter \\(m\\) matters, then flatness in \\(m\\) may be appropriate. If dynamic range in \\(m\\) is more expressive, then flatness in \\(1/m\\) (giving equal weights to each order of magnitude increase in \\(m\\)) may make sense. If only the size of \\(m\\) relative to some related scale \\(M\\) is of importance, then a prior concentrated near the origin in \\(\\log(m/M)\\) space may be more appropriate. The freedoms contained within these, to some degree subjective, choices permit others to generate priors different from our own, and thereby test the degree to which the data or the analysis is compelling. If the final results are sensitive to changes of prior, then more data or a better analysis may be called for.
The core idea that we have chosen to encode in (and which therefore defines) our prior on \(m_{0}\), \(M_{1/2}\), \(A_{0}\), \(\mu\), \(B\), and \(s\) may be summarised as follows. (1) We define regions of parameter space where the parameters all have similar orders of magnitude to be more natural than those where they are vastly different. For example we regard \(m_{0}=10^{1}\) eV, \(M_{1/2}=10^{20}\) eV as unnatural. In effect, we will use the distance measure between each parameter and a joint "supersymmetry scale" \(M_{S}\) to define our prior. (2) We do not wish to impose unity of scales at anything stronger than the order of magnitude level. (3) We do not wish to presuppose any particular scale for \(M_{S}\) itself - that is for the data to decide.
Putting these three principles together, we first define a measure that would seem reasonable _were the supersymmetry scale \(M_{S}\) to be known_. Later we will integrate out this dependence on \(M_{S}\). To begin with we factorise the prior probability density for a given SUSY breaking scale \(M_{S}\):
\[p(m_{0},M_{1/2},A_{0},\mu,B,s|M_{S}) = p(m_{0}|M_{S})\ p(M_{1/2}|M_{S})\ p(A_{0}|M_{S})\ p(\mu|M_{S})\ p(B|M_{S})\ p(s), \tag{5}\]
where we have assumed that the SM experimental inputs do not depend upon \\(M_{S}\\). This factorisation of priors could be changed to specialise for particular models of SUSY breaking. For example, dilaton domination in heterotic string models predicts \\(m_{0}=M_{1/2}=-A_{0}/\\sqrt{3}\\). In that case, one would neglect the separate prior factors for \\(A_{0}\\), \\(M_{1/2}\\) and \\(m_{0}\\) in Eq. 5, leaving only one of them. Since it is our intention to impose unity between \\(m_{0}\\), \\(M_{1/2}\\), \\(A_{0}\\) and \\(M_{S}\\) at the \"order of magnitude\" level, we take a prior probability density
\[p(m_{0}|M_{S})=\frac{1}{\sqrt{2\pi w^{2}}m_{0}}\exp\left(-\frac{1}{2w^{2}}\log^{2}(\frac{m_{0}}{M_{S}})\right). \tag{6}\]
The normalising factor in front of the exponential ensures that \(\int_{0}^{\infty}dm_{0}\ p(m_{0}|M_{S})=1\). \(w\) specifies the width of the logarithmic exponential; Eq. 6 implies that \(m_{0}\) is within a factor \(e^{w}\) of \(M_{S}\) at the "\(1\sigma\) level" (i.e. with probability 68%). We take analogous forms for \(p(M_{1/2}|M_{S})\) and \(p(\mu|M_{S})\), by replacing \(m_{0}\) in Eq. 6 with \(M_{1/2}\) and \(|\mu|\) respectively. Note in particular that our prior \(p(\mu|M_{S})\) favours the superpotential parameter \(\mu\) to be within an order of magnitude of \(M_{S}\) and thus also within an order of magnitude of the soft breaking parameters. This should be required by whichever model is responsible for solving the \(\mu\) problem of the MSSM, for example the Giudice-Masiero mechanism [18]. \(A_{0}\) and \(B\) are allowed to have positive or negative signs and their values may pass through zero, so we chose a different form to Eq. 6 for their prior. However, we still expect that their order of magnitude is not much greater than \(M_{S}\), and the prior probability density
\\[p(A_{0}|M_{S})=\\frac{1}{\\sqrt{2\\pi e^{2w}}M_{S}}\\exp\\left(-\\frac{1}{2(e^{2w}) }\\frac{A_{0}^{2}}{M_{S}^{2}}\\right), \\tag{7}\\]
ensures that \\(|A_{0}|<e^{w}M_{S}\\) at the \\(1\\sigma\\) level. The prior probability density of \\(B\\) is given by Eq. 7 with \\(A_{0}\\to B\\). We don't know \\(M_{S}\\) a priori, so we marginalise over it:
\\[p(m_{0},M_{1/2},A_{0},\\mu,B)=\\int_{0}^{\\infty}dM_{S}\\ p(m_{0},M_ {1/2},A_{0},\\mu,B|M_{S})\\ p(M_{S})\\] \\[= \\frac{1}{(2\\pi)^{5/2}w^{5}m_{0}|\\mu|M_{1/2}}\\int_{0}^{\\infty} \\frac{dM_{S}}{M_{S}^{2}}\\exp\\left[-\\frac{1}{2w^{2}}\\left(\\log^{2}(\\frac{m_{0} }{M_{S}})+\\log^{2}(\\frac{|\\mu|}{M_{S}})+\\right.\\right.\\] \\[\\left.\\left.\\log^{2}(\\frac{M_{1/2}}{M_{S}})+\\frac{w^{2}A_{0}^{2}} {e^{2w}M_{S}^{2}}+\\frac{w^{2}B^{2}}{M_{S}^{2}e^{2w}}\\right)\\right]p(M_{S})\\]
and \\(p(M_{S})\\) is a prior for \\(M_{S}\\) itself, which we take to be \\(p(M_{S})=1/M_{S}\\), i.e. flat in the logarithm of \\(M_{S}\\). The marginalisation over \\(M_{S}\\) amounts to a marginalisation over a family of prior distributions, and as such constitutes a hierarchical Bayesian approach [19]. The integration over several distributions is equivalent to adding smearing due to our uncertainty in the form of the prior. As far as we are aware, the present paper is the first example of the use of hierarchical Bayesian techniques in particle physics. In general, we could also have marginalised over the hyper-parameter \\(w\\), for example using a Gaussian centred on 1, but we find it useful below to examine sensitivity of the posterior probability distribution to \\(w\\). We therefore leave it as an input parameter for the prior distribution. We evaluate the integral in Eq. 8 numerically using an integrator that does not evaluate the integrand at the endpoints, where it is not finite. We have checked that the integral is not sensitive to the endpoints chosen: the change induced by changing the integration range to [10 GeV, \\(10^{16}\\)] GeV is negligible. We refer to Eq. 8 as the \"same order\" prior. To summarise, the posterior probability density function is given by
\\[p(m_{0},M_{1/2},A_{0},\\tan\\beta,s|\\text{data}) \\propto \\left[p(\\text{data}|m_{0},M_{1/2},A_{0},\\mu,B,s)\\times\\right.\\] \\[\\left.r(B,\\mu,\\tan\\beta)\\ p(s)\\ p(m_{0},M_{1/2},A_{0},\\mu,B) \\right]_{M_{Z}=M_{Z}^{\\text{gen}}},\\]where we have written \\([\\dots]_{M_{Z}=M_{Z}^{\\text{gen}}}\\) on the right hand side of above relation, implying that \\(\\mu\\) and \\(B\\) are eliminated in favour of \\(\\tan\\beta\\) and \\(M_{Z}^{\\text{gen}}\\) by Eqs. 1, 4.
We may view the prior factors in Eq. 9 as inverse fine-tuning parameters: where the fine-tuning is high, the priors are small. It is interesting to note that a cancellation of order \(\sim 1/\tan\beta\) is known to be required in order to achieve high values of \(\tan\beta\)[25]. This appears in our Bayesian prior as a result of transforming from the fundamental Higgs potential parameters \(\mu\), \(B\) to \(\tan\beta\) and the empirically preferred value of \(M_{Z}\). We display the various prior factors in Fig. 1 as a function of \(m_{0}\), with all other parameters at the SPS1a CMSSM point [20]: \(M_{1/2}=250\) GeV, \(A_{0}=100\) GeV, \(\tan\beta=10\) and all SM input parameters fixed at their central empirical values. The figure displays the REWSB prior, the REWSB prior+same order priors with \(w=1,2\) (simply marked \(w=1\), \(w=2\) respectively) and the inverse of the fine-tuning parameter defined in Eq. 5. We see that the REWSB prior actually increases with \(m_{0}\) along the chosen line in CMSSM parameter space. This is due to decreasing \(\mu\) in Eq. 4 towards the focus-point4 at high \(m_{0}\)[55]. The conventional fine-tuning measure \(f\) remains roughly constant as a function of \(m_{0}\), whereas the same order priors decrease strongly as a function of \(m_{0}\). This is driven largely by the \(1/m_{0}\) factor in Eq. 8 and the mismatch between large \(m_{0}\) and \(M_{1/2}=250\) GeV, which leads to a stronger suppression for the smaller width \(w=1\) than for \(w=2\).
Footnote 4: The focus-point region is a subset of the hyperbolic branch [53].
The SM input parameters \\(s\\) used are displayed in Table 1. Since they have all been well measured, their priors are set to be Gaussians with central values and widths as listed in the table. We use Ref. [17] for the QED coupling constant \\(\\alpha^{\\overline{MS}}\\), the strong coupling constant \\(\\alpha^{\\overline{MS}}_{s}(M_{Z})\\) and the running mass of the bottom quark \\(m_{b}(m_{b})^{\\overline{MS}}\\), all in the \\(\\overline{MS}\\) renormalisation scheme. A recent Tevatron top mass \\(m_{t}\\) measurement [21] is also employed, although the absolutely latest value has shifted slightly [22]. \\(p(s)\\) is set to be a product of Gaussian probability distributions5\\(p(s)\\propto\\prod_{i}e^{-\\chi_{i}^{2}}\\), where
Footnote 5: Taking the product corresponds to assuming that the measurements are independent.
\\[\\chi_{i}^{2}=\\frac{(c_{i}-p_{i})^{2}}{\\sigma_{i}^{2}} \\tag{10}\\]
for observable \\(i\\). \\(c_{i}\\) denotes the central value of the experimental measurement,
Figure 1: Prior factors \\(p\\) in the CMSSM at SPS1a with varying \\(m_{0}\\). Standard Model inputs have been fixed at their empirically central values.
represents the value of SM input parameter \\(i\\). Finally \\(\\sigma_{i}\\) is the standard error of the measurement.
We display marginalised prior pdfs in Fig. 2 for the REWSB, REWSB+same order (\(w=1\)) and REWSB+same order (\(w=2\)) priors. The plots have 75 bins and the prior pdf has been marginalised over all unseen dimensions. No indirect data has been taken into account in producing the distributions, a feasible electroweak symmetry breaking vacuum being the only constraint. The priors have been obtained by sampling with an MCMC using the Metropolis algorithm [23, 24], taking the average of 10 chains of 100 000 steps each. Figs. 2a,b show that although the same order priors are heavily peaked towards small values of \(m_{0}<500\) GeV and \(M_{1/2}\sim 180\) GeV, the 95% upper limits shown by the vertical arrows are only moderately constrained for \(m_{0}\). \(w=1\) is, unsurprisingly, more peaked at lower mass values. The REWSB histograms, on the other hand, prefer high \(m_{0}\) (due to the lower values of \(\mu\) there) and are quite flat in \(M_{1/2}\). The same order of magnitude requirement is crucial in reducing the preferred scalar masses. The REWSB prior is fairly flat in \(A_{0}\) whereas the \(w=1\), \(w=2\) priors are heavily peaked around zero. The \(M_{1/2}\) same-order priors are more strongly peaked than, for example, \(m_{0}\) because \(M_{1/2}\) is strongly correlated with \(|\mu|\) and so the logarithmic measure of the prior (leading to the factor of \(1/(m_{0}M_{1/2}|\mu|)\) in Eq. 8) becomes more strongly suppressed. \(\tan\beta\) is peaked very strongly toward lower values of the considered range for the REWSB prior due to the \(1/\tan\beta\) suppression, but becomes somewhat diluted when the same order priors are added, as shown in Fig. 2d.
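The Metropolis sampling used to generate these prior-only histograms can be sketched generically as follows (our illustration, not the production code; the target may be any unnormalised density, e.g. the hierarchical prior of Eq. 8):

```python
import numpy as np

def metropolis(log_target, x0, n_steps, step, rng, burn_in=4000):
    """Random-walk Metropolis: Gaussian proposals, standard accept/reject."""
    x = np.array(x0, dtype=float)
    lt = log_target(x)
    samples = []
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(x.size)
        lt_prop = log_target(prop)
        if np.log(rng.random()) < lt_prop - lt:   # accept with min(1, ratio)
            x, lt = prop, lt_prop
        if i >= burn_in:
            samples.append(x.copy())
    return np.array(samples)

rng = np.random.default_rng(0)
chain = metropolis(lambda x: -0.5 * np.sum(x**2), [0.0, 0.0], 50_000, 0.5, rng)
print(chain.mean(axis=0), chain.std(axis=0))      # toy 2-d Gaussian: ~0, ~1
```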
## 3 The Likelihood
Our calculation of the likelihood closely follows Ref. [14]. For completeness, we describe the procedure here. Including the SM inputs in Table 1, eight input parameters are varied simultaneously. The range of CMSSM parameters considered is shown in Table 2. The SM input parameters are allowed to vary within \(4\sigma\) of their central values. Experimental errors on the muon decay constant \(G_{\mu}\) are so small that we fix it to its central value of \(1.16637\times 10^{-5}\) GeV\({}^{-2}\).
In order to calculate predictions for observables from the inputs, the program SOFTSUSY2.0.10[27] is first employed to calculate the MSSM spectrum. Bounds upon the sparticle spectrum have been updated and are based upon the bounds collected in Ref. [11]. Any spectrum violating a 95% limit from negative sparticle searches is assigned a zero likelihood density. Also, we set a zero likelihood for any inconsistent point, e.g. one which does not break electroweak symmetry correctly, or a point that contains tachyonic sparticles. For points that are not ruled out, we then link the MSSM spectrum via the SUSY Les Houches Accord [28] to micrOMEGAs1.3.6[29], which then calculates \(\Omega_{DM}h^{2}\), the branching ratios \(BR(b\to s\gamma)\) and \(BR(B_{s}\to\mu^{+}\mu^{-})\) and the anomalous magnetic moment of the muon \((g-2)_{\mu}\).
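Schematically, the pipeline has the control flow below. Here compute_spectrum, violates_search_limits and predict_observables are hypothetical stand-ins (trivial stubs, so that the sketch runs) for SOFTSUSY, the search bounds of Ref. [11] and micrOMEGAs respectively:

```python
import numpy as np

def compute_spectrum(params):
    """Stand-in: None flags an inconsistent point (no correct electroweak
    symmetry breaking, tachyonic sparticles, ...)."""
    return dict(params) if params.get("m0", 0.0) > 0.0 else None

def violates_search_limits(spectrum):
    """Stand-in for the 95% negative sparticle-search limits."""
    return False

def log_likelihood(params, predict_observables, central, sigma):
    spectrum = compute_spectrum(params)
    if spectrum is None or violates_search_limits(spectrum):
        return -np.inf                          # zero likelihood density
    obs = predict_observables(spectrum)
    return sum(-0.5 * ((obs[k] - central[k]) / sigma[k]) ** 2 for k in central)
```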
Figure 2: Prior probability distributions marginalised to the (a) \(m_{0}\), (b) \(M_{1/2}\), (c) \(A_{0}\) and (d) \(\tan\beta\) directions. 95% upper limits are shown by the labelled arrows except in (c), where the arrows delimit the 2-sided 95% confidence region. All distributions have been binned with 75 equally spaced bins.

The anomalous magnetic moment of the muon \(a_{\mu}\equiv(g-2)_{\mu}/2\) was measured to be \(a_{\mu}^{\rm exp}=(11659208.0\pm 5.8)\times 10^{-10}\)[30]. Its experimental value is in conflict with the SM predicted value \(a_{\mu}^{\rm SM}=(11659180.4\pm 5.1)\times 10^{-10}\) from [31], which comprises the latest QED [32], electroweak [33], and hadronic [31] contributions to \(a_{\mu}^{\rm SM}\). This SM prediction however does not account for \(\tau\) data, which is known to lead to significantly different results for \(a_{\mu}\), implying underlying theoretical difficulties which have not been resolved so far. Restricting to \(e^{+}e^{-}\) data, hence using the numbers given above, we find
\\[\\delta\\frac{(g-2)_{\\mu}}{2}\\equiv\\delta a_{\\mu}\\equiv a_{\\mu}^{\\rm exp}-a_{\\mu}^ {\\rm SM}=(27.6\\pm 7.7)\\times 10^{-10}. \\tag{10}\\]
This excess may be explained by a supersymmetric contribution, the sign of which is identical to the sign of the superpotential \\(\\mu\\) parameter [34]. After obtaining the one-loop MSSM value of \\((g-2)_{\\mu}\\) from micrOMEGAs1.3.6, we add the dominant 2-loop corrections detailed in Refs. [35, 36]. The \\(W\\) boson mass \\(M_{W}\\) and the effective leptonic mixing angle \\(\\sin^{2}\\theta_{w}^{l}\\) are also used in the likelihood. We take the measurements to be [37, 38]
\[M_{W}=80.398\pm 0.027~{}{\rm GeV},\qquad\sin^{2}\theta_{w}^{l}=0.23153\pm 0.000175, \tag{12}\]
where experimental errors and theoretical uncertainties due to missing higher order corrections in the SM [39] and MSSM [40, 41] have been added in quadrature. The most up-to-date MSSM predictions for \(M_{W}\) and \(\sin^{2}\theta_{w}^{l}\)[40] are finally used to compute the corresponding likelihoods. A parameterisation of the LEP2 Higgs search likelihood for various Standard Model Higgs masses is utilised, since the lightest Higgs \(h\) of the CMSSM is very SM-like once the direct search constraints are taken into account. It is smeared with a 2 GeV assumed theoretical uncertainty in the SOFTSUSY2.0.10 prediction of \(m_{h}\), as described in Ref. [14]. The rare bottom quark branching ratio to a strange quark and a photon \(BR(b\to s\gamma)\) is constrained to be [42]
\\[BR(b\\to s\\gamma)=(3.55\\pm 0.38)\\times 10^{-4}, \\tag{12}\\]
obtained by adding the experimental error and the estimated theory error [43] of \(0.3\times 10^{-4}\) in quadrature. The WMAP3 [44] power law \(\Lambda\)-cold dark matter fitted value of the dark matter relic density is
\\[\\Omega\\equiv\\Omega_{DM}h^{2}=0.104^{+0.0073}_{-0.0128} \\tag{13}\\]
In the present paper, we assume that all of the dark matter consists of neutralino lightest supersymmetric particles and we enlarge the errors on \\(\\Omega_{DM}h^{2}\\) to \\(\\pm 0.02\\) in order to incorporate an estimate of higher order uncertainties in its prediction.
We assume that the measurements and thus also the likelihoods extracted from \\(\\Omega\\), \\(BR(b\\to s\\gamma)\\), \\(M_{W}\\), \\(\\sin^{2}\\theta_{w}^{l}\\), \\((g-2)_{\\mu}\\), \\(BR(B_{s}\\rightarrow\\mu^{+}\\mu^{-})\\) are all independent of each other so that the individual likelihood contributions may be multiplied. Observables that have been quoted with uncertainties are assumed to be Gaussian distributed and are characterised by \\(\\chi^{2}\\).
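In code, the independence assumption simply means summing \(\chi^{2}\) terms (i.e. multiplying Gaussian likelihoods). A sketch with the central values and errors quoted in this section (the relic density error already enlarged to \(\pm 0.02\); the limit-type \(BR(B_{s}\to\mu^{+}\mu^{-})\) constraint is omitted from this Gaussian list):

```python
CONSTRAINTS = {                  # observable: (central value, standard error)
    "omega_dm_h2":  (0.104,     0.02),
    "br_b_to_sg":   (3.55e-4,   0.38e-4),
    "m_w":          (80.398,    0.027),
    "sin2_theta_w": (0.23153,   0.000175),
    "delta_a_mu":   (27.6e-10,  7.7e-10),
}

def chi2_total(predictions):
    """Total chi^2; exp(-chi2_total/2) is the combined Gaussian likelihood."""
    return sum(((predictions[k] - c) / s) ** 2
               for k, (c, s) in CONSTRAINTS.items())

# a point predicting every central value exactly gives chi^2 = 0
print(chi2_total({k: c for k, (c, _) in CONSTRAINTS.items()}))
```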
## 4 CMSSM Fits With the New Priors
In order to sample the posterior probability density, we ran 10 independent MCMCs of 500 000 steps each using a newly developed banked [45] Metropolis-Hastings MCMC. The banked method was specifically designed to sample several well isolated or disconnected local maxima, for example maxima in the posterior pdfs of \(\mu>0\) and \(\mu<0\). Previously, we had normalised the two samples via bridge sampling [12], which requires roughly twice as many samples as for one maximum, with additional calculations required after the sampling. Bank sampling, on the other hand, can be performed with roughly an identical number of sampling steps to the case of one maximum and does not require additional normalisation calculations after the sampling. The chance of a bank proposal for the position of the next point in the chain was set to 0.1, meaning that the usual Metropolis proposal had a chance of 0.9. The bank was formed from 10 initial Metropolis MCMC runs with 60 000 steps each and random starting points that were drawn from pdfs flat in the ranges displayed in Tables 1,2. The initial 4000 steps were discarded in order to provide adequate "burn-in" for the MCMCs. We check convergence using the Gelman-Rubin \(\hat{R}\) statistic [48; 10], which provides an estimated upper bound on how much the variance in parameters could be decreased by running for more steps in the chains. Thus, values close to 1 show convergence of the chains. In previous publications, we considered \(\hat{R}<1.05\) to indicate convergence of the chains for every input parameter. We have checked that this is easily satisfied for all of our results.
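For reference, the Gelman-Rubin statistic compares between-chain and within-chain variances; a standard one-parameter implementation (our sketch) is:

```python
import numpy as np

def gelman_rubin(chains):
    """chains: shape (m, n) - m chains of n samples of a single parameter."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return np.sqrt(var_hat / W)                  # ~1 indicates convergence

rng = np.random.default_rng(1)
print(gelman_rubin(rng.standard_normal((10, 100_000))))   # ~1.0
```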
We compare the case of flat \(\tan\beta\) priors to the new prior in Fig. 3. The posterior pdf has been marginalised down to the \(M_{1/2}-m_{0}\) plane and binned into 75\(\times\)75 bins, as with all two-dimensional distributions in the present paper. Both signs of \(\mu\) have been marginalised over, as in all following figures in this paper unless explicitly mentioned. The bins are normalised with respect to the bin with maximum posterior. We identify the usual CMSSM regions of good fit in Fig. 3a. The maximum at the lowest value of \(m_{0}\) corresponds to the stau co-annihilation region [49], where \(\tilde{\tau}_{1}\) and \(\chi^{0}_{1}\) are quasi-mass degenerate and efficiently annihilate in the early universe. This region is associated with \(\tan\beta<40\), as Fig. 3c indicates. The maximum at \(m_{0}\sim 1\) TeV in Fig. 3a is associated with large \(\tan\beta\sim 50\). This region corresponds to the case where the neutralinos efficiently annihilate through \(s-\)channel pseudoscalar Higgs bosons \(A^{0}\) into \(b\bar{b}\) and \(\tau\bar{\tau}\) pairs [50; 51]. The region at low \(M_{1/2}\) and high \(m_{0}\) in Fig. 3a is the \(h^{0}\) pole region [52], where neutralinos annihilate predominantly through an \(s-\)channel of the lightest CP even Higgs \(h^{0}\). In order to evade LEP2 Higgs constraints, this also requires large \(\tan\beta\). The focus point region [54; 55; 56] is the region around \(M_{1/2}\sim 0.5\) TeV and \(m_{0}=2-4\) TeV, where the lightest neutralino has a significant higgsino component, leading to efficient annihilation into gauge boson pairs. This region is somewhat sub-dominant in the fit, but extends through most of the range of \(\tan\beta\) considered.
We see a marked difference between Figs. 3a and 3b. The \(A^{0}\) and \(h^{0}\) pole regions have vanished with the REWSB priors. The \(A^{0}\) pole region is suppressed because the REWSB prior disfavours the required large values of \(\tan\beta\), as shown in Fig. 2d. The \(h^{0}\) pole region is suppressed because the REWSB prior disfavours large values of \(|A_{0}|\) (see Fig. 2c) and large values of \(|A_{0}|/M_{1/2}\). Large values of \(|A_{0}|\) are necessary in this region in order to achieve large stop mass splitting and therefore large corrections to the lightest Higgs mass. Without such corrections, \(h^{0}\) falls foul of LEP2 Higgs mass bounds. The focus-point region has been diminished by the REWSB priors mainly because the large values of \(m_{0}\) required become suppressed, as in Fig. 2a. This suppression comes primarily from the requirement that SUSY breaking and Higgs parameters be roughly of the same order as each other. Figs. 3b,d display only one good-fit region, corresponding to the stau co-annihilation region at low \(m_{0}\). The banked method [45] allows an efficient normalisation of the \(\mu>0\) and \(\mu<0\) branches, both of which are included in the figure.
We now turn to a comparison of the REWSB+same order prior fits. We consider such fits to give much more reliable results than the flat \(\tan\beta\) fits, and a large difference between the \(w=1\) and \(w=2\) fits would provide evidence for a lot of sensitivity to our exact choice of prior. Some readers might consider the flat \(\tan\beta\) priors to be not unreasonable, and those readers could take the large difference between flat priors and the new, more natural ones as a result of uncertainty originating from scarce data.

Figure 3: CMSSM fits marginalised in the unseen dimensions for (a,c) flat \(\tan\beta\) priors, (b,d) the REWSB+same order prior with \(w=1\). Contours showing the 68% and 95% regions are shown in each case. The posterior probability in each bin, normalised to the probability of the maximum bin, is displayed by reference to the colour bar on the right hand side of each plot.
Pdfs of sparticle and Higgs masses coming from the fits are displayed in Figs. 4a-4h, along with 95% upper bounds calculated from the pdfs. The pdfs displayed are for the masses of (a) the lightest CP even Higgs, (b) the CP-odd Higgs, (c) the left-handed squark, (d) the gluino, (e) the lightest neutralino, (f) the lightest chargino, (g) the right-handed selectron and (h) the lightest-stau lightest-neutralino mass splitting respectively. The most striking feature of the figure is that the Higgs and sparticle masses tend to be very light for the REWSB and same order prior, boding well for future collider sparticle searches. This effect is consistent with a preference for smaller \(m_{0}\), \(M_{1/2}\) exhibited by the new priors in Figs. 2a,b. In general, there is remarkably little difference between the two cases \(w=1\) and \(w=2\). This fact is perhaps not so surprising considering that the shape of the priors does not change enormously with \(w\), as Figs. 1, 2 show. The sparticle mass distributions for priors that are flat in \(\tan\beta\) were displayed in Refs. [10; 11; 12] and show a spread up to much higher values of the masses. As we have explained above, we do not believe flat \(\tan\beta\) to be an acceptable prior. Some readers may consider it to be so: such readers may consider our fits to be considerably less robust to changes in the prior than Fig. 4 indicates. Lower values of \(A_{0}\) and \(\tan\beta\) help to make the lightest CP-even Higgs light in the REWSB+same order prior case, shown in Fig. 4a. The mass ordering \(m_{\tilde{q}_{l}}>m_{\chi^{0}_{2}}>m_{\tilde{l}_{R}}>m_{\chi^{0}_{1}}\) allows a "golden channel" decay chain \(\tilde{q}_{l}\to\chi^{0}_{2}\to\tilde{l}_{R}\to\chi^{0}_{1}\). Such a decay chain has been used to provide several important and accurate constraints upon the mass spectrum [60]. In some regions of parameter space, it can also allow spin information on the sparticles involved to be extracted [47]. We may calculate the Bayesian posterior probability of such circumstances by integrating the posterior pdf over the parameter space that allows such a mass ordering. From the MCMC this is simple: we simply count the fraction of sampled points that have such a mass ordering6, as sketched below. The posterior probability of such a mass ordering is high: 0.93 for \(w=1\) and 0.85 for \(w=2\), indicating that analyses using the decay chain are likely to be possible (always assuming the CMSSM hypothesis, of course).

Figure 4: MSSM particle mass pdfs and profile likelihoods: dependence upon the prior in the CMSSM. The vertical arrows display the one-sided 95% upper limits on each mass. There are 75 bins on each abscissa. Histograms marked "profile" are discussed in section 5 and have been multiplied by different dimensionful constants in order to be comparable by eye with the \(w=1,2\) pdfs. The profile 95% confidence level upper limits are calculated by finding the position for which the 1-dimensional profile likelihood has \(2\Delta\ln L=2.71\)[46].
Footnote 6: Other absolute probabilities quoted below are calculated in an analogous manner.
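A minimal sketch of that counting (the sample column names are ours):

```python
import numpy as np

def ordering_probability(samples):
    """Fraction of MCMC samples with m_qL > m_chi2 > m_lR > m_chi1."""
    ok = ((samples["m_qL"] > samples["m_chi2"])
          & (samples["m_chi2"] > samples["m_lR"])
          & (samples["m_lR"] > samples["m_chi1"]))
    return ok.mean()

rng = np.random.default_rng(2)
demo = {"m_qL": rng.uniform(400, 700, 10_000),
        "m_chi2": rng.uniform(150, 250, 10_000),
        "m_lR": rng.uniform(100, 200, 10_000),
        "m_chi1": rng.uniform(80, 120, 10_000)}
print(ordering_probability(demo))        # toy numbers, not the fitted chains
```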
As pointed out in Ref. [10], the flat \(\tan\beta\) posteriors extend out to the assumed upper range taken on \(m_{0}\), and so the flat \(\tan\beta\) pdfs for the scalar masses were artificially cut off at the highest masses displayed. This is no longer the case for the new choice of priors since the regions of large posterior do not reach the chosen ranges of parameters, as shown in Figs. 3b,d. Thus our derived upper bounds on, for instance, \(m_{\tilde{q}_{L}}\) in Fig. 4c and \(m_{\tilde{e}_{R}}\) in Fig. 4g are not dependent upon the \(m_{0}<4\) TeV range chosen. The mass splitting between the lightest stau and the neutralino is displayed in Fig. 4h. The inset shows a blow-up of the quasi-degenerate stau-co-annihilation region and has a different normalisation from the rest of the plot. Since the REWSB+same order prior fit results lie in the co-annihilation region, nearly all of the probability density predicts that \(m_{\tilde{\tau}_{1}}-m_{\chi^{0}_{1}}<20\) GeV. It is a subject of ongoing research how best to verify this at the LHC [57]. In Fig. 4g, the plot has been cut off at a probability \(P\) of 0.1 and the histograms actually extend to 0.70 and 0.68 in the lowest bin for \(w=1\) and \(w=2\) respectively. Similarly, we have cut off Fig. 4h at a probability of 0.05. The fits extend to 0.93 and 0.85 for \(w=1\), \(w=2\) respectively in the lowest bin.
Figure 5: Statistical pull of different observables in CMSSM fits. We show the pdfs for the experimental measurements as well as the posterior pdf of the predicted distribution in \\(w=1\\) and \\(w=2\\) fits. Profile histograms are discussed in section 5 and are multiplied by different dimensionful constants in order to be comparable by eye with the \\(w=1,2\\) pdfs.
We examine the statistical pull of the various observables in Fig. 5. In each case, the likelihood coming from the empirical constraint is shown by the continuous distribution. The histograms show the fitted posterior pdfs depending upon the prior. We have sometimes slightly altered the normalisation of the curves and histograms to allow for clearer viewing. Fig. 5a shows that the \(\Omega_{DM}h^{2}\) pdf is reproduced well by all fits irrespective of which prior distribution is used. This is because the fits are completely dominated by the \(\Omega_{DM}h^{2}\) contribution, since the CMSSM parameter space typically predicts a much larger value than that observed by WMAP [12]. Figs. 5b,5c,5d show that \(BR[b\to s\gamma]\), \(M_{W}\), \(\sin^{2}\theta_{w}^{l}\) are all constrained to be near their central values, with less variance than is required by the empirical constraint. Direct sparticle search limits mean that sparticles cannot be too light and hence cannot contribute strongly to the three observables. The rare decay branching ratio \(BR[B_{s}\to\mu\mu]\) is displayed in Fig. 5e. Both fits are heavily peaked around the SM value of \(10^{-8.5}\); indeed, the most probable bin has been decapitated in the figure for the purposes of clarity, and really should extend up to a probability of around 0.9. The SUSY contribution to \(BR(B_{s}\to\mu\mu)\propto\tan^{6}\beta/M_{SUSY}^{4}\), and so the preference for small \(\tan\beta\) beats the preference for smallish sparticle masses \(\sim O(M_{SUSY})\) in the new fits. In all of Figs. 5a-e, changing the width of the priors from 1 to 2 has negligible effect on the results. The exception to this trend is \(\delta a_{\mu}\), as shown in Fig. 5f. \(\delta a_{\mu}\) has a shoulder around zero for \(w=2\), corresponding to a small amount of posterior probability density at high scalar masses, clearly visible from Fig. 4g. Such high masses suppress the loops responsible for the SUSY contribution to \((g-2)_{\mu}\). \(\delta a_{\mu}\) is pulled to lower values than the empirically central value by direct sparticle limits and the preference for values of \(\tan\beta\) that are not too large. The almost negligible portion of the graph for which \(\delta a_{\mu}<0\) corresponds to \(\mu<0\) in the CMSSM. \((g-2)_{\mu}\) has severely suppressed the likelihood, and therefore the posterior, in this portion of parameter space. For flat \(\tan\beta\) priors, and \(\delta a_{\mu}=(22\pm 10)\times 10^{-10}\), we had previously estimated that the ratio of integrated posterior pdfs between \(\mu<0\) and \(\mu>0\) was \(0.07-0.16\). For the new priors, where sparticles are forced to be lighter, their larger contribution to \(\delta a_{\mu}\) further suppresses the \(\mu<0\) posterior pdf. From the samples (the counting procedure is sketched below), we estimate7 \(P(\mu<0)/P(\mu>0)=0.001\pm 0.002\) for \(w=1\) and \(0.003\pm 0.003\) for \(w=2\), for \(\delta a_{\mu}=(27.6\pm 7.7)\times 10^{-10}\). Thus, while the probabilities are not accurately determined, we know that they are small enough to neglect the possibility of \(\mu<0\).
Footnote 7: These numbers come from the mean and standard deviation of 10 chains, each of which is considered to deliver an independent estimate.
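The counting behind footnote 7 amounts to the following sketch (ours); each chain yields an independent estimate of the ratio of \(\mu<0\) to \(\mu>0\) sample fractions:

```python
import numpy as np

def sign_ratio(chains_mu):
    """chains_mu: list of 1-d arrays of sampled mu values, one per chain."""
    ratios = [np.mean(mu < 0) / max(np.mean(mu > 0), 1e-12) for mu in chains_mu]
    return np.mean(ratios), np.std(ratios, ddof=1)

rng = np.random.default_rng(3)
chains = [rng.choice([-350.0, 350.0], p=[0.002, 0.998], size=500_000)
          for _ in range(10)]
print(sign_ratio(chains))            # ~ (0.002, small chain-to-chain scatter)
```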
## 5 Profile Likelihoods
Since, for a flat prior, Eq. 1 implies that the posterior is proportional to the likelihood in a Bayesian analysis, one can view the distributions resulting from the MCMC scan as a "likelihood map" [10]. If one marginalises in the unseen dimensions in order to produce a one or two-dimensional plot, one either interprets the resulting distribution probabilistically in terms of the posterior, or alternatively as a way of viewing the full \(n\)-dimensional likelihood map, but without a probabilistic interpretation in terms of confidence limits, or credible intervals. Instead, frequentists often eliminate unwanted parameters (nuisance parameters) by maximisation instead of marginalisation. The likelihood function of the reduced set of parameters, with the unwanted parameters at their conditional maximum likelihood estimates, is called the profile likelihood [58]. Approximate confidence limits can be set by finding contours of likelihood that differ from the best-fit likelihood by some amount. This amount depends upon the number of "seen dimensions" and the confidence level, just as in a standard \(\chi^{2}\) fit [46].
While we believe that dependence on priors actually tells us something useful about the robustness of the fit, we are also aware that many high energy physicists find the dependence upon a subjective measure distasteful, and would be happier with a frequentist interpretation. When the fits are robust, i.e. there is plentiful accurate data, we expect the Bayesian and frequentist methods to identify similar regions of parameter space in any fits. We are not in such a situation with our CMSSM fits, as we have shown in previous sections, and so we provide the profile likelihood here for completeness.
We can use the scanned information from the MCMC chains to extract the profile likelihood very easily. Let us suppose, for instance, that we wish to extract the profile in \(m_{0}-M_{1/2}\) space. We therefore bin the chains obtained in \(m_{0}-M_{1/2}\) as before. We find the maximum likelihood in the chain for each bin and simply plot that. The 95% confidence level region is then delimited by the likelihood contour at a value \(2\Delta\ln L=5.99\)[46], where \(\Delta\ln L=\ln L_{max}-\ln L\). The profile likelihoods in the \(m_{0}-M_{1/2}\) and \(m_{0}-\tan\beta\) planes are shown in Fig. 6.
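The binning-and-maximising step takes only a few lines; in this sketch the bin ranges are illustrative:

```python
import numpy as np

def profile_2d(m0, M12, logL, bins=75, x_range=(0.0, 4000.0), y_range=(0.0, 2000.0)):
    ix = np.clip(((m0 - x_range[0]) / (x_range[1] - x_range[0]) * bins)
                 .astype(int), 0, bins - 1)
    iy = np.clip(((M12 - y_range[0]) / (y_range[1] - y_range[0]) * bins)
                 .astype(int), 0, bins - 1)
    prof = np.full((bins, bins), -np.inf)
    np.maximum.at(prof, (ix, iy), logL)          # best log-likelihood per bin
    region95 = 2 * (prof.max() - prof) < 5.99    # 2*Delta(ln L) cut, 2 dimensions
    return prof, region95

rng = np.random.default_rng(4)
m0, M12 = rng.uniform(0, 4000, 10_000), rng.uniform(0, 2000, 10_000)
logL = -0.5 * (((m0 - 100) / 300) ** 2 + ((M12 - 300) / 200) ** 2)  # toy likelihood
prof, region95 = profile_2d(m0, M12, logL)
print(region95.sum(), "bins inside the 95% region")
```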
Comparing Figs. 6a and 3a, we see that the profile likelihood gives similar information to the Bayesian analysis with flat priors. The main difference is that the profile likelihood's confidence limit only extends out to \((M_{1/2},m_{0})<(1.0,2)\) TeV, whereas for the Bayesian flat-prior analysis, values up to \((M_{1/2},m_{0})<(1.5,4)\) TeV are viable. Comparing Figs. 6b and 3c, we again see similar constraints, except that the tail at high \(\tan\beta\) up to larger values of \(m_{0}>2\) TeV has been suppressed in the profile. From the difference we learn the following facts: in this high \(\tan\beta\)-high \(m_{0}\) tail, the fit to data is less good than in other regions of parameter space. However, it has a relatively large volume in unseen dimensions of parameter space, which enhances the posterior probability in Fig. 3c. The difference between the two plots is therefore a good measure of such a so-called "volume effect". In Refs. [11; 13], an average-\(\chi^{2}\) estimate was constructed in order to identify such effects. We find the profile likelihood to be easier to interpret, however. It also has the added bonus of allowing a frequentist interpretation.
We show the profile likelihoods of the various relevant masses in Fig. 4. There is a general tendency for all of the masses to spread to somewhat heavier values than for the \(w=1,2\) same order+REWSB priors. We remind the reader that the profile likelihood histograms are not pdfs. In the figure, they have been multiplied by dimensionful constants that make them comparable by eye to the Bayesian posteriors on the plot. The gluino mass shows the most marked difference: it appears that higher gluino masses are disfavoured by volume effects in the Bayesian analyses. However, while the profiles differ from the Bayesian analyses to a much larger degree than the \(w=1\) or \(w=2\) prior fits differ from each other, they are not wildly different to the Bayesian analyses. The Higgs mass distributions look particularly similar. There is a qualitative difference in Figs. 4g,h, where \(m_{\tilde{e}_{R}}\) and \(m_{\tilde{\tau}_{1}}-m_{\chi^{0}_{1}}\) have a non-negligible likelihood up to 1 TeV, unlike the posterior probabilities.
Figs. 5a-f show the profile likelihoods of the pull of various observables. We see that \\(\\Omega_{DM}h^{2}\\) shows a negligible difference to the posteriors. This is because the dark matter relic density constraint dominates the fit and determines the shape and volume of the viable parameter space. Most of the profiles are similar to the posteriors in the figure except for Fig. 5e, where the likelihood extends out to much higher values of the branching ratio of \\(B_{s}\\to\\mu\\mu\\). These values correspond in Fig. 6b to high \\(\\tan\\beta\\) but low \\(m_{0}\\) points. The posteriors for high \\(BR(B_{s}\\to\\mu\\mu)\\propto 1/M_{SUSY}^{2}\\) are suppressed because of the large volumes at high \\(m_{0}\\) (and hence at high \\(M_{SUSY}\\), where \\(BR(B_{s}\\to\\mu\\mu)\\) approaches the Standard Model limit due to decoupling). In Fig. 5c, we see enhanced statistical fluctuations in the upper tail of the profile likelihood of \\(M_{W}\\), presumably due to a small number of sampled points there. These fluctuations could be reduced with further running of the MCMCs, however.
Figure 6: Two dimensional profile likelihoods in the (a) \\(m_{0}-M_{1/2}\\) plane, (b) \\(m_{0}-\\tan\\beta\\) plane. There are 75 bins along each direction. The inner (outer) contours show the 68% and 95% confidence level regions respectively.
## 6 LHC SUSY Cross Sections
In order to calculate pdfs for the expected CMSSM SUSY production cross-sections at the LHC, we use HERWIG6.500[59] with the default parton distribution functions. We calculate the total cross-section of the production of two sparticles with transverse momentum \(p_{T}>100\) GeV. We take the fitted probability distributions of the previous section with the REWSB+same order priors and use HERWIG6.500 to calculate cross-sections for (a) strong SUSY production (i.e. squark and gluino production), (b) inclusive weak gaugino production (i.e. a neutralino or chargino in association with another neutralino, a chargino, a gluino or a squark) and (c) 2-slepton production. No attempt is made here to fold in experimental efficiencies or the branching ratios of the subsequent decays into final state products. The total cross-section times assumed integrated luminosity therefore serves as an upper bound on the number of events expected at the LHC in the different channels (a)-(c). Some analyses give a few percent for efficiencies, but for specific cases of more difficult signatures, the efficiencies can be tiny.
We show the one dimensional pdfs for the various SUSY production cross-sections in Fig. 7a. We should bear in mind that the LHC is expected to deliver 10 fb\({}^{-1}\) of luminosity per year in "low-luminosity" mode, whereas afterward this will increase to 30 fb\({}^{-1}\). Several years running at \(\log_{10}\sigma\)/fb\(=0\) therefore corresponds to of order a hundred production events for 100 fb\({}^{-1}\). \(\log_{10}\sigma\)/fb\(=0\) then gives some kind of rough limit for what might be observable at the LHC, once experimental efficiencies and acceptances are factored in. Luckily, we see that strong production and inclusive weak gaugino production are always above this limit, providing the optimistic conclusion that SUSY will be discovered at the LHC (provided, as always in the present paper, that the CMSSM hypothesis is correct and that the reader accepts our proposal for the prior pdfs). The 95% lower limits on the total direct production cross-sections are 360 fb, 90 fb and 0.01 fb for strongly interacting sparticle, inclusive weak gaugino and slepton production respectively. There is therefore a small chance that direct slepton production may not be at observable rates. The posterior probability that \(\sigma(pp\to\tilde{l}^{+}\tilde{l}^{-})<1\) fb is 0.063. Even in the event that direct slepton production is at too slow a rate to be observable, it is possible that sleptons can be observed and measured by the decays of other particles into them [60]. The pdfs of total SUSY production cross-sections for \(w=2\) are almost identical to those shown in the figure. The main difference is in the total direct slepton production cross section, where the small bump at \(\sigma\sim 10^{-2}\) fb is somewhat enlarged. It has the effect of placing the 95% lower bound on the slepton production cross-section at 4.8\(\times 10^{-4}\) fb. For \(w=2\), the chance of the di-slepton production cross-section being less than 1 fb is 0.15. The strong and weak gaugino production cross-sections have 95% lower bounds of 570 and 90 fb respectively for \(w=2\).
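From equally weighted posterior samples, a one-sided 95% lower limit on a cross-section is just the 5th percentile; a toy sketch:

```python
import numpy as np

def lower_limit_95(sigma_samples_fb):
    """One-sided 95% lower limit from equally weighted samples."""
    return np.percentile(sigma_samples_fb, 5.0)

rng = np.random.default_rng(5)
toy_sigma = 10 ** rng.normal(3.0, 0.4, 100_000)   # toy log-normal sigma (fb)
print(lower_limit_95(toy_sigma))
```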
We examine correlations between the various different cross-sections in Fig. 7b. \(\log_{10}\sigma_{\rm slepton}/{\rm fb}<0\) corresponds to the focus point region, where scalar masses are large. This region will probably directly produce too few sleptons to be observed at the LHC and so will not be useful there for discriminating the CMSSM focus point region from the co-annihilation region unless there is a significant luminosity upgrade [61].
The profile likelihoods of SUSY production cross-sections are shown in Fig. 8. In the figure, "strong" refers to squark/gluino production, "weak" to inclusive weak gaugino production and "slepton" to direct slepton production. By comparison to Fig. 7a, we see that the profile likelihoods generally prefer somewhat larger SUSY production cross-sections than the Bayesian analysis with REWSB+same order \(w=1\) priors. The 95% one-sided lower confidence level bounds upon them are 2000 fb for sparton production, 300 fb for weak gaugino production and 80 fb for slepton production. This last bound is particularly different from the Bayesian analysis since there the small probability for the focus-point regime, evidenced by the low bump to the left hand side of Fig. 7a, was only pushed just above an integrated posterior pdf of 5% by volume effects.
## 7 Conclusion
This analysis constitutes the first use in a serious physics context of a new \"banked\" MCMC proposal function [45]. This new proposal function has allowed us to sample simultaneously, efficiently and correctly from both signs of \\(\\mu\\). The resulting sampling passed convergence tests and therefore gave reliable estimates of LHC SUSY cross-section pdfs. MCMCs have also been used to determine the impact of potential future collider data upon the MSSM [62, 63, 13]. The development of tools such as the banked proposal MCMC constitutes a goal at least as important as the interesting physics results derived here. In case they may be of use for future work, we have placed the samples obtained by the banked MCMC on the internet, with instructions on how to read them, at the following URL:
http://users.hepforge.org/~allanach/benchmarks/kismet.html
Figure 8: SUSY production cross-section profile likelihoods. One-sided 95% lower confidence level limits are shown as calculated from these histograms by the vertical arrows.

We argued that prior probability distributions that are flat in \(\tan\beta\) are less natural than those that are flat in the more fundamental Higgs potential parameters \(\mu\) and \(B\) of the MSSM. We have derived a more natural prior distribution in the form of Eq. 8, which is originally flat in \(\mu\), \(B\) and also encodes our prejudice that \(\mu\) and the SUSY breaking parameters are "of the same order". There is actually a marginalisation over a family of priors, and as such our analysis uses a hierarchical Bayesian prior distribution. It should be noted that this prior pdf can replace definitions of fine-tuning in the MSSM Higgs sector. Its use in Bayesian statistics is well-defined, and we have examined its effect on Bayesian CMSSM analysis. The main effect is to strongly disfavour the Higgs-pole and focus point dark matter annihilation regions of CMSSM parameter space. The sparticle masses are then predicted to be probably lighter than previously thought as a result of the new prior. There is little difference in the results when one changes the widths of the same order pdfs, but the results are very different to previous ones in the literature where flat priors in \(\tan\beta\) were examined. If one rejects the prior flat in the SUSY breaking parameters, as we have advocated here, our results appear rather robust with respect to changes in the prior. However, for readers who find the same order priors too strong, one can view the difference between the flat prior results and those using the same order priors as a result of uncertainty originating from scarce data. This dependence upon priors does indicate the need for caution when interpreting our results; constraining data are currently too scarce to render the posterior pdfs approximately independent of the prior assumption. We feel that the sensitivity to priors must be studied, and find the large dependence on priors consistent with something that is intuitively obvious [64]: that a few pieces of indirect data are not sufficient to robustly constrain a complex model of 8 parameters. The frequentist analysis does not depend on any prior, but it also does not allow us to inject reasonable assumptions about the naturalness of the theory. A comparison between the likelihood profile and posteriors is ideal because it contains information about volume effects in the Bayesian analyses. The frequentist confidence levels on MSSM particle masses are different to Bayesian credible intervals, but within the same ball-park as each other. Thus we may infer some rough limits, but to be conservative one might take the _least constraining_ upper bound by any of the different methods. The lighter sparticles from the new priors result in more optimistic total SUSY cross-section predictions for the LHC. It would be interesting to see the footprints of other SUSY breaking models to see whether the correlations between different cross-sections are a good discriminator [65].
## Appendix A Comparison With Previous Literature
The flat-prior results may at first sight seem to be in contradiction with the analysis of Ellis _et al_[7], where a preference for light SUSY was found from quite similar global fits to those in the present paper. They also fit \(M_{W},\sin^{2}\theta_{w}^{l}(\text{eff})\) as well as \((g-2)_{\mu}\), while using the relic density of dark matter as a constraint. In their paper, Ellis _et al_ fixed \(\tan\beta\) and all Standard Model inputs at their central experimental values. For every value of \(M_{1/2}\) and \(A_{0}\) scanned, \(m_{0}\) is adjusted until the central WMAP3 value of \(\Omega_{DM}h^{2}\) results. The smearing due to the finite error on \(\Omega_{DM}h^{2}\) is very small and so it is argued that this procedure well approximates the full constraints upon parameter space. We display the resulting constraint on the \(A_{0}-M_{1/2}\) plane for \(\tan\beta=10\) and \(\mu>0\) in Fig. 9a. The partial ellipses show the authors' claimed 68% and 90% confidence level limits calculated with \(\Delta\chi^{2}=2.30,4.61\)[7] from the best-fit point, marked by a cross. Actually, since the confidence level regions are constrained within a wedge-shape in the figure, the 68% (90%) limits should not necessarily correspond to \(\Delta\chi^{2}=2.30(4.61)\) respectively. The regions shown on the figure should therefore be re-calculated, by determining what probability distribution \(\Delta\chi^{2}\) has when trapped in such a wedge.
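The wedge effect is easy to demonstrate with a toy Monte Carlo. Assuming, purely for illustration, a quadrant-shaped wedge and Gaussian errors with the truth at the wedge vertex, the 68% and 90% quantiles of \(\Delta\chi^{2}\) come out well below the unconstrained two-parameter values 2.30 and 4.61:

```python
import numpy as np

rng = np.random.default_rng(6)
x, y = rng.standard_normal(200_000), rng.standard_normal(200_000)
# constrained best fit: parameters restricted to x >= 0, y >= 0, so each
# coordinate contributes to Delta-chi2 only when its measurement is positive
dchi2 = np.where(x > 0, x**2, 0.0) + np.where(y > 0, y**2, 0.0)
print(np.quantile(dchi2, [0.68, 0.90]))   # ~[1.0, 3.0], below 2.30 and 4.61
```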
In order to emulate these results, we perform a similar but Bayesian analysis with the MCMC algorithm: all Standard Model inputs are fixed at their central empirical values, \(\tan\beta=10\) is fixed and \(m_{0}\), \(A_{0}\), \(M_{1/2}\) are allowed to vary in the MCMC algorithm in order to fit the combined posterior probability of dark matter plus other measurements. For this comparison, we choose flat priors in \(m_{0}<1\) TeV, \(M_{1/2}<1\) TeV and \(-3\) TeV\(<A_{0}<3\) TeV. The likelihood is calculated as in section 3. The main conclusion from Fig. 9 is that the two results are similar. If the correct relationship between \(\Delta\chi^{2}\) and confidence level were used in Fig. 9a, the confidence level region could extend out to higher values of \(M_{1/2}\). We should note strictly that, being Bayesian confidence regions as compared to frequentist ones, we do not _exactly_ compare like with like in Figs. 9a,b, but we do expect roughly similar confidence regions in the two cases. When we perform a similar fit with a larger allowed range of \(m_{0}<4\) TeV, Fig. 9b deforms due to contributions from the \(h^{0}\) and focus-point regions, but the preference for \(M_{1/2}<800\) GeV remains. We conclude from this that Ellis _et al_ did not scan larger values of \(m_{0}\) where the focus point regime resides. The procedure of Ellis _et al_ is not suited to including the \(h^{0}\) and focus-point regions, since then there is no unique solution of \(m_{0}\) which provides the central value of \(\Omega_{DM}h^{2}\). If we then additionally include smearing due to \(\tan\beta\) in Fig. 9b with a flat prior, the \(A^{0}\)-pole region extends the region of valid \(M_{1/2}\) out to higher values \(>1\) TeV. Allowing variations of Standard Model input parameters produces further smearing in the fits until, finally, Fig. 3a is obtained.

Figure 9: (a) Reduced parameter space global fit from Ref. [7] for \(\tan\beta=10\), \(\mu>0\). In the plot, \(A_{0}\) has a relative minus sign with respect to the definition used in the present paper; (b) our version of the same fit, marginalised over \(m_{0}\). 68% and 90% confidence level regions are shown.
## Acknowledgments
This work has been partially supported by STFC. We thank R Rattazzi for discussions which led to the re-examination of priors. We also thank the Cambridge SUSY working group and T Plehn for helpful discussions and observations. The computational work has been performed using the Cambridge eScience CAMGRID computing facility, with the invaluable help of M Calleja.
## References
* [1] R. Arnowitt, A. Chamseddine and P. Nath, _Locally supersymmetric grand unification_, Phys. Rev. Lett. **49** (1982) 970;
* [2] R. G. Roberts and L. Roszkowski, _Implications for minimal supersymmetry from grand unification and the neutrino relic abundance_, Phys. Lett. B **309** (1993) 329 [arXiv:hep-ph/9301267].
* [3] G. L. Kane, C. F. Kolda, L. Roszkowski and J. D. Wells, _Study of constrained minimal supersymmetry_, Phys. Rev. D **49** (1994) 6173 [arXiv:hep-ph/9312272].
* [4] J. R. Ellis, K. A. Olive, Y. Santoso and V. C. Spanos, _Likelihood analysis of the CMSSM parameter space_, Phys. Rev. D **69** (2004) 095004 [arXiv:hep-ph/0310356].
* [5] S. Profumo and C. E. Yaguna, _A statistical analysis of supersymmetric dark matter in the MSSM after WMAP_, Phys. Rev. D **70** (2004) 095004 [arXiv:hep-ph/0407036].
* [6] E. A. Baltz and P. Gondolo, _Markov chain Monte Carlo exploration of minimal supergravity with implications for dark matter_, JHEP **0410** (2004) 052 [arXiv:hep-ph/0407039].
* [7] J. R. Ellis, S. Heinemeyer, K. A. Olive and G. Weiglein, _Indirect sensitivities to the scale of supersymmetry_, JHEP **0502** (2005) 013 [arXiv:hep-ph/0411216].
* [8] L. S. Stark, P. Hafliger, A. Biland and F. Pauss, _New allowed mSUGRA parameter space from variations of the trilinear scalar coupling A0_, JHEP **0508** (2005) 059 [arXiv:hep-ph/0502197].
* [9] J. P. Conlon and F. Quevedo, _Gaugino and scalar masses in the landscape_, JHEP **0606** (2006) 029 [arXiv:hep-th/0605141]; L. E. Ibanez, _The fluxed MSSM_, Phys. Rev. D **71** (2005) 055005 [arXiv:hep-ph/0408064]; A. Brignole, L. E. Ibanez and C. Munoz, _Towards a theory of soft terms for the supersymmetric Standard Model_, Nucl. Phys. B **422** (1994) 125, Erratum-ibid. B **436** (1995) 747 [arXiv:hep-ph/9308271].
* [10] B. C. Allanach and C. G. Lester, _Multi-dimensional mSUGRA likelihood maps_, Phys. Rev. D **73** (2006) 015013 [arXiv:hep-ph/0507283].
* [11] R. R. de Austri, R. Trotta and L. Roszkowski, _A Markov chain Monte Carlo analysis of the CMSSM_, JHEP **0605** (2006) 002 [arXiv:hep-ph/0602028].
* [12] B. C. Allanach, C. G. Lester and A. M. Weber, _The dark side of mSUGRA_, JHEP **12** (2006) 065 [arXiv:hep-ph/0609295]
* [13] L. Roszkowski, R. R. de Austri and R. Trotta, _On the detectability of the CMSSM light Higgs boson at the Tevatron_, arXiv:hep-ph/0611173.
* [14] B. C. Allanach, _Naturalness priors and fits to the constrained minimal supersymmetric standard model_, Phys. Lett. B **635** (2006) 123 [arXiv:hep-ph/0601089].
* [15] G. F. Giudice and R. Rattazzi, _Living dangerously with low-energy supersymmetry_, Nucl. Phys. B **757** (2006) 19 [arXiv:hep-ph/0606105].
* [16] D. M. Pierce, J. A. Bagger, K. T. Matchev and R.-J. Zhang, _Precision corrections in the minimal supersymmetric standard model_, Nucl. Phys. B **491** (1997) 3, [arXiv:hep-ph/9606211].
* [17] W.-M. Yao _et al_, _The review of particle physics_, J. Phys. G **33** (2006) 1, http://pdg.lbl.gov/
* [18] G. F. Giudice and A. Masiero, _A natural solution to the \\(\\mu\\) problem in supergravity theories_, Phys. Lett. B **206** (1988) 480.
* [19] G. E. P. Box and G. C. Tiao, _Bayesian inference in statistical analysis_, Addison Wesley (1973).
* [20] B. C. Allanach _et al._, _The Snowmass points and slopes: Benchmarks for SUSY searches, in Proc. of the APS/DPF/DPB Summer Study on the Future of Particle Physics (Snowmass 2001), 30 Jun - 21 Jul 2001, pp P125_ [arXiv:hep-ph/0202233].
* [21] The Tevatron Electroweak Working Group, _Combination of CDF and D0 results on the mass of the top quark_, [arXiv:hep-ex/0608032].
* [22] The Tevatron Electroweak Working Group, _A combination of CDF and D0 results on the mass of the top quark_, [arXiv:hep-ex/0703034].
* [23] N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller and E. Teller, _Equation of State Calculations by Fast Computing Machines_, Journal of Chemical Physics **21** (1953) 1087-1092.
* [24] D. MacKay, _Information Theory, Inference, and Learning Algorithms_. Cambridge University Press, 2003.
* [25] A. E. Nelson and L. Randall, _Naturally large \(\tan\beta\)_, Phys. Lett. B **316** (1993) 516 [arXiv:hep-ph/9308277]; R. Rattazzi and U. Sarid, _The unified minimal supersymmetric model with large Yukawa couplings_, Phys. Rev. D **53** (1996) 1553 [arXiv:hep-ph/9505428].
* [26] R. Barbieri and G. F. Giudice, _Upper bounds on supersymmetric particle masses_, Nucl. Phys. B **306** (1988) 63; B. de Carlos and J. A. Casas, _One loop analysis of the electroweak breaking in supersymmetric models and the fine tuning problem_, Phys. Lett. B **309** (1993) 320, [arXiv:hep-ph/9303291]; R. Barbieri and A. Strumia, _About the fine-tuning price of LEP_, Phys. Lett. B **433** (1998) 63, [arXiv:hep-ph/9801353]; L. Giusti, A. Romanino and A. Strumia, _Natural ranges of supersymmetric signals_, Nucl. Phys. B **550** (1999) 3 [arXiv:hep-ph/9811386]; L. E. Ibanez and G. G. Ross, _Supersymmetric Higgs and radiative electroweak breaking_, arXiv:hep-ph/0702046.
* [27] B.C. Allanach, _SOFTSUSY: A program for calculating supersymmetric spectra_, Comput. Phys. Commun. **143** (2002) 305, [arXiv:hep-ph/0104145].
* [28] P. Skands _et al_, _SUSY Les Houches accord: Interfacing SUSY spectrum calculators, decay packages, and event generators_, JHEP **0407** (2004) 036, [arXiv:hep-ph/0311123].
* [29] G. Belanger, F. Boudjema, A. Pukhov and A. Semenov, _micrOMEGAs: Version 1.3_, Comput. Phys. Commun. **174** (2006) 577 [arXiv:hep-ph/0405253]; G. Belanger, F. Boudjema, A. Pukhov and A. Semenov, _micrOMEGAs: A program for calculating the relic density in the MSSM_, Comput. Phys. Commun. **149** (2002) 103 [arXiv:hep-ph/0112278].
* [30] G.W. Bennett _et al._ [Muon g-2 collaboration], _Final report of the muon E821 anomalous magnetic moment measurement at BNL_, Phys. Rev. D **73** (2006) 072003 [arXiv:hep-ex/0602035].
* [31] K. Hagiwara, A.D. Martin, D. Nomura, and T. Teubner, _Improved predictions for g-2 of the muon and alpha(QED)(M(Z)**2)_, (2006) [arXiv:hep-ph/0611102].
* [32] M. Passera, _Precise mass-dependent QED contributions to leptonic g-2 at order alpha**2 and alpha**3_, Phys. Rev. D **75** (2007) 013002, [arXiv:hep-ph/0606174].
* [33] A. Czarnecki, W.J. Marciano, and A. Vainshtein, _Refinements in electroweak contributions to the muon anomalous magnetic moment_, Phys. Rev. D **67** (2003) 073006 [arXiv:hep-ph/0212229]
* [34] U. Chattopadhyay and P. Nath, _Probing supergravity grand unification in the Brookhaven g-2 experiment_, Phys. Rev. D **53** (1996) 1648, [arXiv:hep-ph/9507386].
* [35] S. Heinemeyer, D. Stockinger and G. Weiglein, _Electroweak and supersymmetric two-loop corrections to (g-2)(mu)_, Nucl. Phys. **B690** (2004) 103, [arXiv:hep-ph/0405255]; S. Heinemeyer, D. Stockinger and G. Weiglein, _Two-loop SUSY corrections to the anomalous magnetic moment of the muon_, Nucl. Phys. B **690** (2004) 62 [arXiv:hep-ph/0312264].
* [36] D. Stockinger, _The muon magnetic moment and supersymmetry_, [arXiv:hep-ph/0609168].
* [37] Tevatron Electroweak Working Group & The CDF Collaboration, _Winter 2007 Conference Note_, http://fcdfwww.fnal.gov/physics/ewk/2007/wmass/.
* [38] [The ALEPH, DELPHI, L3, OPAL, SLD Collaborations, the LEP Electroweak Working Group, the SLD Electroweak and Heavy Flavour Groups], _Precision electroweak measurements on the Z resonance_, Phys. Rept. **427** (2006) 257 [arXiv:hep-ex/0509008]; [The ALEPH, DELPHI, L3 and OPAL Collaborations, the LEP Electroweak Working Group], _A combination of preliminary electroweak measurements and constraints on the standard model_, [arXiv:hep-ex/0511027].
* [39] M. Awramik, M. Czakon, A. Freitas, and G. Weiglein, _Precise prediction for the W-boson mass in the standard model_, Phys. Rev. D **69** (2004) 053006 [arXiv:hep-ph/0311148].
* [40] The code is forthcoming in a publication by A. M. Weber et al.; S. Heinemeyer, W. Hollik, D. Stockinger, A. M. Weber and G. Weiglein, _Precise prediction for M(W) in the MSSM_, JHEP **08** (2006) 052, [arXiv:hep-ph/0604147].
* [41] J. Haestier, S. Heinemeyer, D. Stockinger and G. Weiglein, _Electroweak precision observables: Two-loop Yukawa corrections of supersymmetric particles_, _JHEP_**0512** (2005) 027, [arXiv:hep-ph/0508139].
* [42] E. Barberio _et al_ [Heavy Flavour Averaging Group], _Averages of b-hadron Properties at the End of 2005_, arXiv:hep-ex/0603003.
* [43] P. Gambino, U. Haisch and M. Misiak, _Determining the sign of the \\(b\\to s\\gamma\\) amplitude_, Phys. Rev. Lett. **94** (2005) 061803 [arXiv:hep-ph/0410155].
* [44] D. N. Spergel _et al._, _Wilkinson Microwave Anisotropy Probe (WMAP) three year results: Implications for cosmology_, arXiv:astro-ph/0603449.
* [45] B. C. Allanach and C. G. Lester, _Sampling using a 'bank' of clues_, arXiv:0705.0486 [hep-ph]
* [46] see, for example, section 7.3 of F. James, _Minuit, function minimization and error analysis_, CERN Program Library Long Writeup D506.
* [47] A. J. Barr, _Using lepton charge asymmetry to investigate the spin of supersymmetric particles at the LHC_, Phys. Lett. B **596** (2004) 205 [arXiv:hep-ph/0405052].
* [48] A. Gelman and D. Rubin, _Inference from Iterative Simulation Using Multiple Sequences_, Stat. Sci. **7** (1992) 457.
* [49] K. Griest and D. Seckel, _Three exceptions in the calculation of relic abundances_, Phys. Rev. **D43** (1991) 3191-3203.
* [50] M. Drees and M. M. Nojiri, _The Neutralino Relic Density in Minimal N=1 Supergravity_, Phys. Rev. **D47** (1993) 376-408, [arXiv:hep-ph/9207234].
* [51] R. Arnowitt and P. Nath, _Cosmological Constraints and SU(5) Supergravity Grand Unification_, Phys. Lett. **B299** (1993) 58-63, [arXiv:hep-ph/9302317].
* [52] A. Djouadi, M. Drees and J. L. Kneur, _Neutralino dark matter in mSUGRA: Reopening the light Higgs pole window_, Phys. Lett. B **624** (2005) 60 [arXiv:hep-ph/0504090].
* [53] K. L. Chan, U. Chattopadhyay and P. Nath, _Naturalness, Weak Scale Supersymmetry and the Prospect for the Observation of Supersymmetry at the Tevatron and at the LHC_, Phys. Rev. **D58** (1998) 096004, [arXiv:hep-ph/9710473]; A. B. Lahanas, N. E. Mavromatos and D. V. Nanopoulos, _WMAPing the universe: Supersymmetry, dark matter, dark energy, proton decay and collider physics_, Int. J. Mod. Phys. D **12**, 1529 (2003) [arXiv:hep-ph/0308251]; U. Chattopadhyay, A. Corsetti and P. Nath, _WMAP constraints, SUSY dark matter and implications for the direct detection of SUSY_, Phys. Rev. D **68**, 035005 (2003) [arXiv:hep-ph/0303201].
* [54] J. L. Feng, K. T. Matchev, and T. Moroi, _Multi-TeV scalars are natural in minimal supergravity_, Phys. Rev. Lett. **84** (2000) 2322-2325, [arXiv:hep-ph/9908309].
* [55] J. L. Feng, K. T. Matchev, and T. Moroi, _Focus points and naturalness in supersymmetry_, Phys. Rev. **D61** (2000) 075005, [arXiv:hep-ph/9909334].
* [56] J. L. Feng, K. T. Matchev, and F. Wilczek, _Neutralino dark matter in focus point supersymmetry_, Phys. Lett. **B482** (2000) 388-399, [arXiv:hep-ph/0004043];
* [57] R. Arnowitt _et al._, _Measuring the stau(1) - neutralino(1) mass difference in co-annihilation scenarios at the LHC_, arXiv:hep-ph/0608193.
* [58] S. S. Wilks, _The large-sample distribution of the likelihood ratio for testing composite hypotheses_, Ann. Math. Statist. **9** (1938) 60.
* [59] G. Corcella _et al_, _HERWIG 6: an event generator for hadron emission reactions with interfering gluons (including supersymmetric processes)_, JHEP **0101** (2001) 010 [arXiv:hep-ph/0011363]; _ibid._, _HERWIG 6.5 release note_, arXiv:hep-ph/0210213; S. Moretti, K. Odagiri, P. Richardson, M.H. Seymour and B.R. Webber, _Implementation of supersymmetric processes in the HERWIG event generator_, JHEP 0204 (2002) 028 [hep-ph/0204123].
* [60] B. C. Allanach, C. G. Lester, M. A. Parker and B. R. Webber, _Measuring sparticle masses in non-universal string inspired models at the LHC_, JHEP **0009** (2000) 004 [arXiv:hep-ph/0007009].
* [61] F. Gianotti _et al._, _Physics potential and experimental challenges of the LHC luminosity upgrade_, Eur. Phys. J. C **39** (2005) 293 [arXiv:hep-ph/0204087].
* [62] C. G. Lester, M. A. Parker and M. J. White, _Determining SUSY model parameters and masses at the LHC using cross-sections, kinematic edges and other observables_, JHEP **0601** (2006) 080 [arXiv:hep-ph/0508143].
* [63] E. A. Baltz, M. Battaglia, M. E. Peskin and T. Wizansky, _Determination of dark matter properties at high-energy colliders_, Phys. Rev. D **74** (2006) 103521 [arXiv:hep-ph/0602187].
* [64] M. Goldstein, proceedings of Advanced Statistical Techniques in Particle Physics, Durham (2002), URL http://www.ippp.dur.ac.uk/Workshops/02/statistics/proceedings.shtml
* [65] B. C. Allanach, D. Grellscheid, F. Quevedo, _Genetic Algorithms and Experimental Discrimination of SUSY Models_, JHEP **07** (2004) 069 [arXiv:hep-ph/0406277]; J. L. Bourjaily, G. L. Kane, P. Kumar and T. T. Wang, _Outside the mSUGRA box_, arXiv:hep-ph/0504170; N. Arkani-Hamed, G. L. Kane, J. Thaler and L. T. Wang, _Supersymmetry and the LHC inverse problem_, JHEP **0608** (2006) 070 [arXiv:hep-ph/0512190]; G. L. Kane, P. Kumar and J. Shao, _LHC string phenomenology_, arXiv:hep-ph/0610038; J. Conlon, S. Kom, K. Suraliz, B. C. Allanach and F. Quevedo, _Supersymmetric particle spectra and LHC collider phenomenology for large volume string models_, arXiv:0704.3403 [hep-ph].

Abstract: Previous LHC forecasts for the constrained minimal supersymmetric standard model (CMSSM), based on current astrophysical and laboratory measurements, have used priors that are flat in the parameter \(\tan\beta\), while being constrained to postdict the central experimental value of \(M_{Z}\). We construct a different, new and more natural prior with a measure in \(\mu\) and \(B\) (the more fundamental MSSM parameters from which \(\tan\beta\) and \(M_{Z}\) are actually derived). We find that as a consequence this choice leads to a well defined fine-tuning measure in the parameter space. We investigate the effect of such on global CMSSM fits to indirect constraints, providing posterior probability distributions for Large Hadron Collider (LHC) sparticle production cross sections. The change in priors has a significant effect, strongly suppressing the pseudoscalar Higgs boson dark matter annihilation region, and diminishing the probable values of sparticle masses. We also show how to interpret fit information from a Markov Chain Monte Carlo in a frequentist fashion; namely by using the profile likelihood. Bayesian and frequentist interpretations of CMSSM fits are compared and contrasted.
Keywords: Supersymmetry Effective Theories, Cosmology of Theories beyond the Standard Model, Dark Matter
Footnote †: preprint: hep-ph/0903115
# All optical sensor for automated magnetometry based on coherent population trapping
J. Belfi, G. Bevilacqua, V. Biancalana, Y. Dancheva, and L. Moi

CNISM - Unità di Siena, Dipartimento di Fisica, Università di Siena, via Roma 56, 53100 Siena, Italy
## I Introduction
Atomic magnetometers, developed since the 1960s [1], today have a central role in the field of high-sensitivity magnetometry, with important applications in geophysics, medicine, biology, testing of materials and of fundamental physics symmetries. The technical advances of recent years are making optical magnetometers more suitable than SQUIDs for most of these applications, because of their comparable - in some cases even better [2] - sensitivity and the possibility to operate at room temperature with no need of cryogenic cooling.
Recently, a direct measurement of a geophysical-scale field, with the impressive sensitivity of 60 fT/\(\sqrt{\rm Hz}\), was demonstrated in a Non-linear Magneto-Optical Rotation experiment [3].
Detection of weak biological-scale magnetic fields (magnetocardiography) has been demonstrated using optically pumped magnetometers [4; 5] based on the 'double optical-RF excitation'. Direct RF magnetic excitation, however, does not permit the realization of fully optical sensors, as RF coils have to be placed next to the vapor cell.
All-optical sensors, based instead on the Coherent Population Trapping (CPT) effect, can be built. The CPT effect occurs when two long-lived ground states are coupled to a common excited state by two coherent laser fields. When the frequency difference of the laser fields exactly matches the frequency separation of the two uncoupled ground levels, the population is trapped in the so-called 'dark' state. This is a quantum-mechanical superposition of both ground states that is not coupled to the laser fields. The accumulation of population in such a coherent state gives rise to a resonant transparency [6; 7]. CPT resonances have linewidths much narrower than the natural linewidth of the corresponding optical transitions. This makes them particularly suitable for precision spectroscopy applications in many other fields besides magnetometry [8; 9], such as metrology [10; 11], detection of gravitational waves [12], laser physics [13; 14] and laser cooling [15].
In this work we present the characteristics of an all-optical magnetometer, working as a compact, automated device able to measure magnetic fields over a wide range of amplitudes and time scales, such as the daily Earth magnetic field variations or weak signals varying on millisecond time scales superposed on the Earth field. The sensor works in the magnetically polluted environmental conditions typical of a scientific laboratory. Neither additional RF magnetic field excitation nor particular magnetic shielding is necessary.
The principle of operation of our CPT magnetometer can be described as follows. Couples of laser fields, such that their frequency separation matches the energy splitting between the Zeeman sub-levels of a given hyperfine ground state of Cs, are produced by frequency modulation of the diode laser junction current in the 10 kHz - 10 MHz range. The obtained frequency-modulated radiation is characterized by a very high modulation index (\(\sim 10^{3}\)), and the spectral structure of the laser can be seen as a comb of coherent modes with an overall width of the order of the Doppler-broadened optical transition. With such broad-band excitation almost all the atomic velocity classes interact with resonant light, and furthermore the power absorbed by each single class is very low. The first feature allows us to increase the resonance contrast, the second one reduces the power broadening. It is worth noting that this solution allows us to work without complex laser phase-locking systems and, furthermore, without the expensive and bulky, highly stable microwave oscillators that are instead necessary in the case of CPT generation on Zeeman sub-levels of different hyperfine ground states [16].
## II Experimental setup and measurement procedure
A sketch of the experimental setup is presented in Fig. 1. The sensor measures the resonant absorption of laser light by Cs vapor contained in a cylindrical cell, 2.5 cm in diameter and 1 cm in length. The cell is kept at room temperature and contains 2 Torr of N\({}_{2}\) as a buffer gas. The N\({}_{2}\) minimizes the multiple, incoherent re-absorption thanks to the quenching of the resonant fluorescence. The laser radiation is tuned to excite the hyperfine transitions between the ground state with total angular momentum \(F_{g}=3\) and the excited \({}^{2}P_{3/2}\) states with \(F_{e}=2,3,4\).
The laser is a single-mode edge-emitting pigtail laser (\\(\\lambda\\)= 852 nm) with 100 mW of laser power and an intrinsic line-width of less than 5 MHz. Optical feedback is avoided by means of 40 dB optical isolator and the laser light is coupled into a 10 m long single-mode polarization-maintaining fiber. The laser head containing the laser chip, the optical isolator and the fiber collimator is closed in a butterfly housing of only 40 cm\\({}^{3}\\). The beam coming from the fiber is collimated and its polarization is transformed from linear to circular using a quarter-wave plate. In order to increase the light-atom interaction time the beam waist is expanded to 4.3 mm and the laser intensity used for atom excitation is reduced to 36 \\(\\mu\\)W/cm\\({}^{2}\\) using a set of neutral filters. The transmitted laser light is detected and analyzed.
A CPT resonance is created when two different Zeeman sublevels are coherently coupled to a third common Zeeman sublevel. The three-level system involved is called a \(\Lambda\) system when the third common coupled level is the highest in energy, and a \(V\) system when it is the lowest. In our case a number of \(\Lambda\) and \(V\) systems are created with circularly polarized light obeying the selection rules \(\Delta m_{F}=0,+1\) and \(\Delta m_{F}=0,-1\). Chains of \(\Lambda\) systems are formed on the \(F_{g}=3\to F_{e}=2,3\) group of transitions, while chains of V systems are formed on \(F_{g}=3\to F_{e}=4\).
The magnetic field under measurement breaks the Zeeman degeneracy and makes adjacent (\(|\Delta m_{F}|=1\)) sublevels separated by \(\mu_{0}g_{F}B=\hbar\omega_{L}\), where \(\hbar\) is the reduced Planck constant, \(\mu_{0}\) is the Bohr magneton, \(g_{F}\) is the Landé factor of the considered ground state, \(\omega_{L}\) is the Larmor frequency and \(B\) is the magnetic field strength. Fig. 2 shows, as an example, the scheme of \(\Lambda\)-system chain formation for the \(F_{g}=3\to F_{e}=2\) system of transitions.
The RF signal used to produce couples of suitably separated frequency components in the spectral profile of the laser emission is swept in a small interval around the resonant value \\(\\omega_{L}\\). A set of data acquired along the frequency sweep allows for visualizing the CPT resonance profile.
As the magnetic field strengths of interest range from a few \(\mu\)T to a few mT, the modulation frequency \(\Omega_{RF}\) ranges from a few tens of kHz to a few MHz. \(\Omega_{RF}\) is generated by a waveform generator (Agilent 33250A 80 MHz function generator) that is coupled to the laser by means of a passive circuit specially designed to make the response of the laser rather flat in the frequency range of interest. The circuit uses capacitors to AC-couple the RF to the laser junction, and resistors of rather large value (several hundred ohms) to convert to current the voltage at the output of the generator. A pair of inductive elements, oriented so that possible spurious magnetic pick-up is canceled, is connected in series with the DC supply in order to prevent the modulating signal from being significantly counteracted by the laser current driver. Overall, this coupling allows for achieving very large modulation indexes. The envelope of the unresolved frequency components of the resulting comb of laser frequencies can be observed using a Fabry-Perot spectrometer (see Fig. 3).
In order to increase the S/N of the detected transmitted light, a Phase Sensitive Detection (PSD) is performed.
Figure 1: Experimental setup. OF: optical fiber, BC: beam collimator, IBS: intensity beam splitter, BE: beam expander, NF: neutral filters, PD: photo diode.
Figure 2: Representation of \\(\\Lambda\\)-system chains for the \\(F_{g}=3\\to F_{e}=2\\) system. The quantization axis is parallel to the magnetic field and perpendicular to the laser beam. Circular polarization is decomposed in two in-quadrature linearly polarized waves, one of which is in turn decomposed in two counter rotating fields circularly polarized around the quantization axis. The complete scheme would involve also the hyperfine components \\(F_{g}=3\\to F_{e}=3,4\\). In the table are reported the gyromagnetic factors \\(\\gamma\\) for all the hyperfine levels of interest.
\(\Omega_{RF}\) is thus frequency-modulated at \(\Omega_{PSD}\), and the atomic response in phase with a reference signal at \(\Omega_{PSD}\) is extracted. The laser electric field can be expressed as:
\[\vec{E}=E_{0}\,(\hat{x}+i\hat{y})\,\exp\{i[\omega_{0}t+\varphi(t)+M_{RF}\cos(\Omega_{RF}t+M_{PSD}\cos(\Omega_{PSD}t))]\}+c.c. \tag{1}\]
where \(\hat{x}\) and \(\hat{y}\) are the usual unit vectors perpendicular to the laser wave-vector, \(\omega_{0}\) is the optical frequency, \(\varphi(t)\) accounts for the laser linewidth, invoking for instance the celebrated phase-diffusion model, \(M_{RF}\) is the modulation index of the RF modulation and \(\Omega_{PSD}\) is the PSD modulation frequency with its modulation index \(M_{PSD}\). A typical FM spectrum of the CPT profile is presented in Fig. 4, where \(\Omega_{PSD}=20\) kHz and \(M_{PSD}=1\). The central feature is used for the magnetic field determination.
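As a qualitative illustration of the comb structure behind Eq. (1), the following Python sketch computes the power spectrum of the doubly frequency-modulated field (complex envelope, optical carrier omitted). The modulation index is deliberately scaled down from the experimental value (\(M_{RF}\sim 2800\)) to keep the record easy to sample; all numbers here are illustrative only.

```python
import numpy as np

# Scaled-down illustration of the comb produced by the doubly FM field.
M_RF, M_PSD = 20.0, 1.0          # modulation indexes (reduced for sampling)
f_RF, f_PSD = 105e3, 20e3        # Hz, as in the text
fs = 50e6                        # sampling rate, Hz
t = np.arange(0, 2e-3, 1/fs)     # 2 ms record

# Baseband complex envelope of Eq. (1)
phase = M_RF*np.cos(2*np.pi*f_RF*t + M_PSD*np.cos(2*np.pi*f_PSD*t))
E = np.exp(1j*phase)

spectrum = np.fft.fftshift(np.abs(np.fft.fft(E)))**2
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, 1/fs))
# The spectrum shows lines spaced by f_RF over a span of about 2*M_RF*f_RF,
# i.e. the 'bridge'-shaped comb envelope of Fig. 3.
```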
An improvement in the noise rejection and hence in the sensitivity can be achieved using a differential sensor. Such arrangement is very appropriate when a registration of very weak magnetic fields is desired [16]. In this case, two identical sensors are assembled in parallel, at a distance of 11 cm. The light is coupled to the second arm of the sensor using an intensity beam splitter (see Fig.1). When evaluating the efficiency of the differential setup in rejecting the noise, we distinguish three kinds of noise contribution. The first is due to the detection and amplification stages. This contribution, which is generally smaller than the others is increased (nominally by a factor \\(\\sqrt{2}\\)). The second kind is due to intensity fluctuations of the laser emission (rather small in our case) and to frequency noise of its optical frequency, in turn discriminated by the Doppler profile. This kind of noise is effectively rejected. As a third kind one can consider the noise due to magnetic field fluctuations generated by magnetic sources different from the one under examination (e.g. consider the case of Earth magnetic field fluctuation while measuring weak biological fields generated by close sources). In this case, provided that both the Cs cells are in conditions of CPT resonance, the differential sensor responds to the (usually very small) gradients of the field generated by far-located sources (while their common mode field is canceled) and to the field generated by sources (if any) located very close to one of the two cells. In this sense depending on the application, the differential setup can be used either as a gradiometer or as detector of field variations produced by close sources placed in magnetically polluted environment.
The components of both arms of the differential sensor, _i.e._ the fiber collimators, quarter-wave plates, beam expanders, neutral filters, Cs-N\({}_{2}\) cells and the PDs (we use large-area, low-noise, non-magnetic photodiodes), including a reference Cs-vacuum cell, are assembled on a separate plate and can be placed away from the instrumentation. All materials used are highly non-magnetic, so that the sensor does not perturb the magnetic field to be measured. Under the conditions of our laboratory, an improvement of the signal-to-noise ratio (S/N) by a factor of 5 was obtained.
### CPT profile and noise
The magnetic field measurement operatively consists in the determination of the central frequency \(\nu_{0}\) of the resonance profile (\(2\pi\nu_{0}\) is the estimate of \(\omega_{L}\)). The registered CPT profile, read at the output of the lock-in, reproduces the first derivative of the CPT (reduced absorption) resonance. A typical CPT profile is presented in Fig. 5. The error bars are evaluated by the lock-in amplifier from the standard deviation of the output signal in steady conditions. Routinely, the noise measurement is done only once, after setting the lock-in operation parameters, because it is a time-consuming operation and cannot be performed at each step.
The central frequency and the linewidth of the resonance are estimated by means of a best-fit procedure. The fitting function is the first derivative of a Lorentzian profile with central frequency \(\nu_{0}\) and FWHM \(\Gamma\).
Figure 4: FM spectroscopy of the CPT resonance at a modulation frequency of 20 kHz with a deviation of 20 kHz.
Figure 3: Pigtail laser spectrum recorded using a confocal Fabry-Perot interferometer with FSR = 1.5 GHz. The modulation frequency is 105 kHz. The inferred modulation index is \(\sim\) 2800.
It allows for achieving an excellent agreement of the fit with the data detected in the experimental conditions described above, provided that the RF scan is performed in a narrow range around \(\nu_{0}\). Discrepancies appear for wider scan ranges. In this condition we found that adding a small secondary odd function with the same center as the principal Lorentzian derivative removes the discrepancy in the wings, making the fit result insensitive to the position of the resonance center with respect to the center of the scan. We used another (smaller and broader) Lorentzian derivative to take into account such slower decays in the wings.
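A minimal Python sketch of this fitting procedure is given below, using the sum of two Lorentzian derivatives sharing the same center. The scan range, amplitudes and noise level are invented for the example; in the real system the data come from the lock-in output and the uncertainties from its noise estimate.

```python
import numpy as np
from scipy.optimize import curve_fit

def dlorentz(nu, nu0, gamma, a):
    # First derivative of a Lorentzian with centre nu0 and FWHM gamma
    x = (nu - nu0) / (gamma / 2)
    return -a * x / (1 + x**2)**2

def model(nu, nu0, g1, a1, g2, a2):
    # principal narrow derivative plus a smaller, broader one for the wings
    return dlorentz(nu, nu0, g1, a1) + dlorentz(nu, nu0, g2, a2)

# Synthetic scan, expressed as Hz of offset from the scan centre
rng = np.random.default_rng(0)
detun = np.linspace(-2000.0, 2000.0, 400)
data = model(detun, 100.0, 700.0, 1.0, 3500.0, 0.15)
data += 0.02 * rng.normal(size=detun.size)

popt, pcov = curve_fit(model, detun, data, p0=[0.0, 800.0, 1.0, 4000.0, 0.1])
# popt[0] is the resonance centre within the scan; adding the scan-centre
# frequency and dividing by the gyromagnetic factor (~3.5 kHz/uT for the
# Cs F=3 state) converts it into a field value.
```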
The uncertainty on the resonant frequency evaluated by the fit procedure, converted in magnetic field units, is consistent with the following estimation:
\[\frac{\Delta B}{\sqrt{\Delta\nu}}=\frac{1}{\gamma}\times\frac{n\sqrt{\tau}}{\partial V/\partial\nu}, \tag{2}\]
where \(n\) is the noise level at the output of the lock-in, \(\tau\) is the measuring time (it depends on both the time constant and the slope of the output filter in the lock-in), \(\partial V/\partial\nu\) is the slope of the CPT curve and \(\gamma\) is the atomic gyromagnetic factor.
The noise level \(n\) can be estimated, as written above, by direct measurement from the lock-in amplifier, while the resonance central slope is simply related to the ratio between the amplitude and the FWHM of the signal. In typical working conditions, \(n\) amounts to about 3 times the photo-current shot-noise level and is mainly due to fast laser frequency fluctuations (the measured noise decreases to the expected shot-noise level when the optical frequency is tuned out of resonance), while the FWHM linewidth is about 700 Hz. The CPT parameters, the \(1/\gamma\) factor in our configuration and the noise pattern of our registration system, including the magnetic noise in the laboratory, set the ultimate sensitivity of the sensor to 260 pT/\(\sqrt{\rm Hz}\) according to Eq. 2. This sensitivity limit is far above the very ultimate theoretical limit of the sensitivity, which, considering only the contribution of the light-atom interaction volume and the Cs-Cs spin-exchange collisional rate, is 8 fT/\(\sqrt{\rm Hz}\) [17].
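For concreteness, Eq. (2) can be evaluated numerically. In the sketch below the slope is approximated by the amplitude-to-FWHM ratio and the lock-in noise level is a made-up figure chosen so that the result reproduces the 260 pT/\(\sqrt{\rm Hz}\) quoted above; only the linewidth and the gyromagnetic factor come from the text.

```python
import numpy as np

gamma = 3.5e9        # Hz/T, Cs F=3 ground state (~3.5 kHz/uT)
fwhm = 700.0         # Hz, fitted CPT linewidth
slope = 1.0 / fwhm   # V/Hz, unit resonance amplitude over the FWHM
n = 1.3e-3           # V*sqrt(s), illustrative lock-in noise level
tau = 1.0            # s, measuring time

dB = (1.0/gamma) * n*np.sqrt(tau)/slope   # T/sqrt(Hz), Eq. (2)
print(dB)   # ~2.6e-10 T/sqrt(Hz) = 260 pT/sqrt(Hz)
```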
In general, in order to push the sensitivity limit toward the theoretical one, one has to reduce the resonance linewidth \(\Gamma\), to increase the signal-to-noise ratio and to work in a highly shielded room.
The main CPT resonance broadening mechanism in our case is the limited light-atom interaction time, which contributes to the total linewidth an amount of the order of 600 Hz. The unshielded environmental conditions, in particular magnetic field gradients and AC magnetic fields, also contribute to the CPT line broadening. The maximum magnetic field gradient is estimated to be in the range of 80 nT/cm, leading, in the worst case, to a broadening contribution of the order of 280 Hz. AC magnetic fields contribute instead mainly with the 50 Hz component and its harmonics, with an overall intensity of 40 nT, determining a corresponding broadening of about 140 Hz. Smaller broadening contributions are furthermore given by the light shift and the non-linear Zeeman effect [18].
## III PC automated controls
The experimental procedure is totally controlled by a dedicated LabVIEW program. The program synchronously communicates with the lock-in amplifier and the RF waveform generator, so that the CPT response can be recorded in appropriate and reproducible conditions.
### Automated registration of the CPT profile and subsequent fit
The CPT profile is registered by querying the lock-in output after having set the RF and waited the settling time. Alternatively, as described below, the program performs the PSD numerically, using large data sets produced by a 16-bit ADC card. The program also contains routines devoted to an on-line analysis of the collected data. In particular, after each RF scan is completed, a minimum-\(\chi^{2}\) fit procedure is launched to determine the parameters of the CPT resonance profile.
When performing numerical PSD, in contrast with what is reported in Section II.1, the noise level is evaluated at each \(\Omega_{RF}\) step. In both cases, we obtain rather good values for the minimized \(\chi^{2}\)/DoF (degrees of freedom), demonstrating the reliability of the noise estimation and the suitability of the fitting procedure.
Figure 5: Typical CPT profile observed when scanning the modulation frequency around \(\omega_{\rm L}/2\pi\). The lock-in time constant is 30 ms with a 12 dB/oct output filter, which determines a detection bandwidth of 4.16 Hz. The linewidth resulting from the fit procedure is \(\Gamma\) = 700 Hz.
### Numerical PSD
The external lock-in amplifier (bulky and expensive) can be replaced by a compact ADC card. We successfully tested and used a system based on a commercial 16-bit, 50 kS/s card, USB interfaced to the PC and set to operate at 40 kS/s. The principle of operation was slightly changed with respect to that of the lock-in amplifier. Specifically, the RF is externally frequency-modulated at 20 kHz using a square-wave signal obtained by scaling by two the frequency of a clock signal generated by the ADC card. Consequently, the ADC data array \(y\) corresponds to high and low values of the RF, accumulated in the even (\(y_{2i}\)) and odd (\(y_{2i+1}\)) elements, respectively. The PSD signal and its uncertainty are then obtained by considering the \(N\)-size array of differences \(\delta_{i}=y_{2i+1}-y_{2i}\) and evaluating the average \(\langle\delta\rangle=\Sigma_{i}\delta_{i}/N\) and the standard deviation \([\Sigma_{i}(\delta_{i}-\langle\delta\rangle)^{2}/N]^{1/2}\) divided by \(\sqrt{N}\), respectively.
The number \(2N\) of acquired data points is chosen according to the desired integration time of the PSD system, and its upper limit is set by the size of the data buffer in the ADC card (64 kB / 2 bytes = 32768 readings, which corresponds to 0.8 s integration time). It is worth noting that the relatively large amount of data to be transferred makes the choice of USB 1.0 devices not very advantageous, because it introduces a relevant dead-time at each measurement.
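The numerical PSD described above amounts to a few lines of code. The sketch below assumes an even-length buffer whose even/odd samples correspond to the high/low RF values, as in the text; the synthetic demo data are illustrative.

```python
import numpy as np

def numerical_psd(y):
    """Numerical phase-sensitive detection on a buffer of 2N ADC readings:
    even samples taken at the high RF value, odd samples at the low one.
    Returns the demodulated signal and its statistical uncertainty."""
    delta = y[1::2] - y[0::2]                  # per-period differences
    n = delta.size
    return delta.mean(), delta.std() / np.sqrt(n)

# Demo on synthetic data: 32768 readings with a 1 mV modulation depth
rng = np.random.default_rng(0)
y = np.tile([0.0, 1.0e-3], 16384) + 1e-4*rng.normal(size=32768)
print(numerical_psd(y))    # approximately (1e-3, 1e-6)
```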
### Frequency stabilization on the Doppler profile
Multi-frequency diode-laser comb excitation is suitable for producing narrow and high-contrast CPT signals in free-running lasing conditions. The optical frequency stabilization by the laser current and temperature controllers provides the needed short-term stability. Such a passive stabilization method works well over time intervals of the order of 1 min, but over longer time scales slow drifts of both temperature and current make it unsuitable. For this reason a long-term active stabilization system must be employed, allowing for relatively rough (accurate within some MHz) but reliable re-adjustments of the optical frequency.
We adopted a simple method based on a commercial USB ADC-DAC card with 12-bit resolution, which periodically and automatically (e.g. once per minute) performs a scan over the whole Doppler profile, numerically determines the center and width of the absorption curve, and finally provides a DC signal which, sent to the modulation input of the laser current driver, establishes the detuning with respect to the maximum of the absorption profile, in terms of the measured linewidth. In spite of the low cost and simplicity, such a sub-system was demonstrated to be very effective and reliable for long-term compensation of the optical frequency drifts. Furthermore, it made the whole system comfortable to operate during optimization, as well as suitable for long-lasting applicative use.
Additionally, we note that such an approach, at the expense of periodically suspending the CPT measurement for a few seconds, does not need any additional laser modulation (which would have an effect on the CPT measurement) or external modulation elements (such as electro-optical modulators), which would make the setup much more complex and expensive.
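A possible implementation of the re-centering routine is sketched below. The Gaussian model, the scan variables and the helper name `recenter` are illustrative assumptions; in the real system the returned value is written, through the 12-bit DAC, to the modulation input of the current driver.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(c, c0, w, a, off):
    return a*np.exp(-((c - c0)/w)**2) + off

def recenter(currents, absorption, detuning_in_widths=-0.5):
    """Fit the Doppler absorption recorded along a laser-current scan and
    return the current that parks the laser at the requested detuning,
    expressed in units of the fitted 1/e half-width."""
    p0 = [currents[np.argmax(absorption)],
          0.2*(currents[-1] - currents[0]),
          np.ptp(absorption), absorption.min()]
    (c0, w, a, off), _ = curve_fit(gauss, currents, absorption, p0=p0)
    return c0 + detuning_in_widths*abs(w)

# Demo on a synthetic Doppler scan (arbitrary current units)
rng = np.random.default_rng(1)
c = np.linspace(0, 1, 200)
absn = gauss(c, 0.52, 0.12, 1.0, 0.05) + 0.01*rng.normal(size=c.size)
setpoint = recenter(c, absn)   # value to send to the current-driver DAC
```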
We have quantified the root-mean-square fluctuations \(\Delta\nu_{opt}^{rms}\) of the optical frequency, i.e. of the center of the broadband laser spectrum used for CPT creation, by observing the apparent fluctuation of the fitted Doppler maximum position when scanning the laser current over the same range of nominal values across the Doppler absorption. From the observed rms variation of the fitted maximum we get \(\Delta\nu_{opt}^{rms}\simeq 2\) MHz over time scales of the order of 1 s.
### Servo locking at the center of the CPT resonance
When a higher rate of magnetic field measurements is required, the time needed to perform an RF scan to determine the center of the CPT resonance can be avoided by performing single readings of the lock-in output. This fast operation uses the central, essentially linear, slope of the CPT resonance (the Lorentzian derivative profile) resulting from the fit to convert the lock-in output voltage into frequency units and hence into field units.
Initially, one complete \(\Omega_{RF}\) scan is accomplished, the fitting procedure runs, and, provided that the \(\chi^{2}\)/DoF is reasonably close to unity, the best-fit parameters are passed to the routine devoted to evaluating the field from single readings of the lock-in and to keeping \(\Omega_{RF}\) locked to the resonance center.
The fit procedure gives the values of both the slope \(dV/d\nu\) and the offset \(V_{0}\) at the center of resonance \(\nu_{0}\). When the single-reading procedure starts working, \(\Omega_{RF}\) is set at \(\nu_{0}\), and the lock-in output \(V\) is queried. The deviation \(\Delta V=V-V_{0}\) is used in a linear approximation to obtain a new estimate of the central frequency, \(\nu_{0}+\Delta V/(dV/d\nu)\), and hence of the field. At each step, \(\Omega_{RF}\) is updated to the new estimated \(\nu_{0}\) in order to keep the system working at the center of the CPT resonance, with the double aim of maintaining the linear estimation appropriate and preventing possible large drifts of the field from bringing the system out of the CPT resonance.
The lock-in time constant can be selected with different values for the scan and for the single-reading operations. Obviously, to obtain a comparable noise rejection in single-reading operation it is necessary to increase the time constant. The lock-in settling time, derived from the time constant and the lock-in output filter slope, is taken into account in the lock operation in order to update \(\Omega_{RF}\) at a rate \(R\) allowing the system to lock to the actual center of resonance with the maximum speed, but without risks of oscillations.
\(\Omega_{RF}\) is actually updated at the rate \(R^{\prime}\) of the readings (this value is limited by the RS232 communications and generally exceeds \(R\)); consequently, the \(\Omega_{RF}\) increment is scaled by a factor \(R/R^{\prime}\).
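The single-reading lock can be emulated numerically. In the sketch below the lock-in is modeled as a linear discriminant of slope \(dV/d\nu\) around the resonance center, the Larmor frequency performs a random walk, and the scaled update \(R/R^{\prime}\) appears as the `ratio` factor; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 3.5e9                 # Hz/T
slope = 1.0/700.0             # V/Hz, from the fit
# Drifting Larmor frequency around ~175 kHz (Earth-field scale)
nu_true = 175e3 + np.cumsum(rng.normal(0, 0.5, 2000))

nu_rf, V0, ratio = nu_true[0], 0.0, 0.3   # start on resonance; ratio = R/R'
fields = []
for nu in nu_true:
    V = V0 + slope*(nu - nu_rf) + rng.normal(0, 1e-4)  # lock-in reading
    nu_est = nu_rf + (V - V0)/slope        # linear estimate of the centre
    fields.append(nu_est/gamma)            # field estimate, tesla
    nu_rf += ratio*(nu_est - nu_rf)        # scaled update keeps the loop stable
```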
The evaluated magnetic field is immediately saved on disk, possibly simultaneously with other reference signals (for instance, in the perspective of applications in magneto-cardiography, ECG signals will be saved as a reference, in view of offline analysis to be performed over long-lasting acquisitions).
## IV Signal optimization, performances and limits
The dependence of the amplitude of the CPT resonance on the laser optical detuning is shown in Fig. 6a, where the CPT amplitude, the Doppler-broadened fluorescence line and the frequency positions of the hyperfine transitions are reported. The CPT resonance amplitude shows a two-lobe structure that reflects the bridge-shaped spectral intensity profile shown in Fig. 3, with the two maxima separated by nearly the same amount (about 600 MHz). The reason for the vanishing resonance at intermediate detunings lies in the opposite phase of the beating at \(\Omega_{RF}\) of the FM laser spectrum. Actually, each couple of adjacent sidebands in the laser spectrum, with amplitudes \(J_{m}\), \(J_{m+1}\), which produces a beating signal at \(\Omega_{RF}\), contains one odd and one even Bessel function, so that, due to the fact that \(J_{-m}(M)=(-1)^{m}J_{m}(M)\), the beat phase is opposite for the couples (m, m+1) and (-m, -(m+1)) respectively, i.e. for the couples belonging to the right and left wings of the bridge. From this point of view, depending on the optical detuning, a synchronous excitation [3; 19] of different velocity classes of atoms is performed, having a unique phase in the case where only one side of the bridge is in resonance with the Doppler profile, and two opposite phases when the bridge center coincides (approximately) with the Doppler center. A deeper analysis puts in evidence that the two lobes in Fig. 6a are different in amplitude, with higher values on the blue side. This is related to the dominance of the \(F_{g}=3\to F_{e}=3\) and \(F_{g}=3\to F_{e}=4\) transitions on the blue wing of the Doppler line. On the other hand, the CPT resonance vanishes when the laser is tuned in the vicinity of the \(F_{g}=3\to F_{e}=2\) transition, and the maxima of the two lobes are symmetric with respect to such detuning, consistently with the fact that this latter, closed transition gives the most relevant contribution to the CPT.
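The sign argument can be verified directly from the Bessel identity quoted above; for any modulation index, mirror sideband pairs beat with opposite sign:

```python
from scipy.special import jv

M = 20.0   # illustrative modulation index
for m in range(4):
    right = jv(m, M) * jv(m + 1, M)       # pair (m, m+1), right wing
    left = jv(-m, M) * jv(-(m + 1), M)    # mirror pair (-m, -(m+1)), left wing
    print(m, right, left)                 # left == -right for every m
```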
Besides the evident variation of the resonance amplitude discussed above, changing the optical detuning also determines a variation both in the resonance linewidth (Fig. 6b) and in the resonance center frequency. The linewidth dependence on the optical frequency mainly affects the sensitivity of the instrument, according to Eq. 2. The shift of the CPT resonance center versus optical detuning, also called light shift or AC Stark effect, is due to the finite dephasing rate among the ground states involved in the CPT preparation (see [20; 21]). This effect represents an essential systematic error in the determination of the CPT resonance center and thus affects both the accuracy and the sensitivity of the magnetometer. The CPT center frequency shift rate versus optical detuning is presented in Fig. 7 for a laser intensity of 36 \(\mu\)W/cm\({}^{2}\).
The optimal optical detuning, in view of the best magnetometer operation, is around 400 MHz red detuning, where the light-shift rate passes through a minimum and the CPT resonance has a narrower linewidth.
Figure 7: CPT resonance center shift rate as a function of optical detuning for a laser intensity of 36 \(\mu\)W/cm\({}^{2}\). The value given at each point is obtained by averaging the difference between the CPT resonance centers measured 30 MHz above and 30 MHz below the corresponding value of the detuning from the maximum absorption. Error bars represent the standard deviation of each data set, consisting of about 150 measurements.
Figure 6: a) CPT resonance amplitude versus optical detuning (zero frequency corresponds to the maximum of the Doppler profile). The frequency position of the three hyperfine transitions are marked with respect to the calculated Doppler profile. b) CPT HWHM versus optical detuning.
### Magnetic field monitoring
The magnetometer performance was checked by registering the Earth magnetic field variation in time. A record of a few hours of continuous magnetic field registration with our magnetometer [22] is shown in Fig. 8, together with the corresponding data of the L'Aquila geomagnetic station [23].
Both procedures, i.e. fitting the CPT resonance to determine its center and using the fast acquisition, were considered and investigated. In the first one, the whole CPT profile is registered and fitted as described above. In this case the measuring time, limited by the time necessary for the CPT profile registration together with the subsequent data analysis, is of the order of 8 s. Such a procedure does not make it possible to register fast magnetic field variations and in this respect can find application in geophysics, archaeology, material science, etc.
A trace of the Earth magnetic field variation obtained in fast operation is shown in Fig.9. In this case the acquisition rate was increased by a factor of 40 (210 ms per reading).
The magnetometer sensitivity to weak magnetic field variations superimposed on the Earth magnetic field was determined when working in the differential configuration. For this purpose, a calibrated variable magnetic field was applied using a multi-turn coil placed half a meter away from the sensor. The calibration of the magnetic field strengths produced by the coil on the two arms of the sensor was done using the magnetometer itself. The magnetometer response to a slow and weak variation of the magnetic field in time is presented in Fig. 10. It can be seen that variations in the magnetic field difference of the order of 300 pT\({}_{\textrm{p-p}}\) are well resolved. The inferred magnetometer sensitivity in the differential configuration is 45 pT/\(\sqrt{\textrm{Hz}}\).
## V Conclusions
We have built an all-optical magnetometric sensor supplied with a PC-automated control of the experimental parameters and an absolute magnetic field measurement data acquisition system.
CPT resonance creation by kHz-range frequency modulation of a free-running diode laser in totally unshielded environmental conditions is demonstrated, and an accurate characterization of the optimal experimental parameters, relative to the generation of the broad-band frequency-comb spectrum and to the detection strategy, is also given.
Figure 8: Long lasting monitor of the Earth magnetic field, comparison between two different independent measurements. Upper trace: acquisition of the fluctuations of the Earth magnetic field modulus measured by the 3-axes flux-gate magnetometer in the Geophysical Institute of L’Aquila. Sampling rate is one point per minute. Lower trace: acquisition of the modulus of the Earth magnetic field in the Physics Department of the University of Siena. In this case the sampling rate is about 1 point each 8 sec. Both the direction and the strength of the magnetic field are strongly influenced by the presence of ferromagnetic objects in the laboratory and in the structure of the building.
Figure 10: The magnetometer response to a 300 pT\\({}_{p-p}\\), 0.8 Hz square-wave magnetic signal registered in differential configuration with a band-width determined by the lock-in time constant of 3 msec, 12 dB/oct output filter and 10 averages.
Figure 9: Magnetic field variation registration in single reading operation. In the inset is sketched the zoom over 1 min acquisition.
We presented, furthermore, a detailed analysis of the dependence of the CPT resonance amplitude and width on the optical frequency tuning, thus determining the optimal detuning from the central frequency of the single-photon absorption spectral profile. The magnetometer performs long-term continuous monitoring of the magnetic field in the Earth-field range, providing a very sensitive tool for the registration of small magnetic field variations. We plan to routinely and systematically publish such data almost in real time (preliminary sets are available in Ref. [22]), with the aim of making our system useful for remote Earth magnetic field continuous observations and possible comparisons with measurements performed elsewhere.
The best sensitivity, inside a totally unshielded environment, reached in the differential balanced configuration is 45 pT/\\(\\sqrt{\\mathrm{Hz}}\\).
###### Acknowledgements.
We thank S. Cartaleva and A. Vicino for useful discussions and A. Barbini for the technical support. This work was supported by the Monte dei Paschi di Siena Foundation.
## References
* (1) A. L. Bloom, \"Principles of operation of the rubidium vapor magnetometer\", Appl. Opt. **1**, 61-68 (1962).
* (2) I. K. Kominis, T. W. Kornack, J. C. Allred, M. V. Romalis, "A subfemtotesla multichannel atomic magnetometer", Nature **422**, 596-599 (2003).
* (3) V. Acosta, M. P. Ledbetter, S. M. Rochester, D. Budker, D. F. Kimball, D. C. Hovde, W. Gawlik, S. Pustelny, J. Zachorowski, \"Nonlinear magneto-optical rotation with frequency-modulated light in the geophysical field range\", PRA **73**, 053404 (2006).
* (4) G. Bison, R. Wynands, and A. Weis, \"A laser-pumped magnetometer for the mapping of human cardio-magnetic fields\", Appl.Phys. B **76** (3), 325-328 (2003).
* (5) G. Bison, R. Wynands, and A. Weis, "Dynamical mapping of the human cardiomagnetic field with a room-temperature, laser-optical sensor", Opt. Express **11**, 904-909 (2003).
* (6) G. Alzetta, A. Gozzini, L. Moi, and G. Orriols, \"An experimental method for the observation of r.f. transitions and laser beat resonances in oriented Na vapors\", Nuovo Cimento **B 36**, 5-20 (1976).
* (7) E. Arimondo and G. Orriols, "Nonabsorbing atomic coherences by coherent two-photon transitions in a three-level optical pumping", Lett. Nuovo Cimento **17**, 333-338 (1976).
* (8) M. Scully and M. Fleischhauer, \"High-sensitivity magnetometer based on index-enhanced media\", PRL **69**, 1360-1363 (1992).
* (9) M. Fleischhauer and M. Scully, \"Quantum sensitivity limits of an optical magnetometer based on atomic phase coherence\", PRA **49**, 1973-1986 (1994).
* (10) J. E. Thomas, P. R. Hemmer, S. Ezekiel, C. C. Leiby Jr., R. H. Picard, C. R. Willis, \"Observation of Ramsey Fringes Using a Stimulated, Resonance Raman Transition in a Sodium Atomic Beam\", PRL **48**, 867-870 (1982).
* (11) P. Hemmer, M. Shahriar, H. Lamela-Rivera, S. Smith, B. Bernacki and S. Ezekiel, "Semiconductor laser excitation of Ramsey fringes by using a Raman transition in a cesium atomic beam", JOSA B **10**, 1326-1329 (1993).
* (12) C. Caves, \"Quantum-mechanical noise in an interferometer\", PRD **23**, 1693-1708 (1981).
* (13) S. Harris, \"Lasers without inversion: Interference of lifetime-broadened resonances\", PRL **62**, 1033-1036 (1989).
* (14) O. Kocharovskaya, \"Amplification and lasing without inversion\", Phys. Rep. **219**, 175-190 (1992).
* (15) C. Cohen-Tannoudji and W. Phillips, "New mechanisms for laser cooling", Physics Today **43**, 33-40 (1990).
* (16) C. Affolderbach, M. Stähler, S. Knappe, R. Wynands, "An all-optical, high-sensitivity magnetic gradiometer", Appl. Phys. B **75**, 605-612 (2002).
* (17) J. C. Allred, R. N. Lyman, T. W. Kornack and M. V. Romalis, "High-Sensitivity Atomic Magnetometer Unaffected by Spin-Exchange Relaxation", PRL **89**, 130801 (2002).
* (18) Ch. Andreeva, G. Bevilacqua, V. Biancalana, S. Cartaleva, Y. Dancheva, T. Karaulanov, C.Marinelli, E. Mariotti, L. Moi, \"Two-color coherent population trapping in a single Cs hyperfine transition, with application in magnetometry\", Appl.Phys. B **76**, 667-675 (2003).
* (19) D. Budker, W. Gawlik, D. F. Kimball, M. Rochester, V. V. Yashchuk, A. Weis, "Resonant nonlinear magneto-optical effects in atoms", Rev. Mod. Phys. **74**, 1153-1201 (2002).
* (20) E. Arimondo, \"Coherent population trapping in laser spectroscopy\", Prog. Opt. **35**, 257-354 (1996).
* (21) C. Cohen-Tannoudji, J. Dupont-Roc, G. Grynberg, _Atom-Photon Interactions_, (Wiley, New York, 1992).
* (22) http://magnetometer.fisica.unisi.it/lab
* (23) http://www.ingv.it/geomag/laquila.htm

Abstract: An automated magnetometer suitable for long-lasting measurements under stable and controllable experimental conditions has been implemented. The device is based on Coherent Population Trapping (CPT) produced by a multi-frequency excitation. The CPT resonance is observed when a frequency comb, generated by diode laser current modulation, excites Cs atoms confined in a \(\pi/4\times(2.5)^{2}\times 1\) cm\({}^{3}\), 2 Torr N\({}_{2}\) buffered cell. A fully optical sensor is connected through an optical fiber to the laser head, allowing for truly remote sensing and minimization of the field perturbation. A detailed analysis of the CPT resonance parameters as a function of the optical detuning has been made in order to get high-sensitivity measurements. The magnetic field monitoring performance and the best sensitivity obtained in a balanced differential configuration of the sensor are presented. OCIS 120.4640, 020.1670, 300.6380.
This work has been submitted to JOSA B for publication [JOSA B (7) 2007].
# CDMA Technology for Intelligent Transportation Systems
Rabindranath Bera 1, Jitendranath Bera2, Sanjib Sil 3,
Dipak Mondal 1, Sourav Dhar 1 & Debdatta Kandar 4
1Sikkim Manipal Institute of Technology, Sikkim Manipal University,
Majitar, Rangpo, East Sikkim 737132, India
Phone: 03592-246220 ext 280; Fax: 03592-246112
E-mail : [email protected]; [email protected]
2Department of Applied Physics, University of Calcutta
92 Acharya Prafulla Chandra Road, Kolkata 700 009, India
3Institute of Radiophysics & Electronics, University of Calcutta
92 Acharya Prafulla Chandra Road, Kolkata 700 009, India.
4Department of Electronics & Telecommunication Engineering
Jadavpur University, Kolkata 700 032, India
## Introduction
On super highways, speed limits are generally not imposed, and cars move at their highest possible speeds. As a result, severe accidents and deaths often occur. A CDMA radar based collision avoidance system, to be fitted in the cars, can therefore be thought of. This paper will highlight the detailed development of such a radar for the collision avoidance of cars.
CDMA technology, in its several versions, is also popular for communication. It can also be exploited for a wide range of applications including range measurement, material penetration and low probability of interception. DS-CDMA, CDMA2000, MC-CDMA and MIMO CDMA [2],[3] are different versions of the same technology. The heart of CDMA technology is spread spectrum technology using PN sequence coding. CDMA based digital radar technology offers several advantages over conventional radars, so that it can be used successfully in ITS applications. Additionally, the same technology can be explored to meet the communication needs of ITS applications [1].
## The Intelligent Mobile Campus Network (IMCN) [4][5]
The above mentioned two applications of CDMA in ITS can be further expanded in an IMCN, which is modeled as shown in figure 1. There are 4 cells, namely cell1, cell2, cell3 and cell4, where each cell is defined as the geographical area, typically 100 meter across, over which a wireless communication is to be established between a mobile user and a fixed base station. cell1 and cell2 are two neighboring cells, whereas cell3 and cell4 are another two remote neighboring cells. The four base stations will be placed on the rooftop of each building. All four base stations have wireless connectivity with their respective mobile handsets using a carrier frequency near 5.8 GHz. Two neighboring base stations are connected by an MSC (Master Switching Center). So, to have total integrity among the four base stations, two MSCs, namely MSC1 and MSC2, are required. As shown in figure A, MSC1 connects cell1 and cell2 whereas MSC2 connects cell3 and cell4. Each MSC is physically separated by a distance of 200 meter or more and is linked with the wireless network using a 12 GHz microwave carrier. Thus the total system will provide full-duplex communication with a data rate of approximately 64 Mbit/s. The total IMCN is thus quite complex, requiring knowledge of several technologies for its design and successful implementation.
This paper will highlight only the CDMA technology and other relevant technologies used for the communication and radar applications in the car. The authors have been encouraged to exploit the latest digital communication and digital radar technology in their design and implementation.
### CDMA Technology in Car
A radio mounted on a car will normally face a lot of problems, such as:
1. Active interference, comprising both adjacent-channel and co-channel interference (ACI & CCI).
2. Passive interference coming from multipath.
### Multi path transmission
When the handset and base station are within line of sight, the primary propagation will usually be the line of sight, and secondary propagations due to reflections will be less significant [6]. Reflected propagations become more significant if the line of sight is obstructed. Figure 1 illustrates a simplified multipath propagation. Whenever there is more than one significant impinging wave (with different phases) on a mobile receive antenna, the receiver will be subject to varying signal levels as it moves around. This is caused by the constructive and destructive addition of the impinging waves due to their different phase offsets. This mechanism is called multipath fading.
### Simulation of Space Diversity
The idea of antenna diversity is that if receive antenna A is experiencing a low signal level due to fading, also called a deep fade, antenna B will probably not suffer from the same deep fade, provided the two antennas are displaced in position or in polarity. A Matlab based simulation considering slow fading was conducted in the laboratory, and the received signal strength variation is illustrated in figure 2.
The option to select the best antenna significantly improves performance in outdoor environments, but does not necessarily increase the maximum line-of-sight range of a product. Figure 3 illustrates the effect of selecting the best antenna. Antenna diversity is implemented by equipping the base station or handset (or both) with two antennas. Various selection schemes can be implemented, depending on the actual antenna setup. Preamble antenna diversity, also known as fast antenna diversity, has proven its use in fast frequency hopping systems. Preamble antenna diversity is implemented by comparing the RSSI value of each antenna in the beginning of each receive burst.
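A minimal numerical illustration of selection diversity, in the spirit of the Matlab simulation mentioned above, is the following Python sketch. Two independent Rayleigh-faded branches are generated (uncorrelated branches and an iid fading model are assumptions of the example), and the deep-fade probability is compared for a single antenna and for best-antenna selection.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Two Rayleigh-faded branches with independent multipath (illustrative model;
# sufficient antenna spacing makes the branches approximately uncorrelated)
h_a = (rng.normal(size=n) + 1j*rng.normal(size=n)) / np.sqrt(2)
h_b = (rng.normal(size=n) + 1j*rng.normal(size=n)) / np.sqrt(2)
rssi_a = 20*np.log10(np.abs(h_a))   # dB relative to the mean power
rssi_b = 20*np.log10(np.abs(h_b))
best = np.maximum(rssi_a, rssi_b)   # selection diversity: pick the best antenna

print("P(fade deeper than -10 dB):",
      np.mean(rssi_a < -10), "single antenna vs",
      np.mean(best < -10), "with selection")
```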
### Experiments on Frequency Diversity
A delay-spread multipath propagation study experiment was carried out over the sea near Sagardwip Island, the Bay of Bengal, West Bengal. A LOS link was set up over the saline sea water, and two carriers at 12 and 13 GHz were transmitted simultaneously and received at a distance of about 5 km. The experiment was successfully conducted, and the interesting results related to the justification of using FHSS technology will be highlighted.
Figure 1: This figure illustrates a simplified multipath propagation, that will cause fading.
Figure 3: Demonstration of how two antennas can be used to ensure adequate signal level to the receiver.
Figure 2: Typical Received Signal Strength Indication (RSSI) of a multipath faded signal.
The interesting observations are as follows:
a) A typical problem of fading with a fade depth of 30 dB is noticed at 12.5 GHz; it lasts for about half an hour depending on the sea water condition. The fading time varies from day to day. The LOS link data of the signal strength at Kakdwip, received at Sagardwip over the river with saline water near the Bay of Bengal, is shown in fig. 4.
b) With a deep interest to observe whether the above fading affects a neighboring frequency at the same time, we transmitted two radio frequencies, one at 12.5 GHz and another at 11.5 GHz, and three kinds of observations were noticed, as shown in Fig. 4.
1: A typical problem of fading with a maximum fade depth of 30 dB is noticed at both frequencies. It lasts for about half an hour depending on the sea water condition. The fading time varies from day to day.
2: Close observation of the results reveals that the signals at the 11.5 and 12.5 GHz radio frequencies are not degraded at the same time, but rather at different times.
3: The plots of the above observations are divided into 3 separate regions:
**Region I:** reception at both frequencies is stable and normal.
**Region II:** reception at 12.5 GHz is faded by approximately 30 dB while the 11.5 GHz reception remains steady.
**Region III:** reception at 11.5 GHz is faded while the 12.5 GHz reception remains steady.
The above three simulation and experimental facts can be referred to in choosing the proper technology for ITS applications. Diversity reception (both frequency and space diversity) may be the best choice to restrict the active and passive interference, which are very strong in ITS applications, to an acceptable level. Therefore, we incorporate FHSS (frequency hopping spread spectrum) technology as a frequency diversity method, and two antennas instead of a single one as a **space diversity** reception system, with a separation of 25 cm between them.
**Choice of frequency**
To avoid the end-user radio license problem, the best choice of radio carrier, both for communication and radar, is the ISM band frequencies.
Figure 4: Frequency Diversity experiment
Figure 5: Interference suppression capability of SS radio Vs. Frequency curve.
There are 3 bands of frequencies allotted as ISM bands, namely 900 MHz, 2.4 GHz and 5.8 GHz. The following Table 1 will dictate the choice of frequency.
The table is self-explanatory and justifies our choice of frequency for the ITS application to be at 5.8 GHz.
### Specifications of the Radio
Before launching any kind of radio application with the above specification (e.g. communication or radar), the two channel characteristics, _bandwidth_ and _power_, constitute the primary resources available to the designer [7]. Accordingly, in an experimental laboratory setup (consisting of the developed radio and several instruments like a pulse generator, distortion analyzer, spectrum analyzer, digital storage oscilloscope, etc.), the following results were established, as shown in figure 6.
The radio is best usable in the 350-600 microsecond PRT range.
### Radar Modes of Operation
A 13-bit code is transmitted from the car using an omnidirectional antenna, so that the signal can be echoed back from nearby cars towards the space diversity reception system using two antennas. The space diversity antenna helps in determining the nearest car and in rejecting reflections from distant cars. The 'start' bit preceding the actual code also helps in authenticating the received signal, i.e. only if the start bit is followed by the received code is the delay between the transmitted and received code measured. It is then translated into the distance of the nearest car.
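The delay measurement can be illustrated with a short simulation. The sketch below uses the standard 13-bit Barker sequence as a stand-in for the transmitted code (the paper does not specify the actual code) and recovers the range of a target from the correlation peak; the chip rate, range and noise level are illustrative, and the start-bit authentication step is omitted.

```python
import numpy as np

c = 3e8                          # speed of light, m/s
chip_rate = 10e6                 # chips/s (illustrative)
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)

# Simulated echo: code delayed by the round trip to a car 60 m away, plus noise
true_range = 60.0
delay = int(round(2*true_range/c * chip_rate))    # round-trip delay in chips
rx = np.zeros(256)
rx[delay:delay + 13] += 0.5*barker13
rx += 0.2*np.random.default_rng(2).normal(size=rx.size)

corr = np.correlate(rx, barker13, mode="valid")   # matched-filter output
est_delay = np.argmax(corr) / chip_rate           # seconds
print("estimated range:", c*est_delay/2, "m")     # ~60 m
```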
## Conclusion
The IMCN system is thus operational in its basic form for its two modes of operation, supporting several mobile users over a distance of 500 meter or more. Considerable R&D effort remains to be imparted for its commercialization.
## Further Extension
The above radio model has been successfully implemented at SMIT but is operational in an 'either-or' mode, i.e. the same radio is used either as a communication device or as a radar installed in the car. To make the system operational in simultaneous mode, we have started the 'Software Defined Radio' approach, which is very useful today for best speed and flexibility. The block diagram of the revised ITS system is shown in figure 7. It utilizes two radio front ends, namely Radio I and Radio II, serving the purposes of communication and radar respectively. At the backend, an FPGA based signal processor will be used, which will be finally controlled and monitored by a PC. The other major components are A/D and D/A converters, Flash and SDRAM, and 4-channel receivers.
Table 1: Choice of Frequency for ITS

| Freq. (MHz) | Available bandwidth \(\Delta f\) (MHz) | Wavelength \(\lambda\) (cm) | Effectiveness of space diversity | Remarks |
| --- | --- | --- | --- | --- |
| 900-930 | 30 | 33 | Not effective | \(\Delta f\) less, \(\lambda\) more |
| 2400-2480 | 80 | 12.5 | Effective | \(\Delta f\) more, \(\lambda\) less |
| 5760-5840 | 80 | **5.172** | **More effective** | \(\Delta f\) more, \(\lambda\) least |
Figure 6: Channel Characterization of the Radio
## Reference
* [1] James D. Taylor, "Ultra Wideband Radar Technology", _CRC Press_, 2001.
[2] \"On the System Design Aspects of Code Division Multiple Access (CDMA) Applied to Digital Cellular and Personal Communications Networks,\" A. Salmasi and K. Gilhousen, _Proc. 4\\({}^{\\text{st}}\\) IEEE VTC'91, 1991_.
[3] \"Code Division Multiple Access -A Modern Trends in Multimedia Mobile Communication\", D. Kandar, R. Bera,A. R. Sardar, S. Kandar, S. S. Singh and S. K. Sarkar, International Conference on Services Management ICSM2005 _held at IIMT, Gurgaon, Delhi during March 11-12,2005_.
[4] \"Wireless communication and networks\", W. Stallings, _Pearson Education Asia publication, pp 320-333,2002_
[5] \" Computer networking -a top down approach featuring the internet\", J.A. Kurose and K. W. Ross, _Pearson Education Asia publication, pp 480-487,2003_
[6] J. Korhonen, \"Introduction to 3G mobile Communications\", _Artech House,_2001
[7] \"Digital communication systems\", Simon Haykin, _John Wiley & Sons, 2004_.
Sanjib Sil: Born in 1965 at Kolkata, West Bengal. B. Tech from IETE (1989), M. Tech from BIT, Mesra (1991). Ph.D. registered at the Institute of Radiophysics & Electronics, the University of Calcutta (2002). Currently working as Asst. Professor, International Institute of Information Technology, Kolkata. Microwave/millimeter wave based broadband wireless mobile communication and remote sensing are his areas of specialisation.
Dipak Mondal: Born in 1976 at Baruipur, West Bengal. B. Tech and M. Tech from the Institute of Radiophysics & Electronics, the University of Calcutta, in the years 2002 and 2004 respectively. Currently working as Lecturer, Dept. of Electronics & Comm. Engg., Sikkim Manipal University. Microwave/millimeter wave based broadband wireless mobile communication and remote sensing are his areas of specialisation.
Sourav Dhar: Born in 1980 at Raiganj, West Bengal. B. E from Bangalore Institute of Technology, Visveswaraiah Technological University, in the year 2002, M. Tech from Sikkim Manipal Institute of Technology, Sikkim Manipal University, in the year 2005. Currently working as Lecturer, Dept. of Electrical & Electronics, SMIT. Broadband wireless mobile communication is his area of specialisation.
Debdatta Kandar: Born in 1977 at Deulia, West Bengal, B.Sc. ( Honours) from The University of Calcutta in the year 1997, M. Sc from Vidyasagar University in the year 2001. Currently working as Research Fellow in Jadavpur University.
Figure 7: The block diagram of the revised ITS system.

Abstract: Scientists and technologists involved in the development of radar and remote sensing systems all over the world are now trying to involve themselves in the saving of manpower, in the form of developing a new application of their ideas in the Intelligent Transport System (ITS). World statistics show that incorporating such a wireless radar system in cars would decrease road accidents worldwide by 8-10% yearly. The wireless technology has to be chosen properly, capable of tackling the severe interference present on the open road. A combined digital technology like spread spectrum along with diversity reception will help a lot in this regard. Accordingly, the choice is an FHSS based space diversity system which utilizes a carrier frequency around the 5.8 GHz ISM band, with an available bandwidth of 80 MHz and no license. For an efficient design, the radio channel on which the design is based is characterized. Of the two available modes, i.e. communication and radar modes, the radar mode provides the conditional measurement of the range of the nearest car after authentication of the received code, thus ensuring the reliability and accuracy of the measurement. To make the system operational in simultaneous mode, we have started the 'Software Defined Radio' approach for best speed and flexibility.
Key words: ISM, DS-CDMA, CDMA2000, MC-CDMA, MIMO CDMA, IMCN, FHSS, MSC, ACI, CCI
# Hierarchical Markovian Models for Hyperspectral Image Segmentation
## 1 Introduction
Hyperspectral image data can be represented either as a set of images \(x_{\omega}(\mathbf{r})\) or as a set of spectra \(x_{\mathbf{r}}(\omega)\), where \(\omega\in\Omega\) indexes the wavelength and \(\mathbf{r}\in\mathcal{R}\) a pixel position [1, 2, 3]. In both representations, the data are dependent in both the spatial position and the spectral wavelength variable. Classical methods of hyperspectral image analysis try either to classify the spectra \(x_{\mathbf{r}}(\omega)\) in \(K\) classes \(\{a_{k}(\omega),k=1,\cdots,K\}\) or to classify the images \(x_{\omega}(\mathbf{r})\) in \(K\) classes \(\{s_{k}(\mathbf{r}),k=1,\cdots,K\}\), using classical classification methods such as distance based methods (like \(K\)-means) or probabilistic methods using a mixture of Gaussians (MoG) modeling of the data. These methods thus neglect either the spatial structure of the spectra or the spectral nature of the pixels along the wavelength bands.
The dimensionality reduction problem in hyperspectral images can be written as:
\\[x_{\\mathbf{r}}(\\omega)=\\sum_{k=1}^{K}s_{k}(\\mathbf{r})\\;a_{k}(\\omega)+\\epsilon_{\\mathbf{r }}(\\omega), \\tag{1}\\]
where the \\(a_{k}(\\omega)\\) are the \\(K\\) spectral source components and \\(s_{k}(\\mathbf{r})\\) are their associated images.
This relation, when discretized, can be written as follows:
\\[\\mathbf{x}(\\mathbf{r})=\\mathbf{A}\\mathbf{s}(\\mathbf{r})+\\mathbf{\\epsilon}(\\mathbf{r}) \\tag{2}\\]
\(\mathbf{x}(\mathbf{r})=\{x_{i}(\mathbf{r}),i=1,\cdots,M\}\) is the set of \(M\) observed images in different bands \(\omega_{i}\), \(\mathbf{A}\) is the mixing matrix of dimensions \((M,K)\) whose columns are composed of the spectra \(a_{k}(\omega)\), \(\mathbf{s}(\mathbf{r})=\{s_{k}(\mathbf{r}),k=1,\cdots,K\}\) is the set of \(K\) unknown components (source images) and \(\mathbf{\epsilon}(\mathbf{r})=\{\epsilon_{i}(\mathbf{r}),i=1,\cdots,M\}\) represents the errors.
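A toy numerical instance of the model of equation (2), with invented dimensions and random nonnegative spectra and abundances, can be set up as follows:

```python
import numpy as np

# Toy instance of Eq. (2): M=4 spectral bands, K=2 materials on a small
# image grid (all values are illustrative)
rng = np.random.default_rng(3)
M, K, H, W = 4, 2, 16, 16
A = np.abs(rng.normal(size=(M, K)))          # columns = spectra a_k(omega)
S = np.abs(rng.normal(size=(K, H*W)))        # rows = abundance images s_k(r)
X = A @ S + 0.01*rng.normal(size=(M, H*W))   # observed images x(r), Eq. (2)
```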
The main objective in unsupervised classification of the spectra is to find both the spectra \(a_{k}(\omega)\) and their associated image components \(s_{k}(\mathbf{r})\). This problem, written as in equation (2), is recognized as Blind Source Separation (BSS) in the signal processing community, for which many general solutions, such as Principal Components Analysis (PCA) and Independent Components Analysis (ICA), have been proposed. However, these general-purpose methods do not account for the specificity of hyperspectral images.
Indeed, as we mentioned, neither the classical methods of spectra or image classification nor the PCA and ICA methods of BSS give satisfactory results for hyperspectral images. The reasons are that the first category of methods accounts either for spatial or for spectral properties but not for both simultaneously, and that the PCA and ICA methods do not account for the specificity of the mixing matrix and the sources.
In this paper, we propose to use this specificity of the hyperspectral images, consider the dimensionality reduction problem as the blind source separation (BSS) problem of equation (2), and use a Bayesian estimation framework with a hierarchical model for the sources with a common hidden classification variable which is modelled as a Potts-Markov field. The joint estimation of this hidden variable, the sources and the mixing matrix of the BSS problem gives a solution for all the three problems of dimensionality reduction, spectra classification and segmentation of hyperspectral images.
## 2 Proposed Model and Method
We propose to consider the equation (2) written in the following vector form:
\\[\\underline{\\mathbf{x}}=\\mathbf{A}\\underline{\\mathbf{s}}+\\mathbf{\\epsilon} \\tag{3}\\]
where we used \\(\\underline{\\mathbf{x}}=\\{\\mathbf{x}(\\mathbf{r}),\\mathbf{r}\\in\\mathcal{R}\\}\\), \\(\\underline{\\mathbf{s}}=\\{\\mathbf{s}(\\mathbf{r}),\\mathbf{r}\\in\\mathcal{R}\\}\\) and \\(\\underline{\\mathbf{\\epsilon}}=\\{\\mathbf{\\epsilon}(\\mathbf{r}),\\mathbf{r}\\in\\mathcal{R}\\}\\) and we are going to account for the specificity of the hyperspectral images through a probabilistic modeling of all the unknowns, starting by assuming that the errors \\(\\mathbf{\\epsilon}(\\mathbf{r})\\) are centered, white, Gaussian with covariance matrix \\(\\mathbf{\\Sigma}_{\\epsilon}=\\text{diag}\\left[\\sigma_{\\epsilon_{1}}^{2},..,\\sigma_{ \\epsilon_{M}}^{2}\\right]\\). This leads to
\\[p(\\underline{\\mathbf{x}}|\\underline{\\mathbf{s}},\\mathbf{A},\\mathbf{\\Sigma}_{\\epsilon})=\\prod_ {\\mathbf{r}}\\mathcal{N}(\\mathbf{A}\\mathbf{s}(\\mathbf{r}),\\mathbf{\\Sigma}_{\\epsilon}) \\tag{4}\\]
The next step is to model the sources. As we mentioned in the introduction, we want to impose to all these sources \\(\\mathbf{s}(\\mathbf{r})\\) to be piecewise homogeneous and share the same common segmentation, where the pixels in each region are considered to be homogeneous and associated to a particular spectrum representing the type of the material in that region. We also want that those spectra be classified in \\(K\\) distinct classes, thus all the pixels in regions associated with a particular spectrum share some common statistical parameters. This can be achieved through the introduction of a discrete valued hidden variable \\(z(\\mathbf{r})\\) representing the labels associated to each type of material and thus assuming the following:
\\[p(s_{j}(\\mathbf{r})|z(\\mathbf{r})=k))=\\mathcal{N}(m_{j_{k}},\\sigma_{j_{k}}^{2}),\\quad k =1,\\cdots,K \\tag{5}\\]
with the following Potts-Markov field model
\\[p(\\mathbf{z})\\propto\\exp\\left[\\beta\\sum_{\\mathbf{r}}\\sum_{\\mathbf{r}^{\\prime}\\in\\mathcal{ V}(\\mathbf{r})}\\delta(z(\\mathbf{r})-z(\\mathbf{r}^{\\prime}))\\right] \\tag{6}\\]
where \\(\\mathbf{z}=\\{z(\\mathbf{r}),\\mathbf{r}\\in\\mathcal{R}\\}\\) represents the common segmentation of the sources and the data. The parameter \\(\\beta\\) controls the mean size of those regions.
We may note that, assuming _a priori_ that the sources are mutually independent and that pixels in each class \(k\) are independent from those of class \(k^{\prime}\), we have
\\[p(\\underline{\\mathbf{s}}|\\mathbf{z})=\\sum_{k}\\sum_{\\mathbf{r}\\in\\mathcal{R}_{k}}\\sum_{j}p (s_{j}(\\mathbf{r})|z(\\mathbf{r})=k)) \\tag{7}\\]
where \\(\\mathcal{R}_{k}=\\{\\mathbf{r}:z(\\mathbf{r})=k\\}\\) and \\(\\mathcal{R}=\\cup_{k}\\mathcal{R}_{k}\\).
To ensure that each image \(s_{j}(\mathbf{r})\) is only non-zero in those regions associated with the \(k\)th spectrum, we impose \(K=n\) and \(m_{j_{k}}=0,\ \forall j\neq k\) and \(\sigma_{j_{k}}^{2}=0,\ \forall j\neq k\). We may then write
\\[p(\\underline{\\mathbf{s}}|\\mathbf{z})=\\sum_{\\mathbf{r}}p(\\mathbf{s}(\\mathbf{r})|z(\\mathbf{r})=k))=\\sum _{\\mathbf{r}}\\mathcal{N}(\\mathbf{m}_{k}(\\mathbf{r}),\\mathbf{\\Sigma}_{k}(\\mathbf{r})) \\tag{8}\\]
where \\(\\mathbf{m}_{k}(\\mathbf{r})\\) is a vector of size \\(n\\) with all elements equal to zero except the \\(k\\)-th element \\(k=z(\\mathbf{r})\\) and \\(\\mathbf{\\Sigma}_{k}(\\mathbf{r})\\) is a diagonal matrix of size \\(n\\times n\\) with all elements equal to zero except the \\(k\\)-th main diagonal element where \\(k=z(\\mathbf{r})\\).
Combining the observed data model (3) with the source models (5) and (6) of the previous section, we obtain the hierarchical model depicted in Fig. 1.
## 3 Bayesian Estimation Framework
Using the likelihood (4), the prior source model (8) and the prior Potts-Markov model (6), and also assigning appropriate prior probability laws \(p(\mathbf{A})\) and \(p(\underline{\mathbf{\theta}})\) to the hyperparameters \(\underline{\mathbf{\theta}}=\{\mathbf{\theta}_{\epsilon},\mathbf{\theta}_{s}\}\), where \(\mathbf{\theta}_{\epsilon}=\mathbf{R}_{\epsilon}\) and \(\mathbf{\theta}_{s}=\{(m_{j_{k}},\sigma_{j_{k}}^{2})\}\), we obtain an expression for the posterior law
\\[p(\\underline{\\mathbf{s}},\\mathbf{z},\\mathbf{A},\\underline{\\mathbf{\\theta}}|\\underline{\\mathbf{x} })\\propto p(\\underline{\\mathbf{x}}|\\underline{\\mathbf{s}},\\mathbf{A},\\mathbf{\\theta}_{ \\epsilon})\\,p(\\underline{\\mathbf{s}}|\\mathbf{z},\\mathbf{\\theta}_{s})\\,p(\\mathbf{z})\\,p(\\mathbf{A} )\\,p(\\underline{\\mathbf{\\theta}}) \\tag{9}\\]
In this paper, we used conjugate priors for all of them, i.e., Gaussian for the elements of \(\mathbf{A}\), Gaussian for the means \(m_{j_{k}}\) and inverse Gamma for the variances \(\sigma_{j_{k}}^{2}\) as well as for the noise variances \(\sigma_{\epsilon_{i}}^{2}\).
When given the expression of the posterior law, we can then use it to define an estimator such as Joint Maximum A Posteriori (JMAP) or the Posterior Means (PM) for all the unknowns. The first needs optimization algorithms and the second integration methods. Both are computationally demanding. Alternate optimization is generally used for the first while the MCMC techniques are used for the second.
In this work, we propose to separate the unknowns into two sets \((\underline{\mathbf{s}},\mathbf{z})\) and \((\mathbf{A},\underline{\mathbf{\theta}})\) and then use the following iterative algorithm:
* Estimate \\((\\underline{\\mathbf{s}},\\mathbf{z})\\) using \\(p(\\underline{\\mathbf{s}},\\mathbf{z}|\\widehat{\\mathbf{A}},\\widehat{\\underline{\\mathbf{ \\theta}}},\\underline{\\mathbf{x}})\\) by \\(\\widehat{\\underline{\\mathbf{s}}}\\sim p(\\underline{\\mathbf{s}}|\\widehat{\\mathbf{z}},\\widehat {\\mathbf{A}},\\widehat{\\underline{\\mathbf{\\theta}}},\\underline{\\mathbf{x}})\\) and \\(\\widehat{\\mathbf{z}}\\sim p(\\underline{\\mathbf{z}}|\\widehat{\\mathbf{A}},\\widehat{\\underline{ \\mathbf{\\theta}}},\\underline{\\mathbf{x}})\\)
* Estimate \\((\\mathbf{A},\\underline{\\mathbf{\\theta}})\\) using \\(p(\\mathbf{A},\\underline{\\mathbf{\\theta}}|\\widehat{\\underline{\\mathbf{s}}},\\widehat{\\underline {\\mathbf{x}}},\\underline{\\mathbf{x}})\\) by \\(\\widehat{\\mathbf{A}}\\sim p(\\underline{\\mathbf{A}}|\\widehat{\\underline{\\mathbf{s}}}, \\widehat{\\underline{\\mathbf{z}}},\\widehat{\\underline{\\mathbf{\\theta}}},\\underline{\\mathbf{x }})\\) and \\(\\widehat{\\underline{\\mathbf{\\theta}}}\\sim p(\\underline{\\mathbf{\\theta}}|\\widehat{\\underline {\\mathbf{s}}},\\widehat{\\underline{\\mathbf{z}}},\\widehat{\\mathbf{A}},\\underline{\\mathbf{x}})\\)
In this algorithm, \(\sim\) represents either \(argmax\), or _generate a sample using_, or still _compute the Mean Field Approximation (MFA)_; a schematic of the loop is sketched below.
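A minimal structural sketch of this two-block scheme (ours, not the paper's; the four callables stand for whichever of the three update rules above is chosen):

```python
def alternate_estimation(x, init, n_iter, draw_z, draw_s, draw_A, draw_theta):
    """Skeleton of the iterative algorithm of Section 3."""
    A, theta, z = init
    s = None
    for _ in range(n_iter):
        z = draw_z(x, A, theta)            # z ~ p(z | A, theta, x)
        s = draw_s(x, z, A, theta)         # s ~ p(s | z, A, theta, x)
        A = draw_A(x, s, z, theta)         # A ~ p(A | s, z, theta, x)
        theta = draw_theta(x, s, z, A)     # theta ~ p(theta | s, z, A, x)
    return s, z, A, theta
```

To implement this algorithm, we need the following expressions: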
* \\(p(\\underline{\\mathbf{s}}|\\mathbf{z},\\mathbf{A},\\underline{\\mathbf{\\theta}},\\underline{\\mathbf{x}}) \\propto p(\\underline{\\mathbf{x}}|\\underline{\\mathbf{s}},\\mathbf{A},\\mathbf{\\Sigma}_{\\epsilon})\\, p(\\underline{\\mathbf{s}}|\\underline{\\mathbf{z}},\\underline{\\mathbf{\\theta}})\\). It is then easy to see that \\(p(\\underline{\\mathbf{s}}|\\mathbf{z},\\mathbf{A},\\underline{\\mathbf{\\theta}},\\underline{\\mathbf{x}})\\) is separable in \\(\\mathbf{r}\\): \\[p(\\underline{\\mathbf{s}}|\\mathbf{z},\\underline{\\mathbf{\\theta}},\\underline{\\mathbf{x}}) = \\prod_{\\mathbf{r}}p(\\mathbf{s}(\\mathbf{r})|z(\\mathbf{r}),\\mathbf{\\theta},\\mathbf{x}(\\mathbf{r}))\\] (10) \\[= \\prod_{\\mathbf{r}}\\mathcal{N}(\\overline{\\mathbf{s}}(\\mathbf{r}),\\mathbf{B}(\\mathbf{r}))\\]
Figure 1: Proposed hierarchical model for hyperspectral images: the sources \\(s_{j}(\\mathbf{r})\\) are hidden variables for the data \\(x_{i}(\\mathbf{r})\\) and the common classification and segmentation variables \\(z(\\mathbf{r})\\) is a hidden variable for the sources.
with
\\[\\left\\{\\begin{array}{l}\\mathbf{B}(\\mathbf{r})=\\left[\\mathbf{A}^{t}\\mathbf{\\Sigma}_{\\epsilon}^{-1} \\mathbf{A}+\\mathbf{\\Sigma}_{z(\\mathbf{r})}^{-1}\\right]^{-1}\\\\ \\bar{\\mathbf{s}}(\\mathbf{r})=\\mathbf{B}(\\mathbf{r})[\\mathbf{A}^{t}\\mathbf{\\Sigma}_{\\epsilon}^{-1}\\mathbf{x} (\\mathbf{r})+\\mathbf{\\Sigma}_{z(\\mathbf{r})}^{-1}\\mathbf{m}_{z(\\mathbf{r})}]\\end{array}\\right. \\tag{11}\\]
In this relation \\(\\mathbf{m}_{z(\\mathbf{r})}\\) is a vector of size \\(n\\) with all elements equal to zero except the \\(k\\)-th element where \\(k=z(\\mathbf{r})\\) and \\(\\mathbf{\\Sigma}_{z(\\mathbf{r})}\\) is a diagonal matrix of size \\(n\\times n\\) with all elements equal to zero except the \\(k\\)-th diagonal where \\(k=z(\\mathbf{r})\\).
\\[\\bullet\\quad p(\\mathbf{z}|\\mathbf{A},\\underline{\\mathbf{\\theta}},\\underline{\\mathbf{x}}) \\propto p(\\underline{\\mathbf{x}}|\\mathbf{z},\\mathbf{A},\\underline{\\mathbf{\\theta}})\\ p(\\mathbf{z }),\\quad\\mbox{where}\\] \\[p(\\underline{\\mathbf{x}}|\\underline{\\mathbf{x}},\\mathbf{A},\\underline{\\bm {\\theta}}) = \\prod_{\\mathbf{r}}p(\\mathbf{x}(\\mathbf{r})|z(\\mathbf{r}),\\mathbf{A},\\underline{\\mathbf{ \\theta}})\\] \\[= \\prod_{\\mathbf{r}}\\mathcal{N}(\\mathbf{A}\\mathbf{m}_{z(\\mathbf{r})},\\mathbf{A}\\mathbf{ \\Sigma}_{z(\\mathbf{r})}\\mathbf{A}^{t}+\\mathbf{\\Sigma}_{\\mathbf{\\epsilon}}).\\]
It is then easy to see that, even if \(p(\underline{\mathbf{x}}|\underline{\mathbf{z}},\mathbf{A},\underline{\mathbf{\theta}})\) is separable in \(\mathbf{r}\), \(p(\underline{\mathbf{z}}|\mathbf{A},\underline{\mathbf{\theta}},\underline{\mathbf{x}})\) is not, and it has the same Markovian structure as \(p(\mathbf{z})\).
\\(\\bullet\\quad p(\\mathbf{A}|\\underline{\\mathbf{z}},\\underline{\\mathbf{\\theta}},\\underline{ \\mathbf{x}})\\propto p(\\underline{\\mathbf{x}}|\\underline{\\mathbf{z}},\\mathbf{A},\\underline{ \\mathbf{\\theta}})\\ p(\\mathbf{A})\\).
It is easy to see that, with a Gaussian or uniform prior for \(p(\mathbf{A})\), we obtain a Gaussian expression for this posterior law. Indeed, with a uniform prior, the posterior mean is equivalent to the posterior mode and to the Maximum Likelihood (ML) estimate \(\widehat{\mathbf{A}}=\arg\max_{\mathbf{A}}\left\{p(\underline{\mathbf{x}}|\mathbf{z},\mathbf{A},\underline{\mathbf{\theta}})\right\}\), whose expression is:
\\[\\widehat{\\mathbf{A}}=\\left[\\sum_{\\mathbf{r}}\\mathbf{x}(\\mathbf{r})\\bar{\\mathbf{s}}^{\\prime}(\\bm {r})\\right]\\left[\\sum_{\\mathbf{r}}\\bar{\\mathbf{s}}(\\mathbf{r})\\bar{\\mathbf{s}}^{\\prime}(\\mathbf{ r})+\\mathbf{B}(\\mathbf{r})\\right]^{-1}\\]
where \\(\\bar{\\mathbf{s}}(\\mathbf{r})\\) and \\(\\mathbf{B}(\\mathbf{r})\\) are given by (11).
\\(\\bullet\\quad p(\\mathbf{R}_{\\epsilon}|\\mathbf{z},\\mathbf{A},\\underline{\\mathbf{\\theta}}, \\underline{\\mathbf{x}})\\propto p(\\underline{\\mathbf{x}}|\\mathbf{z},\\mathbf{A},\\underline{\\bm {\\theta}})\\ p(\\mathbf{R}_{\\epsilon})\\).
It is also easy to show that, with a uniform prior on the logarithmic scale or an inverse gamma prior for the noise variances, the posterior is also an inverse gamma.
\\(\\bullet\\quad p(\\underline{\\mathbf{\\theta}}|\\mathbf{z},\\mathbf{A},\\underline{\\mathbf{x}}) \\propto p(\\underline{\\mathbf{x}}|\\mathbf{z},\\mathbf{A},\\underline{\\mathbf{\\theta}})\\ p( \\underline{\\mathbf{\\theta}})\\)
Again here, using the conjugate priors, Gaussian for the means \(m_{j_{k}}\) and inverse gamma for the variances \(\sigma_{j_{k}}^{2}\), we can easily obtain the expressions of their posterior laws.
Details of the expressions of \\(p(\\mathbf{A}|\\mathbf{z},\\underline{\\mathbf{\\theta}},\\underline{\\mathbf{x}})\\), \\(p(\\mathbf{R}_{\\epsilon}|\\mathbf{z},\\mathbf{A},\\underline{\\mathbf{\\theta}},\\underline{\\mathbf{x }})\\) and \\(p(\\underline{\\mathbf{\\theta}}|\\mathbf{z},\\mathbf{A},\\underline{\\mathbf{x}})\\) as well as their modes and means can be found in [4].
## 4 Computational Considerations and Mean Field Approximation
As we can see, the expression of the conditional posterior of the sources is separable in \\(\\mathbf{r}\\) but this is not the case for the conditional posterior of the hidden variable \\(z(\\mathbf{r})\\). So, even if it is possible to generate samples from this posterior using a Gibbs sampling scheme, the cost of the computation is very high for real applications. The Mean Field Approximation (MFA) then becomes a natural tool for obtaining approximate solutions with lower computational cost.
The mean field approximation is a general method for approximating the expectation of a Markov random variable. The idea consists of neglecting, when considering a pixel, the fluctuations of its neighbouring pixels by fixing them to their mean values [5, 6]. Another interpretation of the MFA is to approximate the non-separable
\\[p(\\mathbf{z}) \\propto \\exp\\left[\\beta\\sum_{\\mathbf{r}}\\sum_{\\mathbf{r}^{\\prime}}\\delta(z(\\mathbf{r })-z(\\mathbf{r}^{\\prime}))\\right]\\] \\[\\propto \\prod_{\\mathbf{r}}p(z(\\mathbf{r})|z(\\mathbf{r}^{\\prime}),\\mathbf{r}^{\\prime}\\in \\mathcal{V}(\\mathbf{r}))\\]
with the following separable one:
\\[q(\\mathbf{z})\\propto\\prod_{\\mathbf{r}}q(z(\\mathbf{r})|\\bar{z}(\\mathbf{r}^{\\prime}),\\mathbf{r}^{ \\prime}\\in\\mathcal{V}(\\mathbf{r}))\\]
where \\(\\bar{z}(\\mathbf{r}^{\\prime})\\) is the expected value of \\(z(\\mathbf{r}^{\\prime})\\) computed using \\(q(zb)\\). This approximate separable expression is obtained in such a way to minimize \\(KL(p,q)\\) for a given class of separable distributions \\(q\\in\\mathbf{Q}\\).
Using now this approximation in the expression of the conditional posterior law \\(p(\\mathbf{z}|\\mathbf{A},\\underline{\\mathbf{\\theta}},\\underline{\\mathbf{x}})\\) gives the separable MFA
\\[q(\\mathbf{z}|\\mathbf{A},\\underline{\\mathbf{\\theta}},\\underline{\\mathbf{x}})=\\prod_{\\mathbf{r}}q(z( \\mathbf{r})|\\bar{z}(\\mathbf{r}^{\\prime}),\\mathbf{r}^{\\prime}\\in\\mathcal{V}(\\mathbf{r}),\\mathbf{A}, \\mathbf{\\theta},\\mathbf{x}(\\mathbf{r}))\\]
where \\(q(z(\\mathbf{r})|\\bar{z}(\\mathbf{r}^{\\prime}),\\mathbf{r}^{\\prime}\\in\\mathcal{V}(\\mathbf{r}),\\mathbf{A },\\mathbf{\\theta},\\mathbf{x}(\\mathbf{r}))=\\\\ p(\\mathbf{x}(\\mathbf{r})|z(\\mathbf{r}),\\mathbf{A},\\mathbf{\\theta})\\ q(z(\\mathbf{r})|\\bar{z}(\\mathbf{r}^{ \\prime}),\\mathbf{r}^{\\prime}\\in\\mathcal{V}(\\mathbf{r}))\\) and \\(\\bar{z}(\\mathbf{r})\\) can be computed by
\\[\\bar{z}(\\mathbf{r})=\\frac{\\sum_{z(\\mathbf{r})}z(\\mathbf{r})\\ q(z(\\mathbf{r})|\\bar{z}(\\mathbf{r}^{ \\prime}),\\mathbf{r}^{\\prime}\\in\\mathcal{V}(\\mathbf{r}),\\mathbf{A},\\mathbf{\\theta},\\mathbf{x}(\\mathbf{r}))}{ \\sum_{z(\\mathbf{r})}q(z(\\mathbf{r})|\\bar{z}(\\mathbf{r}^{\\prime}),\\mathbf{r}^{\\prime}\\in \\mathcal{V}(\\mathbf{r}),\\mathbf{A},\\mathbf{\\theta},\\mathbf{x}(\\mathbf{r}))}\\]
## 5 Simulation Results
The main objectives of these simulations are: first to show that the proposed algorithm gives the desired results, and second to compare its relative performance with respect to some classical methods. For this purpose, first we generated some simulated data according to the data generating model, i.e., starting by generating \(z(\mathbf{r})\), then the sources \(\mathbf{s}(\mathbf{r})\), then using some given spectral signatures obtained from real materials to construct the mixing matrix \(\mathbf{A}\), and finally generating the data \(\mathbf{x}(\mathbf{r})\). Fig. 2 shows an example of such data generated with the following parameters: \(m=32,n=4,K=4\) and SNR\(=\)20 dB, and Fig. 3 shows a comparison of the results obtained by two classical spectral and image classification methods using the classical \(K\)-means with the results obtained by the proposed method. Some other simulated results as well as the results obtained on real data will be given in the near future.
## 6 Conclusion

Classical methods of hyperspectral image analysis either classify the spectra or classify the images in \(K\) classes, where \(K\) is, in general, much less than the number of spectra or the number of observed images. However, these methods neglect either the spatial organization of the spectra or the spectral property of the pixels along the spectral bands. In this paper, we considered the dimensionality reduction problem in hyperspectral images as a source separation problem and presented a Bayesian estimation approach with an appropriate hierarchical prior model for the observations and sources which accounts for both the spectral and spatial structure of the data, and thus gives the possibility to jointly do dimensionality reduction, classification of spectra and segmentation of the images.
## References
* [1] K. Sasaki, S. Kawata, and S. Minami, \"Component analysis of spatial and spectral patterns in multispectral images. I. basics,\" _Journal of the Optical Society of America. A_, vol. 4, no. 11, pp. 2101-2106, 1987.
* [2] L. Parra, C. Spence, A. Ziehe, K.-R. Mueller, and P. Sajda, \"Unmixing hyperspectral data,\" in _Advances in Neural Information Processing Systems 13, (NIPS'2000)_. 2000, pp. 848-854, MIT Press.
* [3] Nadia Bali and Ali Mohammad-Djafari, \"Mean Field Approximation for BSS of images with compound hierarchical Gauss-Markov-Potts model,\" in _MaxEnt05,San Jose CA,US_. Aug. 2005, American Institute of Physics (AIP).
* [4] Hichem Snoussi and Ali Mohammad-Djafari, \"Fast joint separation and segmentation of mixed images,\" _Journal of Electronic Imaging_, vol. 13, no. 2, pp. 349-361, Apr. 2004.
* [5] J. Zhang, \"The mean field theory in EM procedures for blind Markov random field image restoration,\" _IEEE Trans. Image Processing_, vol. 2, no. 1, pp. 27-40, Jan. 1993.
* [6] D. Landgrebe, \"Hyperspectral image data analysis,\" _IEEE Trans. Signal Processing_, vol. 19, pp. 17-28, 2002.
Figure 4: Real data: a) Spectral classification using \\(K\\)-means, b) Image classification using \\(K\\)-means, c) Proposed method. Upper row shows estimated \\(z(\\mathbf{r})\\) and lower row the estimated spectra.
Figure 3: Dimensionality reduction by different methods: a) Spectral classification using \\(K\\)-means, b) Image classification using \\(K\\)-means, c) Proposed method. Upper row shows estimated \\(z(\\mathbf{r})\\) and lower row the estimated spectra. These results have to be compared to the original \\(z(\\mathbf{r})\\) and spectra in previous figure.
Figure 2: Two examples of the data generating process: a) \(z(\mathbf{r})\) b) spectral signatures used to construct the mixing matrix \(\mathbf{A}\) and c) \(m=32\) images. Upper row: \(K=4\) and image sizes (64x64). Lower row: \(K=8\) and image sizes (128x128).

Abstract: Hyperspectral images can be represented either as a set of images or as a set of spectra. Spectral classification, segmentation and data reduction are the main problems in hyperspectral image analysis. In this paper we propose a Bayesian estimation approach with an appropriate hierarchical model with hidden Markovian variables which gives the possibility to jointly do data reduction, spectral classification and image segmentation. In the proposed model, the desired independent components are piecewise homogeneous images which share the same common hidden segmentation variable. Thus, the joint Bayesian estimation of this hidden variable as well as the sources and the mixing matrix of the source separation problem gives a solution for all the three problems of dimensionality reduction, spectra classification and segmentation of hyperspectral images. A few simulation results illustrate the performance of the proposed method compared to other classical methods usually used in hyperspectral image processing.
Ali Mohammad-Djafari, Adel Mohammadpoor and Nadia Bali
Laboratoire des Signaux et Systèmes,
Unité mixte de recherche 8506 (CNRS-Supélec-UPS)
Supélec, Plateau de Moulon, 3 rue Joliot Curie, 91192 Gif-sur-Yvette, France.
arxiv-format/0705_2690v3.md

# Hot QCD equations of state and relativistic heavy ion collisions
Vinod Chandra
[email protected] Department of Physics, IIT Kanpur, Kanpur-208 016, India.
Ravindra Kumar
[email protected] Department of Physics, IIT Kanpur, Kanpur-208 016, India.
V. Ravishankar
[email protected] Department of Physics, IIT Kanpur, Kanpur-208 016, India
Raman Research Institute, Bangalore, 560080, India
## I Introduction
Recent experimental results [1; 2; 3; 4] indicate that the quark gluon plasma has already been produced at RHIC, and that its behavior is not close to that of an ideal gas. Indeed, measurements of flow parameters [1] and observations of jet quenching [5] have stimulated the theoretical interpretation that the QGP behaves like a nearly perfect fluid [6], characterized by a small value of the viscosity to entropy density ratio, lying in the range \(0.1\sim 0.3\) [7; 8; 9]; this range may be contrasted with the corresponding value for liquid Helium (above the superfluid transition temperature), which is close to ten [10]. These observations signal the fact that the deconfined phase is strongly interacting, and are consistent with the lattice simulations [11], which predict a strongly interacting behavior even at temperatures which are a few \(T_{c}\). In an attempt to appreciate this surprising result, interesting analogies have been drawn with the AdS/CFT correspondence [10] and also with some strongly coupled classical systems [12]. In any case, the emergence of the strongly interacting behaviour puts into doubt the credibility of a large body of analyses which are based on ideal or nearly ideal behaviour of QGP.
In this context, there is an interesting attempt by Arnold and Zhai [13] and Zhai and Kastening [14] who have determined the equation of state (EOS) of interacting quarks and gluons upto \(O(g^{5})\) in the coupling constant. This strictly perturbative EOS, which we henceforth denote by EOS1, has been improved upon by Kajantie et al. [15; 16], who have incorporated the contributions from the nonperturbative scales _viz._ \(gT\) and \(g^{2}T\) and determined the EOS upto \(O(g^{6}ln(\frac{1}{g}))\)[18]. The latter will be denoted by EOS2. Subsequent studies [19; 20; 21] have emphasized the relevance of the above EOS to study quark gluon plasma. One would naturally wish to compare these EOS with (the fully non-perturbative) lattice results. EOS2 has been found [15] to be in qualitative agreement with the lattice results. It is not without interest to explore further whether this qualitative agreement can be further quantified, and whether the HTL improved EOS can describe the QGP produced in heavy ion collisions. It is worthwhile noting the earlier attempts [22; 23; 24] that have been made to determine thermodynamic quantities such as entropy and the specific heat \(c_{v}\) in improved perturbative approaches to QGP.
On the other hand, it is by now well established that the semiclassical approach is a convenient way to study the bulk properties of QGP [25; 26; 27; 28; 29], since it automatically incorporates the HTL effects [28; 29]. There is a wealth of results which have been obtained within this framework [25; 26; 27], where the nonperturbative features manifest as effective mean color fields. These color fields have the dual role of producing the soft and semi-hard partons, apart from modulating their interactions. The emergence of such effective field degrees of freedom, together with a classical transport, has been indicated earlier by Blaizot and Iancu [30].
In this context, it is pertinent to ask if one could use heavy ion collisions to distinguish the various EOS and pick out the right one, by employing the semiclassical framework involving an appropriate kinetic equation. The purpose of this paper is to explore such possibilities. As a first step in this direction, we shall show how the distribution functions underlying the proposed EOS can be extracted with a minimal ansatz, _viz_, effective chemical potentials for quarks and gluons. Once the distribution function is obtained, it can be used to study the bulk properties of the system such as chromo responses, including the ubiquitous Debye mass. Postponing all the other applications to a future work, we shall concentrate on determining the Debye mass through this procedure. As mentioned, we focus on EOS1 and EOS2. Both of them have been proposed for the case when the baryon number density vanishes. The corresponding chemical potentials are hence set to zero. There exist generalizations of the above EOS, proposed by Vuorinen [17] and more recently by Ipp et al. [18], which allow for a finite baryon number. The two sets are applicable to distinct physical situations; the former (EOS1 and EOS2) are relevant to the QGP in the central midrapidity region of URHIC, while the works of Ref. [17; 18] are applicable to peripheral collisions and/or when the so called nuclear transparency is only partial. An application of the above EOS to URHIC will be taken up separately.
This paper is organized as follows. In the next section, we extract the distribution functions for the gluons and the quarks from EOS1 and EOS2. We consider the pure gluonic case separately from the full QCD, by first setting \\(N_{f}=0\\). The (interacting) quark sector is then dealt with. In section III, the Debye mass is determined by employing the semiclassical method developed by Kelly et.al [28; 29]. In section III(B), we compare our results on screening length for EOS1 and EOS2 with the recent lattice results. We summarize the results and conclude in section IV. The appendix contains some details of calculations which are not explicitly given in the main text; it also lists some useful integrals.
## II Extraction of the distribution functions
Recently Arnold and Zhai [13] and Zhai and Kastening [14] have derived an equation of state (EOS1) for high temperature QCD up to \(O(g^{5})\). EOS1 reads,
\\[P^{(1)} = \\frac{8\\pi^{2}}{45\\beta^{4}}\\bigg{\\{}(1+\\frac{21N_{f}}{32})-\\frac {15}{4}(1+\\frac{5N_{f}}{12})\\frac{\\alpha_{s}}{\\pi}+30(1+\\frac{N_{f}}{6})( \\frac{\\alpha_{s}}{\\pi})^{\\frac{3}{2}}\\] \\[+\\bigg{[}(237.2+15.97N_{f}-0.413N_{f}^{2}+\\frac{135}{2}(1+\\frac{N_ {f}}{6})\\ln(\\frac{\\alpha_{s}}{\\pi}(1+\\frac{N_{f}}{6}))\\] \\[-\\frac{165}{8}(1+\\frac{5N_{f}}{12})(1-\\frac{2N_{f}}{33})\\ln[ \\frac{\\bar{\\mu}_{\\rm MS}\\beta}{2\\pi}]\\bigg{]}(\\frac{\\alpha_{s}}{\\pi})^{2}\\] \\[+(1+\\frac{N_{f}}{6})^{\\frac{1}{2}}\\bigg{[}-799.2-21.99N_{f}-1.926 N_{f}^{2}\\] \\[+\\frac{495}{2}(1+\\frac{N_{f}}{6})(1+\\frac{2N_{f}}{33})\\ln[\\frac{ \\bar{\\mu}_{\\rm MS}\\beta}{2\\pi}]\\bigg{]}(\\frac{\\alpha_{s}}{\\pi})^{\\frac{3}{2} }\\bigg{\\}}+O(\\alpha_{s})^{3}\\ln(\\alpha_{s})).\\]
EOS1 has been subsequently improved by Kajantie et al. [15; 16], who proposed another equation of state (EOS2) by improving the accuracy to the next order in the coupling constant, and also included the HTL effects; recall that the latter are essentially nonperturbative, and contain contributions from the scales \(T\), \(gT\), and \(g^{2}T\). EOS2, which is thus determined upto \(O(g^{6}ln\frac{1}{g})\), has the form
\\[P^{(2)} = P^{(1)}+\\frac{8\\pi^{2}}{45}T^{4}\\bigg{[}1134.8+65.89N_{f}+7.653N_ {f}^{2} \\tag{2}\\] \\[-\\frac{1485}{2}\\left(1+\\frac{1}{6}N_{f}\\right)\\left(1-\\frac{2}{ 33}N_{f}\\right)\\ln(\\frac{\\bar{\\mu}_{\\rm MS}}{2\\pi T})\\bigg{]}\\left(\\frac{ \\alpha_{s}}{\\pi}\\right)^{3}\\ln\\frac{1}{\\alpha_{s}}\\,.\\]
In the above expressions, \\(N_{f}\\) is the number of fermions, \\(\\alpha_{s}=g^{2}/(4\\pi)\\) is the strong coupling constant and \\(\\bar{\\mu}_{\\rm MS}\\) is the renormalization scale parameter in the \\(\\overline{\\rm MS}\\) scheme.
Note that \\(\\alpha_{s}\\) runs with \\(\\beta\\) and \\(\\bar{\\mu}_{\\rm MS}\\). As remarked, the utility of this EOS in the context of QGP thermodynamics has been discussed earlier by Rebhan[21].
We now set out to determine equilibrium distribution functions \(<n_{g,f}>\) for the gluons and the quarks such that they would yield the EOS given above. The ansatz for the determination involves retaining the ideal distribution forms, with the chemical potentials \(\mu_{g}\) and \(\mu_{f}\) being free parameters. Note that for the massless quarks (\(u\) and \(d\)), which we consider to constitute the bulk of the plasma, \(\mu\equiv 0\) if they were not interacting. This approach is of course not novel, since it underlies many of the ideas that attempt to describe the interaction effects in terms of quasiparticle degrees of freedom. In the present context, we refer the reader to Ref. [31; 32], where an attempt is made to describe the lattice results in terms of an effective mass for the partons.
We pause to note that the chemical potentials which we introduce are not the same as those which yield a nonzero baryon number density, as e.g., in [17; 18]. Here, the chemical potential merely serves to map the interacting quarks and gluons at zero baryon number chemical potential to noninteracting quasiparticles, _viz._, the dressed quarks and gluons. Their interpretation is, therefore, more akin to the effective mass, albeit as functions of the renormalization scale and temperature, as we show below. Thus, the baryon number density of the plasma continues to vanish.
As the first step in our approach, we express the EOS in the form
\\[P=P_{g}^{I}+P_{q}^{I}+\\Delta P_{g}+\\Delta P_{f} \\tag{3}\\]
The first two terms in the RHS of Eq.(3) are identified with the distributions of an ideal gas of quarks and gluons. The effects of the interaction in pure QCD are represented by \(\Delta P_{g}\) and the residual interaction effects, by \(\Delta P_{f}\). For the EOS which we are interested in, the identification of the above terms is straightforward. \(\Delta P_{g}\) can be identified by first setting \(N_{f}=0\) and then subtracting the ideal part. The residual term is naturally identified as \(\Delta P_{f}\) after subtracting the ideal part for quarks. In general, the EOS (see Eq.(1) and Eq.(2)) has the form
\\[P = \\frac{8\\pi^{2}}{45}\\;\\beta^{-4}\\;[1+A(\\alpha_{s})+B(\\alpha_{s}) \\ln\\frac{\\bar{\\mu}_{\\rm MS}\\beta}{2\\pi}+\\frac{21}{32}N_{f} \\tag{4}\\] \\[+C(\\alpha_{s},N_{f})+D(\\alpha_{s},N_{f})\\ln\\frac{\\bar{\\mu}_{\\rm MS }\\beta}{2\\pi}]\\]
where
\[P_{g}^{I} = \frac{8\pi^{2}}{45}\beta^{-4}\] \[P_{f}^{I} = \frac{8\pi^{2}}{45}\beta^{-4}\,\frac{21}{32}N_{f}\] \[\Delta P_{g} = \frac{8\pi^{2}}{45}\beta^{-4}\Big[A(\alpha_{s})+B(\alpha_{s})\ln\frac{\bar{\mu}_{\rm MS}\beta}{2\pi}\Big]\] \[\Delta P_{f} = \frac{8\pi^{2}}{45}\beta^{-4}\Big[C(\alpha_{s},N_{f})+D(\alpha_{s},N_{f})\ln\frac{\bar{\mu}_{\rm MS}\beta}{2\pi}\Big] \tag{5}\]
For EOS1 the coefficients \\(A,B,C,D\\) are denoted with a prime and are given by
\\[A^{\\prime}(\\alpha_{s}(N_{f})) = -\\frac{15}{4}\\frac{\\alpha_{s}}{\\pi}+30(\\frac{\\alpha_{s}}{\\pi})^{ \\frac{3}{2}}+(237.2+\\frac{135}{2}\\log(\\frac{\\alpha_{s}}{\\pi})(\\frac{\\alpha_{s }}{\\pi})^{2}-799.2(\\frac{\\alpha_{s}}{\\pi})^{\\frac{5}{2}}\\] \\[B^{\\prime}(\\alpha_{s}(N_{f})) = -\\frac{165}{8}(\\frac{\\alpha_{s}}{\\pi})^{2}+\\frac{495}{2}(\\frac{ \\alpha_{s}}{\\pi})^{\\frac{5}{2}}\\] \\[C^{\\prime}(\\alpha_{s}(N_{f}),N_{f}) = -\\frac{15}{4}(1+\\frac{5}{12}N_{f})\\frac{\\alpha_{s}}{\\pi}+30((1+ \\frac{1}{6}N_{f})(\\frac{\\alpha_{s}}{\\pi})^{\\frac{3}{2}}+[237.2+ \\tag{6}\\] \\[15.97N_{f}-0.413N_{f}^{2}+\\frac{135}{2}((1+\\frac{1}{6}N_{f})\\ln[ \\frac{\\alpha_{s}}{\\pi}(1+\\frac{1}{6}N_{f})](\\frac{\\alpha_{s}}{\\pi})^{2}+\\] \\[(1+\\frac{1}{6}N_{f})^{(}1/2)[-799.2-21.99N_{f}-1.926N_{f}^{2}]( \\frac{\\alpha_{s}}{\\pi})^{\\frac{5}{2}}-A^{\\prime}(\\alpha_{s}(N_{f}))\\] \\[D^{\\prime}(\\alpha_{s}(N_{f}),N_{f}) = -\\frac{165}{8}(1+\\frac{5}{12}N_{f})(1-\\frac{2}{33}N_{f})(\\frac{ \\alpha_{s}}{\\pi})^{2}\\] \\[+\\frac{495}{2}(1+\\frac{1}{6}N_{f})(1-\\frac{2}{33})(\\frac{\\alpha_ {s}}{\\pi})^{\\frac{5}{2}}-B^{\\prime}(\\alpha_{s}(N_{f}))\\]
whereas for EOS2 the coefficients can be written in terms of the above primed coefficients for EOS1, as
\\[A(\\alpha_{s}(N_{f})) = A^{\\prime}(\\alpha_{s}(N_{f}))+1134.8(\\frac{\\alpha_{s}}{\\pi})^{3} \\log(\\frac{1}{\\alpha_{s}})\\] \\[B(\\alpha_{s}(N_{f})) = B^{\\prime}(\\alpha_{s}(N_{f}))-\\frac{1485}{2}(\\frac{\\alpha_{s}}{ \\pi})^{3}\\log(\\frac{1}{\\alpha_{s}})\\] \\[C(\\alpha_{s}(N_{f}),N_{f}) = C^{\\prime}(\\alpha_{s}(N_{f}),N_{f})+(65.89N_{f}+7.653N_{f}^{2})( \\frac{\\alpha_{s}}{\\pi})^{3}\\log(\\frac{1}{\\alpha_{s}})\\]\\[D(\\alpha_{s}(N_{f}),N_{f}) = D^{\\prime}(\\alpha_{s}(N_{f}),N_{f})-\\frac{1485}{2}[(1+\\frac{1}{6}N _{f})(1-\\frac{2}{33}N_{f})-1](\\frac{\\alpha_{s}}{\\pi})^{3}\\log(\\frac{1}{\\alpha_{s }}).\\]
We seek to parametrize the contributions from all the non-ideal coefficients in terms of the chemical potentials \(\mu_{g}\) and \(\mu_{f}\) for gluons and quarks respectively. Since the EOS have been proposed at high \(T\), with their validity being at temperatures greater than \(2T_{c}\)[33], we treat the dimensionless quantity \(\tilde{\mu}_{g,f}\equiv\beta\mu_{g,f}\) perturbatively. This approximation needs to be implemented self-consistently, and accordingly, we expand the grand canonical partition functions for gluons and quarks as a Taylor series in \(\tilde{\mu}_{g,f}\). We obtain the following expressions:
\\[\\log(Z_{g}) = \\sum_{k=0}^{\\infty}(\\tilde{\\mu}_{g})^{k}\\partial^{k}_{\\tilde{\\mu }_{g}}\\log(Z_{g})|_{(}\\tilde{\\mu}_{g}=0)\\] \\[\\log(Z_{q}) = \\sum_{k=0}^{\\infty}(\\tilde{\\mu}_{f})^{k}\\partial^{k}_{\\tilde{\\mu }_{f}}\\log(Z_{f})|_{(\\tilde{\\mu}_{f}=0)} \\tag{8}\\]
where, \\(Z_{g}\\) and \\(Z_{f}\\) are given by
\\[Z_{g} = \\prod_{p}\\frac{1}{(1-\\exp(-\\beta\\epsilon_{p}+\\tilde{\\mu}_{g}))},\\] \\[Z_{q} = \\prod_{p}\\frac{1}{(1+\\exp(-\\beta\\epsilon_{p}+\\tilde{\\mu}_{f}))}. \\tag{9}\\]
We determine \\(Z_{g}\\) and \\(Z_{q}\\) (defined in Eq.(8)) up to \\(O(\\tilde{\\mu}_{g,f})^{3}\\). The truncation is seen to yield an accuracy of \\(\\sim\\) ten percent when we consider EOS1. However, for the more physical EOS2, the agreement is within one percent. The gluon chemical potential \\(\\mu_{g}\\) gets determined by
\\[\\frac{A_{g}^{(3)}}{3!}(\\tilde{\\mu}_{g})^{3}+\\frac{A_{g}^{(2)}}{2!}(\\tilde{\\mu} _{g})^{2}+A_{g}^{(1)}\\tilde{\\mu}_{g}-\\Delta P_{g}\\beta V=0. \\tag{10}\\]
Similarly, the equation determining \\(\\mu_{f}\\) reads:
\\[\\frac{A_{f}^{(3)}}{3!}(\\tilde{\\mu}_{f})^{3}+\\frac{A_{f}^{(2)}}{2!}(\\tilde{\\mu }_{f})^{2}+A_{f}^{(1)}\\tilde{\\mu}_{f}-\\Delta P_{f}\\beta V=0 \\tag{11}\\]
The coefficients \\(A_{g}^{(n)}\\) and \\(A_{f}^{(n)}\\) are given by
\\[A_{g}^{(n)}=\\partial^{n}_{\\tilde{\\mu}_{g}}\\log(Z_{g})|_{(\\tilde {\\mu}_{g}=0)}\\] \\[A_{f}^{(n)}=\\partial^{n}_{\\tilde{\\mu}_{f}}\\log(Z_{f})|_{(\\tilde {\\mu}_{f}=0)} \\tag{12}\\]
The explicit forms of \(A^{(n)}\) up to \(n=3\) are listed in the Appendix.
### Explicit evaluation of the chemical potential
Before we discuss the solution of Eq.(10) and Eq.(11), it is instructive to evaluate \\(\\mu_{g,f}\\) with just the linear and quadratic terms for the sake of comparison. In the linear order, the solutions read
\\[\\tilde{\\mu}_{g} = \\frac{8\\pi^{2}}{45(A_{g}^{(1)})}[A(\\alpha_{s})+B(\\alpha_{s})\\ln \\frac{\\bar{\\mu}_{\\rm MS}\\beta}{2\\pi}] \\tag{13}\\] \\[\\tilde{\\mu}_{f} = \\frac{8\\pi^{4}}{45(A_{f}^{(1)})}[C(\\alpha_{s})+D(\\alpha_{s})\\ln \\frac{\\bar{\\mu}_{\\rm MS}\\beta}{2\\pi}] \\tag{14}\\]
while, in the next to the leading order, the solution has the form
\\[\\tilde{\\mu}_{g}=-\\frac{A_{g}^{(1)}}{A_{g}^{(2)}}\\pm\\sqrt{(}(\\frac{A_{g}^{(1)} }{A_{g}^{(2)}})^{2}+C_{g}) \\tag{15}\\]
\\[\\tilde{\\mu}_{f}=-\\frac{A_{f}^{(1)}}{A_{f}^{(2)}}\\pm\\sqrt{(}(\\frac{A_{f}^{(1)} }{A_{f}^{(2)}})^{2}+C_{f}), \\tag{16}\\]
where, \\(C_{g}=2\\Delta P_{g}\\beta V/A_{g}^{(2)}\\) and \\(C_{f}=2\\Delta P_{f}\\beta V/A_{f}^{(2)}\\). Finally, the exact solutions can be obtained by using well known algebraic techniques. Since the explicit algebraic solutions do not have an illuminating form, we show the solutions graphically instead in the next subsection.
The distribution functions for the gluons and the quarks get determined, in terms of the chemical potentials, through
\\[\\langle n_{g}\\rangle_{p}=\\frac{\\exp(-\\beta\\epsilon_{p}+\\tilde{\\mu }_{g})}{1-\\exp(-\\beta\\epsilon_{p}+\\tilde{\\mu}_{g})}\\] \\[\\langle n_{f}\\rangle_{p}=\\frac{\\exp(-\\beta\\epsilon_{p}+\\tilde{ \\mu}_{f})}{1+\\exp(-\\beta\\epsilon_{p}+\\tilde{\\mu}_{f})} \\tag{17}\\]
The extraction of the distribution functions is, nevertheless, incomplete. For, the EOS, and hence the chemical potentials, depend on the renormalization scale. On the other hand, the physical observables should be scale independent. We circumvent the problem by trading off the dependence on \(\bar{\mu}_{\rm MS}\) for a dependence on the critical temperature \(T_{c}\). To that end, we exploit the temperature dependence of the coupling constant \(\alpha_{s}(T)\)[33; 34] and of the renormalization scale:
\\[\\bar{\\mu}_{\\rm MS}(T) = 4\\pi T\\exp(-(\\gamma_{E}+1/22)\\] \\[\\alpha_{s}(T) = \\frac{1}{8\\pi b_{0}\\log(T/\\lambda_{T})}=\\alpha_{s}(\\mu^{2})|_{ \\mu=\\tilde{\\mu}_{\\rm MS}(T)}\\] \\[\\lambda_{T} = \\frac{\\exp(\\gamma_{E}+1/22)}{4\\pi}\\lambda_{MS}\\]where, \\(b_{0}=33-2N_{f}/12\\pi\\) and \\(\\lambda_{MS}=1.14T_{c}\\). With this step, the distribution functions get determined completely, and are obtained as functions of \\(T/T_{c}\\).
We note that the results presented below, being valid for \(T>2T_{c}\), need to be supplemented by a similar analysis for EOS which are valid for \(T\sim T_{c}\). Such an analysis does indeed exist along the lines of this paper [32], considering the lattice EOS. Its authors do not determine the Debye mass, but focus on the impact of the EOS on the flow parameters in heavy ion collisions.
Although EOS1 and EOS2 have been computed within the framework of weak coupling techniques, they give convergent results for temperature ranges above \(5T_{c}\). We shall see in the next subsection that these equations of state are far from their ideal behaviour, to the extent that they can be utilized to make definite predictions for QGP.
### Hot QCD EOS vs Ideal EOS
As a warm up, we compare EOS1 and EOS2 with the ideal EOS by plotting the ratio \(R\equiv\frac{P}{P_{q}^{I}+P_{g}^{I}}\) as a function of temperature in Figs. 1 and 2.
The most striking feature that we see is the large sensitivity to the inclusion of the \(g^{6}ln(\frac{1}{g})\) contributions. It is most pronounced in the behaviour of pure QCD, where major qualitative and quantitative differences appear: (i) for EOS1, \(R\) increases with \(T\), in contrast to EOS2, where it decreases from above, approaching the same asymptotic value for large \(T\); (ii) interestingly, the non-perturbative (and higher order) corrections make the system less non-ideal. Indeed, EOS1 yields values of \(R\) which are \(10-45\) percent away from the ideal value 1, in contrast to EOS2 (see Fig. 2), for which \(R\) is only \(2-8\) percent away from the ideal value. Incidentally, the above observation implies that the expansion in Eq.(8) works better for EOS2 than for EOS1. This will be reflected later in the behavior of the effective chemical potentials with temperature. We shall further see that the smaller the value of \(|\tilde{\mu}_{g,f}|\), the better the approximation.
### The chemical potentials
The variation of the effective chemical potentials with renormalization scale at a fixed temperature has already been studied in reference [14]. We prefer to recast it into a dependence of \(\tilde{\mu}_{g,f}\) on \(\frac{T}{T_{c}}\) (see the previous subsection), since it is more relevant to the study of QGP in heavy ion collisions. This is shown in Figs. \(3-8\), where the contributions coming from the linear, quadratic and cubic approximations in the Taylor series (Eq.(8)) are individually displayed, for both EOS1 and EOS2. These figures exhibit, in essence, all the interaction effects.
A number of features emerge from examining Figs. \(3-8\). Consider EOS1 first. Here, the linear approximation does reasonably well for pure QCD, but fails badly in the quark sector. The chemical potential is negative in both sectors, and approaches the ideal value asymptotically from below. In contrast, EOS2 leads to a different behaviour: \(\tilde{\mu}_{g}\) starts with a small positive value at \(T\sim 2T_{c}\), stays essentially so until \(T\sim 13T_{c}\), and then switches sign to acquire a small negative value. Since the magnitude remains less than 0.1 throughout, the deviation from the ideal behaviour is minimal. \(\tilde{\mu}_{f}\) remains negative (with the maximum magnitude \(\sim 0.25\) at \(T=2T_{c}\)), which is about a factor four smaller in comparison with the corresponding value from EOS1. The interaction effects get manifestly stronger as we increase the number of flavors.
It is significant that the ideal value is not reached even at \(T\sim 10T_{c}\), which indicates that the phase remains interacting.
Figure 1: (color online) Behaviour of R with temperature for EOS1
Figure 2: (color online) Behaviour of R with temperature for EOS2
We also note that our method of extracting the chemical potential works more efficiently for EOS2, as indicated by small corrections from higher order terms to the linear estimate.
## III The Debye mass and screening length
The extraction of the equilibrium distribution functions affords a determination of the Debye mass via the semiclassical transport theory [29]. The Debye mass controls the number of bound states in heavy \(q\bar{q}\) systems and yields the extent of \(J/\Psi\) suppression in heavy ion collisions, provided that we have a reliable estimate of the temperature of the plasma. Even otherwise, the qualitative significance of the Debye mass cannot be overestimated, since the deconfined phase remains strongly interacting even at large \(T\).
The determination of \\(m_{D}\\) is straightforward if we employ the classical transport theory [29]. It is simply given by
\\[M_{g,f}^{2}=g^{\\prime 2}C_{g,f}\\int\\frac{d}{dp_{0}}<n_{g,f}>d^{3}p. \\tag{19}\\]
The above expression has to be used cautiously, though. The coupling constant \(g^{\prime}\) in Eq.(19) has a phenomenological character, and should not be confused with the fundamental constant \(g\) appearing in the EOS. Keeping this in mind, we recall that if the plasma were comprised of ideal massless partons, the Debye mass would be given by
Figure 4: (color online) Effective chemical potentials at the linear order for EOS2
Figure 5: (color online) Effective chemical potentials at the quadratic order for EOS1
Figure 3: (color online) Effective chemical potentials at the linear order for EOS1
Figure 6: (color online) Effective chemical potentials at the quadratic order for EOS2
\\[M_{id}^{2}=M_{g,id}^{2}+M_{f,id}^{2}\\equiv\\frac{(N+N_{f}/2)}{3}g^{\\prime 2} \\beta^{-2}. \\tag{20}\\]
The hot QCD EOS modify the above expression; it is easy to see from Eqs. (19) and (20) that the new Debye masses, scaled with respect to their respective ideal values, get determined in terms of the standard PolyLog functions[43] by
\\[\\frac{M_{g,hot}^{2}}{M_{g,id}^{2}}=\\frac{6}{\\pi^{2}}PolyLog[2,\\exp( \\tilde{\\mu}_{g})]\\equiv F_{1}(\\tilde{\\mu}_{g})\\] \\[\\frac{M_{f,hot}^{2}}{M_{f,id}^{2}}=-\\frac{12}{\\pi^{2}}PolyLog[2,- \\exp(\\tilde{\\mu}_{f})]\\equiv F_{2}(\\tilde{\\mu}_{f}). \\tag{21}\\]
Consequently, the expression for the total relative mass is obtained as
\\[\\frac{M_{hot}^{2}}{M_{id}^{2}}=\\frac{(\\frac{N}{3}F_{1}(\\tilde{\\mu}_{g})+\\frac{ N_{f}}{6}F_{2}(\\tilde{\\mu}_{f}))}{(N/3+N_{f}/6)}. \\tag{22}\\]
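Numerically, Eqs. (21)-(22) require only the dilogarithm; a sketch using mpmath's polylog (our choice of library, not the paper's):

```python
import math
from mpmath import polylog

def debye_ratio_sq(mu_g, mu_f, N=3, Nf=2):
    """M_hot^2 / M_id^2 from Eqs. (21)-(22). Li_2(e^{mu_g}) is real only
    for mu_g <= 0; for the small positive mu_g that EOS2 allows we keep
    the real part, which is our choice, not the paper's."""
    F1 = 6.0 / math.pi**2 * complex(polylog(2, math.exp(mu_g))).real
    F2 = -12.0 / math.pi**2 * complex(polylog(2, -math.exp(mu_f))).real
    return (N / 3.0 * F1 + Nf / 6.0 * F2) / (N / 3.0 + Nf / 6.0)
```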
It is, however, more convenient to plot the inverse Debye mass, i.e., the screening length, as a function of \(T/T_{c}\).
### Relative screening lengths
We first establish the notations. Let \(\lambda_{h}\) denote the screening length generated by the hot EOS. Let \(\lambda_{id}\) be the screening length of an ideal QGP. It is convenient to consider also the contribution coming from the pure QCD sector, whose screening lengths we denote by \(\lambda_{h}^{g}\) and \(\lambda_{id}^{g}\) respectively.
The behaviour of the screening lengths is shown in Figs. 9-12. As in the case of the chemical potentials, the dependence on the order of perturbation is striking here as well. For EOS1, where the contributions upto \(O(g^{5})\) are included, the screening lengths in the full QCD as well as pure QCD remain nonzero. The gluonic sector dominates over the quark sector, as may be seen in Fig. 9, where we plot the ratio \({\cal R}_{h/g}=\lambda_{h}/\lambda_{h}^{g}\), which is in excess of 0.7 throughout. Note, however, that the relative dominance gets weaker as we increase the number of flavours. Fig. 10 shows the variation of the ratio \({\cal R}_{h/id}=\lambda_{h}/\lambda_{id}\) as a function of temperature. Interestingly, the interaction is seen to weaken the screening, and so does an increase in the number of flavors.
These results are in sharp contrast with the case of EOS2, which we recall has nonperturbative \(O(g^{6}ln(\frac{1}{g}))\) contributions. These are shown in Figs. 11 and 12. It is clear from Fig. 11 that the contribution from the pure gluonic sector saturates the contribution to the screening
Figure 8: (color online) Effective chemical potentials at the cubic order for EOS2
Figure 7: (color online) Effective chemical potentials at the cubic order for EOS1
Figure 9: (color online) The relative Debye Screening Length \\({\\cal R}_{h/g}\\) for EOS1 as a function of temperature. Note that it is \\(\\geq\\) 0.7.
all the way upto temperatures \(T\sim 13T_{c}\), and drops sharply thereafter. This feature is reinforced by Fig. 12, where the ratio \({\cal R}_{h/id}\) stays at zero between \(2T_{c}\) and \(12-13T_{c}\). It is of purely academic interest that the screening length should become non-zero beyond \(12T_{c}\).
It appears that perfect screening is indeed the strongest prediction of EOS2 and should be most easily tested in heavy ion collisions, where temperatures upto \(3T_{c}\) are expected at LHC. This is in sharp contrast with the assumptions of a near ideal behaviour, and also with some theoretical analyses which in fact propose an enhanced production of \(J/\Psi\) at LHC energies [40]. We, therefore, attempt to compare these predictions with the lattice results below.
### Comparison with the lattice results
In this subsection, we compare our results on the screening length for EOS1 and EOS2 with the lattice results. Lattice computations extract the screening lengths from the quark-antiquark free energies. To be concrete, we make the comparison with three distinct values of the coupling constant, \(g^{\prime}=0.3,\,0.5,\,0.8\).
Further, we consider three cases: (i) pure QCD, (ii) \(N_{F}=2\) and (iii) \(N_{F}=3\). To facilitate a proper comparison, we take the respective transition temperatures to be \(T_{c}=270MeV,\ \ 203MeV\) and \(195MeV\), as given by lattice computations. The comparison is shown only with EOS1, since EOS2 predicts absolute screening in the range \(2T_{c}<T<12T_{c}\) that we are interested in. The results are shown in Figs. 13-15.
Figure 11: (color online) The relative Debye Screening Length \\({\\cal R}_{h/g}\\) for EOS2 as a function of temperature. Note that it stays at 1 all the way upto \\(13T_{c}\\).
Figure 10: (color online) The relative screening length\\({\\cal R}_{h/id}\\) for EOS1 as a function of temperature.
Figure 13: (color online) Behaviour of \\(2/M_{D}\\) with \\(T/T_{c}\\) for \\(g^{\\prime}=.3\\) for EOS1. Note that \\(2/M_{D}\\) is measured in _fm_
Figure 12: (color online) The relative screening length \\({\\cal R}_{h/id}\\) for EOS2 as a function of temperature.
As observed, the screening weakens with increasing \(g^{\prime}\); moreover, the screening weakens with the increase in the number of flavours as well. For an explicit comparison, we consider the results reported by Kaczmarek and Zantow [41], who determine the screening length by identifying it essentially with the first moment of the \(q\bar{q}\) free energy. Their results are displayed in Fig. 2 of [41], to which we refer henceforth. Interestingly, the same qualitative features are exhibited by EOS1 and the lattice results, in both aspects, _viz_, the dependence on the coupling constant as well as on the number of flavours. However, the agreement fails to be quantitative. The lattice results predict screening lengths which are smaller in value, except for \(N_{F}=3\), than the EOS1 results. Indeed, the lattice screening length is \(\sim 0.7fm\) in the vicinity of \(T_{c}\), and drops to \(\sim 0.4fm\) close to \(2T_{c}\). It is evident from Figs. 13-15 that the results of EOS1 are \(3-10\) times higher in value. Any better agreement with a further increase in the value of \(g^{\prime}\) is ruled out since \(g^{\prime}\leq 1\) necessarily.
## IV Conclusions and outlook
In conclusion, we have extracted the distribution functions for gluons and quarks from two equations of state, in terms of effective chemical potentials for the partons. The chemical potentials are shown to be highly sensitive to the inclusion of \(O(g^{6}ln(1/g))\) contributions, a sensitivity exhibited most vividly by the screening length. Surprisingly, EOS2, which has interactions upto \(O(g^{6}ln(\frac{1}{g}))\), shows less nonideal behaviour compared to EOS1 (which has contributions upto \(O(g^{5})\)). Equally strikingly, the plasma corresponding to EOS2 is predominantly gluonic, in the sense that the Debye mass from the gluonic sector diverges in the range \(2T_{c}\leq T\leq 12T_{c}\). This result is in contrast with the less precise EOS1, where the gluonic contribution is not that overwhelming.
To place our analysis in perspective, we note that it is based on but two equations of state, neither of which has full non-perturbative contributions. Nevertheless, it may not be without merit since EOS2, for instance, makes rather strong predictions which may be tested; EOS1 is, on the other hand, seen to be in qualitative agreement with the lattice results. Indeed, the work does provide a platform to study the import of the EOS on heavy ion collisions in a quantitative manner. Experiments at LHC may be able to probe these EOS, since a temperature in the range \(T\sim 2-3T_{c}\) is expected to be achieved there. More importantly, the method developed here can be easily employed to study more precise EOS (as from lattice computations), or more general EOS (as with the inclusion of baryonic chemical potentials). To be sure, an incisive analysis is possible only after studying other quantities such as the viscosity, its anomalous component [38; 39], the viscosity to entropy ratio, and the specific heat. Finally, the insertion of the appropriate equilibrium distribution functions in the semiclassical transport equations allows for studying (i) the production and the equilibration rates for the QGP in heavy ion collisions [25; 26; 27], and (ii) the color response functions [42], of which the Debye mass is but one limiting parameter. These investigations will be taken up separately.
## V Acknowledgments
We thank DDB Rao for assistance with numerical work. Two of us, VC and RK, acknowledge CSIR (India) for financial support through the award of a fellowship.
Figure 14: (color online) Behaviour of \\(2/M_{D}\\) with \\(T/T_{c}\\) for \\(g^{\\prime}=.5\\) for EOS1. Note that \\(2/M_{D}\\) is measured in _fm_
Figure 15: (color online) Behaviour of \\(2/M_{D}\\) with \\(T/T_{c}\\) for \\(g^{\\prime}=.8\\) for EOS1. Note that \\(2/M_{D}\\) is measured in _fm_
## Appendix
We use the following standard integrals while extracting effective chemical potential:
\\[\\int_{0}^{\\infty}p^{2}\\frac{\\exp(-p)}{(1-\\exp(-p))^{3}}dp = 2(1+\\sum_{n=1}^{\\infty}(\\frac{1}{(n+1)^{3}}\\prod_{k=1}^{n}\\frac{3+ k-1}{k!}))\\] \\[\\int_{0}^{\\infty}p^{2}\\frac{\\exp(2p)}{(1+\\exp(p))^{3}}dp = \\frac{\\pi^{2}}{12}+\\log(2) \\tag{23}\\] \\[\\int_{0}^{\\infty}p^{2}\\frac{\\exp(p)}{(1-z\\exp(p))^{2}}dp = \\frac{PolyLog[2,z]}{z}\\] \\[\\int_{0}^{\\infty}p^{2}\\frac{\\exp(p)}{(1+z\\exp(p))^{2}}dp = -\\frac{PolyLog[2,-z]}{z} \\tag{24}\\]
The coefficients in the perturbative expansion of \\(\\log(Z_{g})\\) and \\(\\log(Z_{f})\\),are as follows;
\\[A_{g}^{(1)} = \\frac{V}{2\\pi^{2}g_{b}}2\\zeta(3)\\] \\[A_{f}^{(1)} = \\frac{V}{2\\pi^{2}g_{f}}\\frac{3}{2}\\zeta(3)\\] \\[A_{g}^{(2)} = \\frac{V}{2\\pi^{2}g_{b}}\\frac{\\pi^{3}}{3}\\] \\[A_{f}^{(2)} = \\frac{V}{2\\pi^{2}g_{f}}\\frac{\\pi^{3}}{6}\\] \\[A_{f}^{(3)} = \\frac{V}{2\\pi^{2}g_{f}}2\\log(2)\\] \\[A_{g}^{(3)} = \\frac{V}{2\\pi^{2}g_{b}}[4(1+\\sum_{n=1}^{\\infty}(\\frac{1}{(n+1)^{3 }}\\prod_{k=1}^{n}\\frac{3+k-1}{k!}))-\\frac{\\pi^{2}}{3}] \\tag{25}\\]
where \\(g_{b}=8\\times 2\\) and \\(g_{f}=6N_{f}\\) are the degeneracy factors for gluons and quarks.
## References
* (1) STAR Collaboration, Nucl. Phys. A 757, 102 (2005).
* (2) PHENIX Collaboration, Nucl. Phys. A 757, 184 (2005).
* (3) PHOBOS Collaboration, Nucl. Phys. A 757, 28 (2005).
* (4) BRAHMS Collaboration, Nucl. Phys. A 757, 01 (2005).
* (5) K. Adcox et al. (PHENIX), Phys. Rev. Lett. **88**, 022301 (2002); C. Adler et al. (STAR), Phys. Rev. Lett. **89**, 202301 (2002); S. S. Adler et al. (PHENIX), Phys. Rev. Lett. **91**, 072301 (2003); J. Adams et al. (STAR), Phys. Rev. Lett. **91**, 172302 (2003).
* (6) M. J Tannenbaum, Rep. Prog. Physics. 69 (2006) 2005 (arXiv: nucl-ex/0603003).
* (7) D. Teaney, Phys. Rev. **C 68**, 034913 (2003).
* (8) R. Baier and P. Romatschke, arXiv: nucl-th/0610108
* (9) Hans-Joachim Drescher, Adrian Dumitru, Clement Gombeaud, Jean-Yves Ollitrault, arXiv:0704.3553 (nucl-th)
* (10) P.Kovtun, D.T.Son, A.O.Starinets, Phys. Rev. Lett.**94**, 111601 (2005).
* (11) G. Boyd, J. Engels, F. Karsch, E. Laermann, C. Legeland, M. Luetgemeier, B. Petersson, Nucl. Phys. B469 (1996) 419-444, Phys. Rev. Lett. 75 (1995) 4169 R. Gavai, Pramana, **67** (2006) 885; Frithjof Karsch, hep-ph/0103314; Y. Aoki, Z. Fodor, S.D. Katz, K.K. Szabo, Wuppertal U., Eotvos U., hep-lat/0510084.
* (12) E. V. Shuryak, Nucl.Phys. A, 774 (2006) 387, hep-ph/0608177.
* (13) P. Arnold and Chengxing Zhai, Phys. Rev. **D50** 7603 (1994); Phys. Rev. **D51** 1906 (1995).
* (14) Chengxing Zhai and Boris Kastening, Phys. Rev. **D52** 7232 (1995).
* (15) K. Kajantie, M. Laine, K. Rummukainen, Y. Schroder, Phys.Rev. D **67**, 105008 (2003).
* (16) K. Kajantie, M. Laine, K. Rummukainen, Y. Schroder, Phys.Rev.Lett.**86**, 10 (2001).
* (17) A. Vuorinen, Phys. Rev. **D68**, 054017 (2003). We thank the referee for bringing this work to our notice.
* (18) More recently, A. Ipp, K. Kajantie, A. Rebhan, A. Vuorinen, Phys.Rev. D **74**, 045016 (2006), have determined the EOS upto order \\(g^{4}\\) at all chemical potentials. They also claim the validity of their EOS for all temperatures.
* (19) J P Blaizot, E Iancu, A Rebhan, Phys. Rev. **D63** 065003 (1996).
* (20) J P Blaizot, E Iancu, A Rebhan, Phys. Lett. **B523** 143 (2001).
* (21) Anton Rebhan, arXiv:hep-ph/0504023 and references therein.
* (22) Anton Rebhan, JHEP 0306 032, (2003).
* (23) Ulrike Kraemmer, Anton Rebhan, Rept.Prog.Phys. **67**, 351, (2004).
* (24) J.-P. Blaizot, E. Iancu, A. Rebhan, Phys.Rev.Lett.**83**, 2906 (1999).
* (25) Gouranga C Nayak, V. Ravishankar, Phys. Rev.**C58**, 356 (1998); Phys. Rev. **D55** 6877 (1997).
* (26) R. S. Bhalerao, V. Ravishankar, Phys. Lett.**B409** 38 (1997).
* (27) Ambar Jain, V.Ravishankar, Phys. Rev. Lett. **91**, 112301 (2003).
* (28) P.F. Kelly, Q. Liu, C. Lucchesi and C. Manuel, Phys. Rev. Lett. **72**, 3461 (1994).
* (29) P.F. Kelly, Q.Liu, C. Lucchesi and C. Manuel, Phys. Rev. D 50, 4209 (1994).
* (30) Jean-Paul Blaizot and Edmond Iancu, Phys. Rev. Lett. **70**, 3376 (1993).
* (31) Peshier et. al, Phys. Letts. **B337**, 235 (1994), Phys. Rev. **D54** 2399 (1996), Phys. Rev. **C61** 045203 (2000), Phys. Rev. **D66** 094003 (2002)
* (32) Allton et. al, Phys. Rev. **D68**, 014507 (2003), Phys. Rev. **D71**, 054508 (2005). For a very recent work, see M. Bluhm, B. Kampher, R. Schulze, D. Seipt and U. Heinz, arXiv:hep-ph/0705.039.
* (33) Suzhou Haung, Marcello Lissia, hep-ph/9411293
* (34) The expression for the QCD running coupling constant displayed in Eq.(18) is allowed in the region where the weak perturbative techniques are valid. Since, the weak coupling techniques give convergent results for more than \\(2T_{c}\\), where \\(T_{c}\\) is QCD transition temperature and a free parameter here. In this region, we can consider the asymptotic limit of the running coupling constant. Employing this, we shall see that the hot QCD EOS relative to ideal EOS and Effective Chemical Potentials scales with \\(T/T_{c}\\).
* (35) T. Matsui, H. Satz, Nucl. Phys. **B 178**, 416, (1996).
* (36) H. Satz, Rept. Prog. Phys. **63**, 1511, (2000).
* (37) R. Vogt, Nucl. Phys. **A 752**, 447 (2005).
* (38) P. Arnold, G. D. Moore and L. G. Yaffe, JHEP 011, 01 (2000).
* (39) M. Asakawa, S. A. Bass and B. Muller, hep-ph/0603092; hep-ph/0608270 A. Majumdar, B. Muller, hep-ph/0703082.
* (40) Gert Aarts, Chris Allton, Mehmet Bugrahan Oktay, Mike Peardon, Jon-Ivar Skullerud; arXiv:0705.2198[hep-lat].
* (41) Olaf Kaczmarek and Felix Zantow, Pos(LAT 2005) 177 (_arXiv:hep-lat/0510093_).
We study two recently proposed equations of state (EOS) obtained from high-temperature QCD and show how they can be adapted for making predictions for relativistic heavy-ion collisions. The method involves extracting equilibrium distribution functions for quarks and gluons from the EOS, which in turn allows a determination of the transport and other bulk properties of the quark-gluon plasma. Simultaneously, the method also yields a quasiparticle description of interacting quarks and gluons. The first EOS is perturbative in the QCD coupling constant and has contributions of \\(O(g^{5})\\). The second EOS is an improvement over the first, with contributions up to \\(O(g^{6}\\ln(1/g))\\); it incorporates the nonperturbative hard thermal contributions. The interaction effects are shown to be captured entirely by the effective chemical potentials for the gluons and the quarks in both cases. The chemical potential is seen to be highly sensitive to the EOS. As an application, we determine the screening lengths, which are indeed the most important diagnostics for QGP. The screening lengths are seen to behave drastically differently depending on the EOS considered, and therefore yield a way to distinguish the two equations of state in heavy-ion collisions.
PACS: 12.75.-q, 24.85.+p, 12.38.Mh
# Determining the Magnetic Field Orientation of Coronal Mass Ejections from Faraday Rotation
Y. Liu\\({}^{1,2}\\), W. B. Manchester IV\\({}^{3}\\), J. C. Kasper\\({}^{1}\\), J. D. Richardson\\({}^{1,2}\\), and J. W. Belcher\\({}^{1}\\)
Footnote 1: affiliation: Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; [email protected].
Footnote 2: affiliation: State Key Laboratory of Space Weather, Chinese Academy of Sciences, Beijing 100080, China.
Footnote 3: affiliation: Center for Space Environment Modeling, University of Michigan, Ann Arbor, MI 48109, USA.
## 1 Introduction
Coronal mass ejections (CMEs) are recognized as primary drivers of interplanetary disturbances. The ejected materials are often associated with large southward magnetic fields which can reconnect with geomagnetic fields and produce storms in the terrestrial environment (Dungey, 1961; Gosling et al., 1991). Determination of the CME magnetic field orientation is thus of crucial importance for space weather forecasting. However, nearly all atoms are ionized at the coronal temperature \\(\\sim 2\\times 10^{6}\\) K, making it difficult to detect the coronal magnetic field through Zeeman splitting of spectral lines as is routinely done for the photospheric field. A typical way to estimate the coronal magnetic field above 2 \\(R_{\\odot}\\) (\\(R_{\\odot}\\) being the solar radius) is theoretical extrapolation using the photospheric fields as boundary conditions, which can only be checked by comparison to the field strength measured from radio bursts and the orientation determined from soft X-ray observations. The field orientation is also hard to infer from white-light coronagraph images. Spacecraft near the first Lagrangian point (L1) measure the local fields but can only give a warning time for arrival at Earth of \\(\\sim 30\\) minutes (Vogt et al., 2006; Weimer et al., 2002).
A possible method to measure the coronal magnetic field is Faraday rotation (FR), the rotation of the polarization plane of a radio wave as it traverses a magnetized plasma. The first FR experiment was conducted in 1968 by Pioneer 6 during its superior solar conjunction (Levy et al., 1969). The observed FR curve features a \"W\"-shaped profile over a time period of 2-3 hours with rotation angles up to 40\\({}^{\\circ}\\) from the quiescent baseline. This FR event was interpreted as a coronal streamer stalk of angular size 1-2\\({}^{\\circ}\\)(Woo, 1997), but Patzold & Bird (1998) argue that the FR curve is produced by the passage of a series of CMEs. Joint coronagraph observations are needed to determine whether an FR transient is caused by CMEs. Subsequent FR observations by the Pioneer and Helios spacecraft reveal important information on the quiet coronal field (Stelzried et al., 1970; Patzold et al., 1987) and magnetic fluctuations (Hollweg et al., 1982; Efimov et al., 1996; Andreev et al., 1997; Chashei et al., 1999, 2000). FR fluctuations are currently the only source of information for the coronal field fluctuations. Independent knowledge of the electron density, however, is needed in order to study the background field and fluctuations.
Joint coronagraph and FR measurements of CMEs were also conducted when the Helios spacecraft, with a downlink signal at a wavelength \\(\\lambda=13\\) cm, was occulted by CME plasma. Bird et al. (1985) establish a one-to-one correspondence between the SOLWIND white-light transients and FR disturbances for 5 CMEs. Figure 1 displays the time histories of FR and spectral broadening for two CMEs. Note that the spectral broadening is proportional to the plasma density fluctuations; the increased spectral broadening is consistent with the enhanced density fluctuations within CMEs and their sheath regions (Liu et al., 2006b). The FR through the 23 October 1979 CME shows a curve (note a data gap) which seems not to change sign during the CME passage; a single sign in FR indicates a monopolar magnetic field. The 24 October 1979 CME displays an FR curve which is roughly \"N\"-like across the zero rotation angle, indicative of a dipolar field. Other CMEs in the work of Bird et al. (1985) give similar FR curves, either an \"N\"-type or a waved shape around the zero level. Based on a simple slab model for CMEs, the mean transient field magnitude is estimated to be 10 - 100 mG scaled to \\(2.5R_{\\odot}\\), which seems larger than the mean background field. The CME field geometry, as implied by these FR curves, will be discussed below. These features demonstrate why radio occultation measurements are effective in detecting CMEs.
FR experiments using natural radio sources, such as pulsars and quasars, have also been performed. FR observations of this class were first conducted by Bird et al. (1980) during the solar occultation of a pulsar. The advantage of using natural radio sources is that many of these sources are present in the vicinity of the Sun and provide multiple lines of sight which can be simultaneously probed by a radio array. We can thus make a two-dimensional (2-D) mapping of the solar corona and the inner heliosphere with an extended distribution of background radio sources.
In this paper, we show a method to determine the magnetic field orientation of CMEs using FR. This method enables us to acquire the field orientation 2 - 3 days before CMEs reach Earth, which will greatly improve our ability to forecast space weather. The data needed to implement this technique will be available from the Mileura Widefield Array (MWA) (Salah et al., 2005). The magnetic structure obtained from MWA measurements with this method will fill the missing link in coronal observations of the CME magnetic field and also place strong constraints on CME initiation theories.
## 2 Modeling the Helios Observations
The FR technique uses the fact that a linearly polarized radio wave propagating through a magnetized plasma will undergo a rotation in its plane of polarization. The rotation angle is given by \\(\\Omega=\\lambda^{2}RM\\), where \\(\\lambda\\) is the wavelength of the radio wave. The rotation measure, \\(RM\\), is expressed as
\\[RM=\\frac{e^{3}}{8\\pi^{2}\\epsilon_{0}m_{e}^{2}c^{3}}\\int n_{e}{\\bf B}\\cdot d{ \\bf s}, \\tag{1}\\]
where \\(e\\) is the electron charge, \\(\\epsilon_{0}\\) is the permittivity of free space, \\(m_{e}\\) is the electron mass, \\(c\\) is the speed of light, \\(n_{e}\\) is the electron density, \\({\\bf B}\\) is the magnetic field, and \\(d{\\bf s}\\) is the vector incremental path defined to be positive toward the observer. FR responds to the magnetic field, making it a useful tool to probe the coronal transient and quiet magnetic fields. Note that the polarization vector may undergo several rotations across the coronal plasma. Measurements at several frequencies are needed to break the degeneracy; observations as a function of time can also help to trace the rotation through its cycles.
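A useful cross-check on equation (1) is to evaluate its prefactor numerically. The short Python sketch below (our addition; variable names are ours) uses scipy.constants and recovers the familiar radio-astronomy value of about 0.81 rad m\\({}^{-2}\\) per unit of \\(n_{e}\\) [cm\\({}^{-3}\\)], \\(B_{\\parallel}\\) [\\(\\mu\\)G], and path length [pc]:

```python
import numpy as np
from scipy import constants as const

# Prefactor of equation (1) in SI units (rad m^-2 per m^-3 * T * m)
prefac_si = const.e**3 / (8 * np.pi**2 * const.epsilon_0
                          * const.m_e**2 * const.c**3)

# Convert to n_e in cm^-3, B in microgauss, path length in parsec
prefac_astro = prefac_si * 1e6 * 1e-10 * const.parsec

print(f"{prefac_si:.3e} (SI)")         # ~2.63e-13
print(f"{prefac_astro:.3f} rad m^-2")  # ~0.812, the textbook value
```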
In situ observations of CMEs from interplanetary space indicate that CMEs are often threaded by magnetic fields in the form of a helical flux rope (Burlaga et al., 1981; Burlaga, 1988; Lepping et al., 1990). This helical structure either exists before the eruption (Chen, 1996; Kumar & Rust, 1996; Gibson & Low, 1998; Lin & Forbes, 2000), as needed for supporting prominence material, or is produced by magnetic reconnection during the eruption (e.g., Mikic & Linker 1994). The flux rope configuration reproduces the white-light appearance of CMEs (Chen 1996; Gibson & Low 1998). This well-organized structure will display a specific FR signature easily discernible from the ambient medium, but direct proof of the flux-rope geometry of CMEs at the Sun has been lacking.
### Force-Free Flux Ropes
Here we model the Helios observations using a cylindrically symmetric force-free flux rope (Lundquist 1950) with
\\[{\\bf B}=B_{0}J_{0}(\\alpha r)\\hat{z}+B_{0}HJ_{1}(\\alpha r)\\hat{\\phi} \\tag{2}\\]
in axis-centered cylindrical coordinates \\((\\hat{r},\\hat{\\phi},\\hat{z})\\) in terms of the zeroth and first order Bessel functions \\(J_{0}\\) and \\(J_{1}\\) respectively, where \\(B_{0}\\) is the field magnitude at the rope axis, \\(r\\) is the radial distance from the axis, and \\(H\\) specifies the left-handed (-1) or right-handed (+1) helicity. We take \\(\\alpha r_{0}=2.405\\), the first root of the \\(J_{0}\\) function, so \\(\\alpha\\) determines the scale of the flux-rope radius \\(r_{0}\\). The electron density is obtained by assuming a plasma beta \\(\\beta=0.1\\) and temperature \\(T=10^{5}\\) K, as implied by the extrapolation of in situ measurements (e.g., Liu et al. 2005, 2006a). Combining equations (1) and (2) with a radio wave path gives the FR.
For simplicity, we consider a frame with the \\(x\\)-\\(y\\) plane aligned with the flux-rope cross section at its center and the \\(z\\) axis along the axial field. Figure 2 shows the diagram of the flux rope with the projected line of sight. The flux rope, initially at \\(4R_{\\odot}\\) away from the Sun with a constant radius \\(r_{0}=3.6R_{\\odot}\\) and length \\(20R_{\\odot}\\), moves with a speed \\(v=500\\) km s\\({}^{-1}\\) in the \\(x\\) direction across a radio ray path. The radio signal path makes an angle \\(\\theta\\) with respect to the plane and \\(\\phi\\) with the motion direction when projected onto the plane. The magnetic field strength at the rope axis is adopted to be \\(B_{0}=25\\) mG, well within the range estimated from the Helios observations (Bird et al. 1985).
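For orientation, the Lundquist profile of equation (2) with these parameters can be tabulated directly. The sketch below (our addition; Bessel functions from scipy, and the right-handed choice \\(H=+1\\) is arbitrary here) shows how the axial field dominates near the axis while the azimuthal field takes over toward the rope edge:

```python
import numpy as np
from scipy.special import j0, j1

R_sun = 6.96e5                 # solar radius in km
B0, r0 = 25.0, 3.6 * R_sun     # axis field (mG) and rope radius (km)
alpha = 2.405 / r0             # first zero of J0 sits at the rope edge

for r in np.linspace(0.0, r0, 5):
    B_z   = B0 * j0(alpha * r)   # axial component (mG)
    B_phi = B0 * j1(alpha * r)   # azimuthal component (mG), H = +1
    print(f"r = {r/R_sun:4.1f} R_sun: B_z = {B_z:6.2f} mG, "
          f"B_phi = {B_phi:6.2f} mG")
```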
The resulting FR curves are displayed in Figure 3. A radio source occulted by the moving flux rope gives two basic types of FR curves, Gaussian-shaped and \"N\"-shaped (or inverted \"N\") depending on the orientation of the radio wave path with respect to the flux rope. When the radio signal path is roughly along the flux rope (say, for \\(\\phi=45^{\\circ}\\) and \\(\\theta=60^{\\circ}\\) as shown in the right panel), the axial field overwhelms the azimuthal field along the signal path, so the FR curve would be Gaussian-like, indicative of a monopolar field. For a signal path generally perpendicular to the flux rope, the azimuthal field dominates and changes sign along the path, so the rotation curve would be \"N\" or inverted \"N\" shaped with a sign change (left panel), suggestive of a dipolar field. These basic curves are consistent with the Helios measurements. Two adjacent flux ropes with evolving fields could yield a \"W\"-shaped curve as observed by Pioneer 6 (Levy et al., 1969; Patzold & Bird, 1998). The time scale and magnitude of the observed FR curves are also reproduced. When \\(\\theta=0^{\\circ}\\), the line of sight is within the plane. Varying \\(\\phi\\) gives a variety of time scales of FR, ranging from \\(\\sim 3\\) to more than 10 hours, but the peak value of FR is fixed at \\(\\sim 57^{\\circ}\\). These numbers are consistent with the Helios data shown in the right panel of Figure 1. When \\(\\theta\\) is close to \\(90^{\\circ}\\), the observer would be looking along the flux rope. The axial field produces a strong FR, but decreasing \\(\\theta\\) will diminish the rotation angle and make the curve more and more \"N\"-like. The time scale, however, remains at 4 hours. For \\(\\phi=45^{\\circ}\\) and \\(\\theta=40^{\\circ}\\), the rotation angle is up to \\(140^{\\circ}\\), in agreement with the Helios data shown in the left panel of Figure 1.
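The quoted time scales follow from simple geometry. Treating the occultation duration as the central chord length \\(2r_{0}/\\sin\\phi\\) divided by the speed \\(v\\) (a rough estimate of ours that ignores details of the path through the rope), one recovers the range from a few hours to more than ten hours:

```python
import numpy as np

R_sun = 6.96e5                # km
r0, v = 3.6 * R_sun, 500.0    # rope radius (km), speed (km/s)

for phi_deg in (10, 20, 30, 90):
    t = 2 * r0 / (v * np.sin(np.radians(phi_deg)))   # seconds
    print(f"phi = {phi_deg:3d} deg: t = {t/3600:5.1f} h")
# phi = 90 deg gives ~2.8 h; phi = 10 deg gives ~16 h
```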
### Non-Force-Free Flux Ropes
A non-force-free flux rope could give more flexibility in the field configuration. Consider a magnetic field that is uniform in the \\(z\\) direction in terms of rectangular coordinates. Since \\(\\nabla\\cdot{\\bf B}=0\\), the magnetic field can be expressed as
\\[{\\bf B}=\\left(\\frac{\\partial A}{\\partial y},-\\frac{\\partial A}{\\partial x},B_ {z}\\right), \\tag{3}\\]
where the vector potential is defined as \\({\\bf A}=A(x,y)\\hat{\\bf z}\\). The MHD equilibrium, \\({\\bf j}\\times{\\bf B}-\\nabla p=0\\), gives (e.g., Sturrock, 1994)
\\[\\frac{\\partial^{2}A}{\\partial x^{2}}+\\frac{\\partial^{2}A}{\\partial y^{2}}=- \\mu_{0}\\frac{d}{dA}\\left(p+\\frac{B_{z}^{2}}{2\\mu_{0}}\\right)=-\\mu_{0}j_{z}, \\tag{4}\\]
where \\(\\mu_{0}\\) is the permeability of free space, \\(p\\) is the plasma thermal pressure, and \\(j_{z}\\) is the \\(z\\) component of the current density. Equation (4) is known as the Grad-Shafranov equation. We see from this equation that \\(p\\), \\(B_{z}\\) and hence \\(j_{z}\\) are a function of \\(A\\) alone. A special form of this equation, \\(\\nabla^{2}\\tilde{A}=\\exp(-2\\tilde{A})\\) (in properly scaled units), has the solution (e.g., Schindler et al., 1973)
\\[\\tilde{A}=\\ln\\left[\\alpha\\cos\\tilde{x}+\\sqrt{1+\\alpha^{2}}\\cosh\\tilde{y} \\right]. \\tag{5}\\]
This nonlinear solution has been called the periodic pinch since it has the form of a 2-D neutral sheet perturbed by a periodic chain of magnetic islands centered in the current sheet. Here \\(\\tilde{A}\\), \\(\\tilde{x}\\) and \\(\\tilde{y}\\) are dimensionless quantities, and \\(\\alpha\\) is a free parameter that can be used to control the aspect ratio of the magnetic islands.
From equations (3)-(5) we obtain
\\[j_{z}=-\\frac{B_{0}}{\\mu_{0}L_{0}}\\exp\\left(\\frac{-2A}{B_{0}L_{0}}\\right), \\tag{6}\\]
\\[B_{x}=B_{0}\\frac{\\sqrt{1+\\alpha^{2}}\\sinh(y/L_{0})}{\\alpha\\cos(x/L_{0})+\\sqrt{1+ \\alpha^{2}}\\cosh(y/L_{0})}, \\tag{7}\\]
\\[B_{y}=B_{0}\\frac{\\alpha\\sin(x/L_{0})}{\\alpha\\cos(x/L_{0})+\\sqrt{1+\\alpha^{2}} \\cosh(y/L_{0})}, \\tag{8}\\]
where \\(B_{0}\\) and \\(L_{0}\\) are scales of the field magnitude and length, respectively. The axial field \\(B_{z}\\) and the thermal pressure can be obtained from \\(\\frac{d}{dA}(p+\\frac{B_{z}^{2}}{2\\mu_{0}})=j_{z}\\), which gives
\\[p+\\frac{B_{z}^{2}}{2\\mu_{0}}=\\frac{B_{0}^{2}}{2\\mu_{0}}\\exp\\left(\\frac{-2A}{B_ {0}L_{0}}\\right)+\\frac{B_{1}^{2}}{2\\mu_{0}},\\]
where \\(B_{1}\\) is an arbitrary constant. Assuming a factor \\(\\varepsilon\\) in the partition of the total pressure, we have
\\[p=\\varepsilon\\frac{B_{0}^{2}}{2\\mu_{0}}\\left[\\left(\\alpha\\cos\\frac{x}{L_{0}}+ \\sqrt{1+\\alpha^{2}}\\cosh\\frac{y}{L_{0}}\\right)^{-2}+\\frac{B_{1}^{2}}{B_{0}^{2 }}\\right], \\tag{9}\\]
\\[B_{z}=\\pm\\sqrt{1-\\varepsilon}B_{0}\\left[\\left(\\alpha\\cos\\frac{x}{L_{0}}+\\sqrt {1+\\alpha^{2}}\\cosh\\frac{y}{L_{0}}\\right)^{-2}+\\frac{B_{1}^{2}}{B_{0}^{2}} \\right]^{1/2}. \\tag{10}\\]
Adjusting the parameters \\(\\alpha\\) and \\(\\varepsilon\\) gives a variety of flux rope configurations, circular and non-circular, force-free and non-force-free.
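That the solution (5) really satisfies \\(\\nabla^{2}\\tilde{A}=\\exp(-2\\tilde{A})\\) can be confirmed with a quick finite-difference check. The sketch below (our addition) works in the dimensionless variables of equation (5), with \\(\\alpha=2\\) chosen arbitrarily; the residual shrinks with the grid spacing as expected for a second-order stencil:

```python
import numpy as np

alpha = 2.0                                    # aspect-ratio parameter
x = np.linspace(0.2, 2 * np.pi - 0.2, 301)     # stay inside one island
y = np.linspace(-1.2, 1.2, 301)
X, Y = np.meshgrid(x, y, indexing="ij")
A = np.log(alpha * np.cos(X) + np.sqrt(1 + alpha**2) * np.cosh(Y))

dx, dy = x[1] - x[0], y[1] - y[0]
lap = ((np.roll(A, -1, 0) - 2 * A + np.roll(A, 1, 0)) / dx**2
       + (np.roll(A, -1, 1) - 2 * A + np.roll(A, 1, 1)) / dy**2)

# Compare the Laplacian with exp(-2A) on interior points only
resid = lap[1:-1, 1:-1] - np.exp(-2 * A[1:-1, 1:-1])
print("max |residual| =", np.abs(resid).max())  # small, O(dx^2)
```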
A flux rope of this kind is displayed in Figure 4. As can be seen, this flux rope lies within a current sheet. To single out the flux rope, we require \\(0\\leq x\\leq 2\\pi L_{0}\\) and \\(-\\pi L_{0}/2\\leq y\\leq\\pi L_{0}/2\\) initially, where \\(L_{0}=1.5R_{\\odot}\\). The flux rope is still \\(20R_{\\odot}\\) long, moving with \\(v=500\\) km s\\({}^{-1}\\) across the line of sight. Other parameters are assumed to be \\(B_{0}=10\\) mG, \\(B_{1}=0\\), \\(\\alpha=2\\), \\(\\varepsilon=0.1\\), and the temperature \\(T=10^{5}\\) K. Figure 5 shows the calculated FR. These curves are generally similar to those for a cylindrically symmetric force-free flux rope. Unlike the force-free flux-rope counterpart, the FR curves show a smooth transition from the zero angle to peak values. In addition, they are narrower in width, which may result from fields and densities which are more concentrated close to the axis. Note that the field magnitude is \\(\\sim 40\\) mG at the axis of the non-force-free flux rope. These profiles can also qualitatively explain the Helios observations.
The above results suggest that CMEs at the Sun manifest as flux ropes, confirming what previously could only be inferred from in situ data (Burlaga, 1988; Lepping et al., 1990). They also reinforce the connection of CMEs observed by coronagraphs with magnetic clouds identified from in situ measurements.
## 3 2-D Mapping of CMEs
As demonstrated above, even a single radio signal path can give hints on the magnetic structure of CMEs. Ambiguities in the flux rope orientation cannot be removed based on only one radio ray path. The power of the FR technique lies in having multiple radio sources, especially when a 2-D mapping of CMEs onto the sky is possible.
### A Single Flux Rope
For a flux-rope configuration, the magnetic field is azimuthal close to the rope edge and purely axial at the axis. The rotation measure would be positive through the part of the rope with fields coming toward an observer and negative through the part with fields leaving the observer, so the azimuthal field orientation can be easily recognized with data from multiple lines of sight (radio ray paths). A key role is played by the axial component, which tells us the helicity of the flux rope. Consider a force-free flux rope for simplicity. For points on a line parallel to the rope axis within the flux rope, the field direction as well as the magnitude is the same. The fields on this line would make different angles with a variety of radio signal paths since the signal path is always toward the observer. As long as the axial field component is strong enough, these different angles will lead to a gradient in the rotation measure along the rope.
Assuming an observer sitting at Earth, we calculate the FR pattern projected onto the sky for a force-free flux rope viewed from many radio sources. A flux rope has two possibilities for the axial field direction, with each one accompanied by either a left-handed or right-handed helicity. Plotted in Figure 6 are the four possible configurations as well as their rotation measure patterns. The angle \\(\\theta_{y}\\) defines the azimuthal angle of a line of sight with respect to the Sun-Earth (observer) direction in the solar ecliptic plane, while \\(\\theta_{z}\\) is the elevation angle of the line of sight with respect to the ecliptic plane. The flux rope, with axis in the ecliptic plane and perpendicular to the Sun-Earth direction, is centered at \\(10R_{\\odot}\\) from the Sun and has a radius of \\(r_{0}=8R_{\\odot}\\) and length \\(50R_{\\odot}\\). The magnetic field magnitude is assumed to be 10 mG at the rope axis. The gradient effect in the rotation measure along the flux rope is apparent in Figure 6 and it produces a one-to-one correspondence between the flux-rope configuration and the rotation measure pattern. The four configurations of a flux rope can thus be uniquely determined from the global behavior of the rotation measure, which gives the axial field orientation and the helicity. In order to fully resolve the flux rope, we have assumed \\(\\sim 80\\) radio sources per square degree on the sky, but in practice a resolution of 250 times lower can give enough information for the field orientation and helicity (see Figure 7).
The FR mapping obtained from multiple radio sources can also help to determine the speed and orientation of CMEs as they move away from the Sun. This mapping is similar to coronagraph observations. While the polarized brightness (Thomson-scattered, polarized component of the coronal brightness) is sensitive to the electron density, FR reacts to the magnetic field as well as the electron density and thus may be able to track CMEs to a larger distance than white light imaging. Figure 7 gives snapshots at different times of a tilted flux rope moving outward from the Sun. A Sun-centered coordinate system is defined such that the \\(x\\) axis extends from the Sun to Earth, the \\(z\\) axis is normal to and northward from the solar ecliptic plane, and the \\(y\\) axis lies in the ecliptic plane and completes the right handed set. A force-free flux rope, initially centered at (2, 2, 2)\\(R_{\\odot}\\) in this frame and oriented at 30\\({}^{\\circ}\\) from the ecliptic plane and 70\\({}^{\\circ}\\) from the Sun-Earth line, moves at a speed 500 km s\\({}^{-1}\\) from the Sun along the direction with elevation angle 10\\({}^{\\circ}\\) and azimuthal angle 20\\({}^{\\circ}\\). The flux rope evolution is constructed by assuming a power law dependence with distance \\(R\\) (in units of AU) for the rope size and physical parameters, i.e.,
\\[r_{0}=0.2\\times R^{0.78}\\ {\\rm AU}\\]
for the rope radius,
\\[B_{0}=15\\times R^{-1.5}\\ {\\rm nT}\\]
for the field magnitude at the axis, and
\\[T=3\\times 10^{4}\\times R^{-0.72}\\ {\\rm K}\\]
for the temperature. The rope length is kept at 3 times the rope diameter, and the plasma \\(\\beta\\) is kept at 0.1. Similar power-law dependences have been identified by a statistical study of CME evolution in the solar wind (Liu et al. 2005, 2006a), but note that the transverse size of the flux-rope cross section could be much larger than the radial width (Liu et al. 2006c).
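These scalings are easy to tabulate. The sketch below (our addition) evaluates them at a few heliocentric distances; the innermost point, \\(\\sim 10R_{\\odot}\\approx 0.047\\) AU, gives an axis field of roughly 15 mG, comparable to the milligauss-level fields adopted in the preceding sections:

```python
for R in (0.047, 0.1, 0.5, 1.0):   # heliocentric distance in AU
    r0 = 0.2 * R**0.78             # rope radius (AU)
    B0 = 15.0 * R**-1.5            # axis field (nT); 100 nT = 1 mG
    T = 3.0e4 * R**-0.72           # temperature (K)
    print(f"R = {R:5.3f} AU: r0 = {r0:5.3f} AU, "
          f"B0 = {B0:7.1f} nT, T = {T:8.0f} K")
```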
The 2-D mapping has a pixel size of about 3.2 degrees. Even at such a low resolution, the flux rope can be recognized several hours after appearance at the Sun. The orientation of the flux rope with respect to the ecliptic plane is apparent in the first few snapshots, but note that this elevation angle may be falsified by the projection effect. The gradient effect in the rotation measure along the flux rope is discernible at 10 hours and becomes clearer around 20 hours. A right-handed helicity with axial fields skewed upward can be obtained from this gradient after a comparison with Figure 6 (top left). When the flux rope is closer to Earth, its appearance projected onto the sky becomes more and more deformed. Finally, when Earth is within the flux rope (around 80 hours), an observer would see two spots with opposite polarities produced by the ends of the flux rope.
Note that the above conclusions are not restricted to cylindrically symmetric force-free flux ropes. We have also used the non-force-free solutions of the steady-state Vlasov-Maxwell equations (see § 2.2), which unambiguously give the same picture. The FR technique takes advantage of an axial magnetic field coupled with the azimuthal component, which is the general geometry of a flux rope. This robust feature makes possible a precise determination of the CME field orientation. A curved flux rope with turbulent fields, however, may need caution in determining the axial field direction (see below).
### MHD Simulations with Background Heliosphere
The above FR calculation does not take into account the background heliosphere. In this sense, the 2-D mapping may be considered as a difference imaging between the transient and background heliospheres. Here we use for the FR calculation 3-D ideal MHD simulations of a CME propagating into a background heliosphere (Manchester et al. 2004). The simulations are performed using the so-called Block Adaptive Tree Solar-Wind Roe Upwind Scheme (BATS-R-US). A specific heating function is assumed to produce a global steady-state model of the corona that has high-latitude coronal holes (where fast winds come from) and a helmet streamer with a current sheet at the equator. A twisted flux rope with both ends anchored in the photosphere is then inserted into the helmet streamer. Removal of some plasma in the flux rope destabilizes the flux rope and launches a CME. The numerical simulation with adaptive mesh refinement captures the CME evolution from the solar corona to Earth. A 3-D view of the flux rope resulting from the simulations is displayed in Figure 8. The magnetic field, as represented by colored solid lines extending from the Sun, winds to form a helical structure within the simulated CME. The field has a strong toroidal (axial) component close to the axis but is nearly poloidal (azimuthal) at the surface of the rope.
A fundamental problem in CME studies which remains to be resolved is whether CMEs are magnetically connected to the Sun as they propagate through the interplanetary medium. Most theoretical modeling assumes a twisted flux rope with two ends anchored to the Sun (Chen 1996; Kumar & Rust 1996; Gibson & Low 1998). This scenario is suggested by energetic particles of solar origin observed within a magnetic cloud (Kahler & Reames 1991). An isolated plasmoid is also a possible structure for CMEs (Vandas et al. 1993a,b). The FR mapping is capable of removing this ambiguity in that it can easily capture a flux-rope geometry bent toward the Sun. To show this capability, we calculate the FR mapping of the simulated CME in a background heliosphere. The MHD model gives a time series of data cubes of \\(300R_{\\odot}\\) in length. We subtract the background from the rotation measure of the CME data to avoid possible effects brought about by the finite domain. Figure 9 shows the difference mapping of the rotation measure at a resolution of \\(\\sim 3.2\\) degrees when the CME propagates a day (\\(\\sim 70R_{\\odot}\\)) away from the Sun. The simulation data are rotated such that the observer (projected onto the origin) can see the flux rope curved to the Sun. The coordinates, \\(\\theta_{y}\\) and \\(\\theta_{z}\\), are defined with respect to the observer. A flux rope extending back to the Sun is apparent in the difference image. The outer arc with positive rotation measures is formed by the azimuthal magnetic field pointing to the observer while the inner arc with negative rotation measures originates from the field with the opposite polarity. The rotation measure difference is positive near the Sun, which is due to a pre-existing negative rotation measure that becomes less negative after the CME eruption.
A closer look at the image would also reveal asymmetric legs of the flux rope. This effect, indicative of a right-handed helicity, is created by the different view angles as described above. The nose of the flux rope does not show a clear gradient in the rotation measure because the view angles of this part are similar. In the case of the two legs directed to the observer, two spots with contrary magnetic polarities will be seen, so the curved geometry may also help to clarify the field helicity.
## 4 Summary and Discussion
We have presented a method to determine the magnetic field orientation of CMEs based on FR. Our FR calculations, either with a simple flux rope or global MHD modeling, demonstrate the exciting result that the CME field orientation can be obtained 2-3 days before CMEs arrive at Earth, substantially longer than the warning time achieved by local spacecraft measurements at L1.
The FR curves through the CME plasma observed by Helios can be reproduced by a flux rope moving across a radio signal path. Two basic FR profiles, Gaussian-shaped with a single polarity or \"N\"-like with polarity reversals, indicate the orientation of the flux rope with respect to the signal path. Force-free and non-force-free flux ropes generally give the same picture, except some trivial differences reflecting the field and density distributions within a flux rope. The FR calculation with a radio signal path, combined with the Helios observations, shows that CMEs at the Sun appear as flux ropes.
2-D FR mapping of a flux rope using many radio sources gives the field orientation as well as the helicity. The orientation of azimuthal fields can be readily obtained since they yield rotation measures with opposite polarities. The axial component of the magnetic field creates a gradient in rotation measure along the flux rope, with which the flux rope configurations can be disentangled. Time-dependent FR mapping is also calculated for a tilted flux rope propagating away from the Sun. The orientation of the flux rope as a whole and its projected speed onto the sky can be determined from the snapshots of the flux rope mapped in FR. We further compute the FR mapping for a curved flux rope moving into a background heliosphere obtained from 3-D ideal MHD simulations. It is shown that the FR mapping can resolve a CME curved back to the Sun in addition to the field orientation. Difference imaging is needed to remove the FR contribution from the background medium.
The global FR map is a new technique for measuring the CME magnetic field. This method can determine the magnetic field orientation of CMEs without knowledge of the electron density. The electron density could be inferred from Thomson scattering measurements made by the SECCHI instrument (suite of wide angle coronagraphs) on STEREO, which has stereoscopic fields of view (Howard et al. 2000). With the joint measurements of the electron density, the magnetic field strength can be estimated.
Note that the above results are a first-order attempt to predict what may be seen in FR. An actual CME likely shows a turbulent behavior and may have multiple structures along the line of sight; the rotation measure, an integral quantity along the line of sight, could display similar signatures for different structures. Therefore, interpretation of the FR measurements will be more complex than suggested here. However, having an instantaneous, global map of the rotation measure that evolves in time will be vastly superior to a time profile along a single line of sight, and comparison with coronagraph observations and actual measures of geoeffectiveness (e.g., the \\(D_{st}\\) index) for a series of real events will eventually lead to the predictive capability proposed in this paper.
The present results also pave the way for interpreting future FR observations of CMEs by large radio arrays, particularly those operating at low frequencies (Oberoi & Kasper, 2004; Salah et al., 2005). The MWA - Low Frequency Demonstrator, specially designed for this purpose at 80-300 MHz, will feature wide fields of view, high sensitivity and multi-beaming capabilities (Salah et al., 2005). This array will be installed in Western Australia (\\(26.4^{\\circ}\\)S, \\(117.3^{\\circ}\\)E), a radio quiet region. It will spread out \\(\\sim 1.5\\) km in diameter, achieving \\(\\sim 8000\\) m\\({}^{2}\\) of collecting area at 150 MHz and a field of view from \\(15^{\\circ}\\) at 300 MHz to \\(50^{\\circ}\\) at 80 MHz. The point source sensitivity will be about 20 mJy for an integration time of 1 s. The array is expected to monitor \\(\\sim 300\\) background radio sources within \\(13^{\\circ}\\) elongation (\\(\\sim 50R_{\\odot}\\)) from the Sun, providing a sufficient spatial sampling of the inner heliosphere. In addition, this array will be able to capture a rotation measure of \\(\\sim 10^{-2}\\) rad m\\({}^{-2}\\) and thus is remarkably sensitive to the magnetic field. Science operations of the array will start in 2009. Implementation of our method by such an array would imply a coming era when the impact of the solar storm on Earth can be predicted with small ambiguities. It could also fill the missing link in coronal observations of the CME magnetic field, thus providing strong constraints on CME initiation theories.
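As a rough consistency check on these numbers (our arithmetic, using a small-angle approximation for the sky area), roughly 300 sources within \\(13^{\\circ}\\) elongation corresponds to about 0.6 sources per square degree, comparable to the reduced density of \\(80/250\\approx 0.3\\) per square degree noted for the Figure 6 mapping:

```python
import numpy as np

n_sources, elongation = 300, 13.0      # MWA numbers quoted above
area = np.pi * elongation**2           # cap area in deg^2 (small angle)
print(f"{n_sources/area:.2f} sources per deg^2")   # ~0.57

print(f"{80/250:.2f} per deg^2")       # reduced density quoted in Sec. 3
```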
The research was supported by NASA contract 959203 from JPL to MIT and NASA grant NAG5-11623. This work was also supported by the CAS International Partnership Program for Creative Research Teams.
## References
* () Andreev, V. E., Efimov, A. I., Samoznaev, L. N., Chashei, I. V., & Bird, M. K. 1997, Sol. Phys., 176, 387
* () Bird, M. K., Schrufer, E., Volland, H., & Sieber, W. 1980, Nature, 283, 459
* () Bird, M. K., et al. 1985, Sol. Phys., 98, 341
* () Burlaga, L. F., Sittler, E., Mariani, F., & Schwenn, R. 1981, J. Geophys. Res., 86, 6673
* () Burlaga, L. F. 1988, J. Geophys. Res., 93, 7217
* () Chashei, I. V., Bird, M. K., Efimov, A. I., Andreev, V. E., & Samoznaev, L. N. 1999, Sol. Phys., 189, 399
* () Chashei, I. V., Efimov, A. I., Samoznaev, L. N., Bird, M. K., & Patzold, M. 2000, Adv. Space Res., 25, 1973
* () Chen, J. 1996, J. Geophys. Res., 101, 27499
* () Dungey, J. W. 1961, Phys. Rev. Lett., 6, 47
* () Efimov, A. I., Bird, M. K., Andreev, V. E., & Samoznaev, L. N. 1996, Astron. Lett., 22, 785
* () Gibson, S. E., & Low, B. C. 1998, ApJ, 493, 460
* () Gosling, J. T., McComas, D. J., Philips, J. L., & Bame, S. J. 1991, J. Geophys. Res., 96, 7831
* () Hollweg, J. V., et al. 1982, J. Geophys. Res., 87, 1
* () Howard, R. A., Moses, J. D., & Socker, D. G. 2000, Proc. SPIE, 4139, 259
* () Kahler, S. W., & Reames, D. V. 1991, J. Geophys. Res., 96, 9419
* () Kumar, A., & Rust, D. 1996, J. Geophys. Res., 101, 15667
* () Lepping, R. P., Jones, J. A., & Burlaga, L. F. 1990, J. Geophys. Res., 95, 11957
* () Levy, G. S., et al. 1969, Science, 166, 596
* () Lin, J., & Forbes, T. G. 2000, J. Geophys. Res., 105, 2375
* () Liu, Y., Richardson, J. D., & Belcher, J. W. 2005, Plan. Space Sci., 53, 3
* () Liu, Y., Richardson, J. D., Belcher, J. W., Kasper, J. C., & Elliott, H. A. 2006a, J. Geophys. Res., 111, A01102, doi:10.1029/2005JA011329
* () Liu, Y., Richardson, J. D., Belcher, J. W., Kasper, J. C., & Skoug, R. M. 2006b, J. Geophys. Res., 111, A09108, doi:10.1029/2006JA011723
* () Liu, Y., et al. 2006c, J. Geophys. Res., 111, A12S03, doi:10.1029/2006JA011890
* () Lundquist, S. 1950, Ark. Fys., 2, 361
* () Manchester IV, W. B., et al. 2004, J. Geophys. Res., 109, A02107
* () Mikic, Z., & Linker, J. A. 1994, ApJ, 430, 898
* () Oberoi, D., & Kasper, J. C. 2004, Plan. Space Sci., 52, 1415
* () Patzold, M., et al. 1987, Sol. Phys., 109, 91
* () Patzold, M., & Bird, M. K. 1998, Geophys. Res. Lett., 25, 2105
* () Salah, J. E., et al. 2005, Proc. SPIE, 5901, 124
* () Schindler, K., Pfirsch, D., & Wobig, H. 1973, Plasma Phys., 15, 1165
* () Stelzried, C. T., et al. 1970, Sol. Phys., 14, 440
* () Sturrock, P. A. 1994, Plasma Physics: An Introduction to the Theory of Astrophysical, Geophysical and Laboratory Plasmas (New York: Cambridge Univ. Press), 209
* () Vandas, M., Fischer, S., Pelant, P., & Geranios, A. 1993a, J. Geophys. Res., 98, 11467
* () Vandas, M., Fischer, S., Pelant, P., & Geranios, A. 1993b, J. Geophys. Res., 98, 21061
* () Vogt, M. F., et al. 2006, Space Wea., 4, S09001
* () Weimer, D. R., et al. 2002, J. Geophys. Res., 107, A81210
* () Woo, R. 1997, Geophys. Res. Lett., 24, 97
Figure 1: Time profiles of FR (bottom) and spectral broadening (top) of the Helios 2 signal during the CMEs of 23 October 1979 (left) and 24 October 1979 (right) recorded at the Madrid station DSS 63. The apparent solar offset of Helios 2 is given at the top. The dashed vertical line indicates the arrival time of the CME leading edge with uncertainties given by the width of the box “LE”. Large deviations in FR following the leading edge indicate the arrival of the CME’s bright core. Reproduced from Bird et al. (1985).
Figure 2: Schematic diagram of a force-free flux rope and the line of sight from a radio source to an observer projected onto the plane of the flux-rope cross section. The flux rope moves at a speed \\(v\\) across the line of sight which makes an angle \\(\\phi\\) with the motion direction.
Figure 3: FR at \\(\\lambda=13\\) cm through the force-free flux rope as a function of time. Left is the rotation angle with \\(\\theta\\) fixed to \\(0^{\\circ}\\) and \\(\\phi=[10^{\\circ},20^{\\circ},30^{\\circ},90^{\\circ}]\\), and right is the rotation angle with \\(\\phi\\) fixed to \\(45^{\\circ}\\) and \\(\\theta=[0^{\\circ},20^{\\circ},40^{\\circ},60^{\\circ}]\\).
Figure 4: Same format as Figure 2, but for a non-force-free flux rope embedded in a current sheet.
Figure 5: Same format as Figure 3, but for crossings of the non-force-free flux rope.
Figure 6: Mapping of the rotation measure corresponding to the four configurations of a flux rope onto the sky. The color shading indicates the value of the rotation measure. The arrows show the directions of the azimuthal and axial magnetic fields, from which a left-handed (LH) or right-handed (RH) helicity is apparent. Each configuration of the flux rope has a distinct rotation measure pattern.
Figure 7: FR mapping of the whole sky at a resolution of \\(\\sim 3.2\\) degrees as a tilted flux rope moves away from the Sun. Note that the motion direction of the flux-rope center is not directly toward Earth. Values of the rotation measure for each panel are indicated by the color bar within the panel. Also shown is the time at the top for each snapshot.
Figure 8: A 3-D rendering of the CME magnetic field lines at 4.5 hours after initiation. The color shading indicates the field magnitude and the white sphere represents the Sun.
Figure 9: Mapping of the rotation measure difference between the MHD simulation at 24 hours and the steady state heliosphere. The two color bars indicate the logarithmic scale of the absolute value of the negative (-) and positive (+) rotation measure, respectively.
We describe a method to measure the magnetic field orientation of coronal mass ejections (CMEs) using Faraday rotation (FR). Two basic FR profiles, Gaussian-shaped with a single polarity or "N"-like with polarity reversals, are produced by a radio source occulted by a moving flux rope, depending on its orientation. These curves are consistent with the Helios observations, providing evidence for the flux-rope geometry of CMEs. Many background radio sources can map CMEs in FR onto the sky. We demonstrate with a simple flux rope that the magnetic field orientation and helicity of the flux rope can be determined 2-3 days before it reaches Earth, which is of crucial importance for space weather forecasting. An FR calculation based on global magnetohydrodynamic (MHD) simulations of CMEs in a background heliosphere shows that FR mapping can also resolve a CME geometry curved back to the Sun. We discuss implementation of the method using data from the Mileura Widefield Array (MWA).
Faraday rotation -- magnetic fields -- Sun: coronal mass ejections
# Recent progress constraining the nuclear equation of state from astrophysics and heavy ion reactions
Christian Fuchs
Institute of Theoretical Physics, University of Tubingen, Germany; [email protected]
## 1 Introduction
The isospin dependence of the nuclear forces, which is at present only little constrained by data, will be explored by the forthcoming radioactive beam facilities at FAIR/GSI, SPIRAL2/GANIL and RIA. Since the knowledge of the nuclear equation-of-state (EoS) at supra-normal densities and extreme isospin is essential for our understanding of the nuclear forces as well as for astrophysical purposes, the determination of the EoS was already one of the primary goals when the first relativistic heavy-ion beams started to operate. A major result of the SIS100 program at the GSI is the observation of a soft EoS for symmetric matter in the explored density range up to 2-3 times saturation density. These accelerator-based experiments are complemented by astrophysical observations. The recently observed most massive neutron star with \\(2.1\\pm 0.2\\left({}^{+0.4}_{-0.5}\\right)\\mathrm{M}_{\\odot}\\) excludes exotic phases of high-density matter and requires a relatively stiff EoS. Contrary to a naive expectation, the astrophysical observations do, however, not stand in contradiction with those from heavy-ion reactions. Moreover, we are in the fortunate situation that ab initio calculations of the nuclear many-body problem predict a density and isospin behavior of the EoS which is in agreement with both observations.
## 2 The EoS from ab initio calculations
In _ab initio_ calculations based on many-body techniques one derives the EoS from first principles, i.e. treating short-range and many-body correlations explicitly. This allows one to make predictions for the high-density behavior, at least in a range where hadrons are still the relevant degrees of freedom. A typical example for a successful many-body approach is Brueckner theory (for a recent review see [1]). In the following we consider non-relativistic Brueckner and variational calculations [2] as well as relativistic Brueckner calculations [3]. It is a well-known fact that non-relativistic approaches require the inclusion of - in net repulsive - three-body forces in order to obtain reasonable saturation properties. In relativistic treatments part of such diagrams, e.g. virtual excitations of nucleon-antinucleon pairs, is already effectively included.
Fig. 1 now compares the predictions for nuclear and neutron matter from microscopic many-body calculations - DBHF [5] and the 'best' variational calculation with 3-BFs and boost corrections [2] - to phenomenological approaches (NL3 and DD-TW from [6]) and an approach based on chiral pion-nucleon dynamics [7] (ChPT+corr.). As expected, the phenomenological functionals agree well at and below saturation density where they are constrained by finite nuclei, but start to deviate substantially at supra-normal densities. In neutron matter the situation is even worse since the isospin dependence of the phenomenological functionals is less constrained. The predictive power of such density functionals at supra-normal densities is restricted. _Ab initio_ calculations predict throughout a soft EoS in the density range relevant for heavy ion reactions at intermediate and low energies, i.e. up to about 3 \\(\\rho_{0}\\). Since the \\(nn\\) scattering length is large, neutron matter at subnuclear densities is less model dependent. The microscopic calculations (BHF/DBHF, variational) agree well and results are consistent with 'exact' Quantum-Monte-Carlo calculations [8].
Fig. 2 compares the symmetry energy predicted from the DBHF and variational
Figure 1: EoS in nuclear matter and neutron matter. BHF/DBHF and variational calculations are compared to phenomenological density functionals (NL3, DD-TW) and ChPT+corr.. The left panel zooms the low density range. The Figure is taken from Ref. [4].
calculations to that of the empirical density functionals already shown in Fig. 1. In addition, the relativistic DD-\\(\\rho\\delta\\) RMF functional [11] is included. Two Skyrme functionals, SkM\\({}^{*}\\) and the more recent Skyrme-Lyon force SkLya, represent non-relativistic models. The left panel zooms the low density region while the right panel shows the high density behavior of \\(E_{\\rm sym}\\).
The low-density part of the symmetry energy is in the meantime relatively well constrained by data. Recent NSCL-MSU heavy-ion data in combination with transport calculations are consistent with a value of \\(E_{\\rm sym}\\approx 31\\) MeV at \\(\\rho_{0}\\) and rule out extremely "stiff" and "soft" density dependences of the symmetry energy [12]. The same value has been extracted [9] from low-energy elastic and (p,n) charge-exchange reactions on isobaric analog states, i.e. p(\\({}^{6}He\\),\\({}^{6}Li^{*}\\))n measured at the HMI. At densities below \\(\\rho_{0}\\), recent data points have been extracted from the isoscaling behavior of fragment formation in low-energy heavy-ion reactions, where the corresponding experiments have been carried out at Texas A&M and NSCL-MSU [10].
However, theoretical extrapolations to supra-normal densities diverge dramatically. This is crucial since the high density behavior of \\(E_{\\rm sym}\\) is essential for the structure and the stability of neutron stars. The microscopic models show a density dependence which can still be considered as _asy-stiff_. DBHF [5] is thereby stiffer than the variational results of Ref. [2]. The density dependence is generally more complex than in RMF theory, in particular at high densities where \\(E_{\\rm sym}\\) shows a non-linear and more pronounced increase. Fig. 2 clearly demonstrates the necessity to constrain the symmetry energy at supra-normal densities with the help of heavy ion reactions.
Figure 2: Symmetry energy as a function of density as predicted by different models. The left panel shows the low density region while the right panel displays the high density range. Data are taken from [9] and [10].
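To make the spread at supra-normal densities concrete, a common single-parameter power-law parametrization \\(E_{\\rm sym}(\\rho)=E_{\\rm sym}(\\rho_{0})\\,(\\rho/\\rho_{0})^{\\gamma}\\) can be evaluated. The sketch below is purely illustrative (the exponents are our choices, not fits to the models in Fig. 2), but it shows how quickly "asy-soft" and "asy-stiff" assumptions diverge by \\(2\\)-\\(3\\,\\rho_{0}\\):

```python
E_sym_0 = 31.0                          # MeV at saturation density (see text)
for label, gamma in (("asy-soft", 0.5), ("linear", 1.0), ("asy-stiff", 1.5)):
    for u in (2.0, 3.0):                # rho / rho_0
        print(f"{label:9s} gamma = {gamma}: "
              f"E_sym({u:.0f} rho_0) = {E_sym_0 * u**gamma:5.1f} MeV")
```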
## 3 Constraints from heavy ion reactions
Experimental data which put constraints on the symmetry energy have already been shown in Fig. 2. The problem of multi-fragmentation data from low and intermediate energy reactions is that they are restricted to sub-normal densities, at most saturation density. However, from low-energy isospin diffusion measurements at least the slope of the symmetry energy around saturation density could be extracted [16]. This already puts an important constraint on the models when extrapolated to higher densities. It is important to notice that the slopes predicted by the ab initio approaches (variational, DBHF) shown in Fig. 2 are consistent with the empirical values. Further attempts to constrain the symmetry energy at supra-normal densities from particle production in relativistic heavy ion reactions [11, 17, 18] have so far not yet led to firm conclusions since the corresponding signals are too small, e.g. the isospin dependence of kaon production [19].
Firm conclusions could only be drawn on the symmetric part of the nuclear bulk properties. To explore supra-normal densities one has to increase the bombarding energy up to relativistic energies. This was one of the major motivations of the SIS100 project at the GSI where - according to transport calculations - densities between \\(1\\div 3\\)\\(\\rho_{0}\\) are reached at bombarding energies between \\(0.1\\div 2\\) AGeV. Sensitive observables are the collective nucleon flow and subthreshold \\(K^{+}\\) meson production. In contrast to the flow signal which can be biased by surface effects and the momentum dependence of the optical potential, \\(K^{+}\\) mesons turned out to be an excellent probe for the high density
Figure 3: Excitation function of the \\(K^{+}\\) multiplicities in \\(Au+Au\\) and \\(C+C\\) reactions. RQMD [13] and IQMD [14] with in-medium kaon potential and using a hard/soft nuclear EoS are compared to data from the KaoS Collaboration [15].
phase of the reactions. At subthreshold energies the necessary energy has to be provided by multiple scattering processes which are highly collective effects. This ensures that the majority of the \\(K^{+}\\) mesons is indeed produced at supra-normal densities. In the following I will concentrate on the kaon observable.
Subthreshold particles are rare probes. However, within the last decade the KaoS Collaboration has performed systematic high-statistics measurements of \\(K^{+}\\) production far below threshold [15, 21]. Based on this data situation, the question of whether valuable information on the nuclear EoS can be extracted was revisited in Ref. [13], and it was shown that subthreshold \\(K^{+}\\) production indeed provides a suitable and reliable tool for this purpose. In subsequent investigations the stability of the EoS dependence has been proven [20, 14].
Excitation functions from KaoS [15, 22] are shown in Fig. 3 and compared to RQMD [13, 20] and IQMD [14] calculations. In both cases a soft (K=200 MeV) and a hard (K=380 MeV) EoS have been used within the transport approaches. The forces were Skyrme-type forces supplemented with an empirical momentum dependence. As expected, the EoS dependence is pronounced in the heavy Au+Au system while the light C+C system serves as a calibration. The effects become even more evident when the ratio \\(R\\) of the kaon multiplicities obtained in Au+Au over C+C reactions (normalized to the corresponding mass numbers) is built [13, 15]. Such a ratio has the advantage that possible uncertainties which might still exist in the theoretical calculations should cancel out to a large extent. This ratio is shown in Fig. 4. The comparison to the experimental data from KaoS [15], where the increase of \\(R\\) is even more pronounced, strongly favors a soft equation of state. This conclusion is in agreement with the conclusion drawn from
Figure 4: Excitation function of the ratio \\(R\\) of \\(K^{+}\\) multiplicities obtained in inclusive Au+Au over C+C reactions. RQMD [13] and IQMD [14] calculations are compared to KaoS data [15]. Figure is taken from [20].
the alternative flow observable [23, 24, 25, 26].
## 4 Constraints from neutron stars
Measurements of "extreme" values, like large masses or radii, huge luminosities, etc., as provided by compact stars, offer good opportunities to gain deeper insight into the physics of matter under extreme conditions. There has been substantial progress in recent times from the astrophysical side.
The most spectacular observation was probably the recent measurement [28] on PSR J0751+1807, a millisecond pulsar in a binary system with a helium white dwarf secondary, which implies a pulsar mass of \\(2.1\\pm 0.2\\left({}^{+0.4}_{-0.5}\\right)\\mathrm{M}_{\\odot}\\) with \\(1\\sigma\\) (\\(2\\sigma\\)) confidence. Therefore, a reliable EoS has to describe neutron star (NS) masses of at least 1.9 \\(\\mathrm{M}_{\\odot}\\) (\\(1\\sigma\\)) in a strong, or 1.6 \\(\\mathrm{M}_{\\odot}\\) (\\(2\\sigma\\)) in a weak interpretation. This condition limits the softness of the EoS in NS matter. One might therefore be worried about an apparent contradiction between the constraints derived from neutron stars and those from heavy ion reactions. While heavy ion reactions favor a soft EoS, PSR J0751+1807 requires a stiff EoS. The corresponding constraints are, however, complementary rather than contradictory. Intermediate energy heavy-ion reactions, e.g. subthreshold kaon production, constrain the EoS at densities up to \\(2\\div 3\\)\\(\\rho_{0}\\) while the maximum NS
Figure 5: Mass versus central density for compact star configurations obtained for various relativistic hadronic EoSs. Crosses denote the maximum mass configurations; filled dots mark the critical mass and central density values where the DU cooling process becomes possible. According to the DU constraint, it should not occur in “typical NSs” for which masses are expected from population synthesis [27] to lie in the lower grey horizontal band. The dark and light grey horizontal bands around 2.1 \\(M_{\\odot}\\) denote the \\(1\\sigma\\) and \\(2\\sigma\\) confidence levels, respectively, for the mass measurement of PSR J0751+1807 [28]. Figure is taken from [29].
mass is more sensitive to the high density behavior of the EoS. Combining the two constraints implies that the EoS should be _soft at moderate densities and stiff at high densities._ Such a behavior is predicted by microscopic many-body calculations (see Fig. 6). DBHF, BHF or variational calculations typically lead to maximum NS masses between \\(2.1\\div 2.3\\;M_{\\odot}\\) and are therefore in accordance with PSR J0751+1807, as can be seen from Fig. 5 and Fig. 6, which combine the results from heavy ion collisions and the maximal mass constraint.
There exist several other constraints on the nuclear EoS which can be derived from observations of compact stars, see e.g. Refs. [29, 30]. Among these, the most promising one is the Direct Urca (DU) process, which is essentially driven by the proton fraction inside the NS [31]. DU processes, e.g. the neutron \\(\\beta\\)-decay \\(n\\to p+e^{-}+\\bar{\\nu}_{e}\\), are very efficient regarding their neutrino production, even in superfluid nuclear matter, and cool NSs too fast to be in accordance with data from thermally observable NSs. Therefore, one can suppose that no DU processes should occur below the upper mass limit for "typical" NSs, i.e. \\(M_{DU}\\geq 1.5\\;M_{\\odot}\\) (\\(1.35\\;M_{\\odot}\\) in a weak interpretation). These limits come from a population synthesis of young, nearby NSs [27] and masses of NS binaries [28]. While the present DBHF EoS leads to too fast neutrino cooling, this behavior can be avoided if a phase transition to quark matter is assumed [32]. Thus a quark phase is not ruled out by the maximum NS mass. However, the corresponding quark EoSs have to be almost as stiff as typical hadronic EoSs [32].
Figure 6: Combination of the constraints on the EoS derived from the maximal neutron star mass criterion and the heavy ion collisions constraining the compression modulus. Values of various microscopic BHF and DBHF many-body calculations are shown.
## 5 Summary
Heavy ion reactions provide in the meantime reliable constraints on the isospin dependence of the nuclear EoS at sub-normal densities up to saturation density, and for the symmetric part up to - as a conservative estimate - two times saturation density. These are complemented by astrophysical constraints at high densities. The present situation is in fair agreement with the predictions from nuclear many-body theory.
## References
* [1] Baldo M and Maieron C 2007 _J. Phys._ G **34** R243
* [2] Akmal A, Pandharipande V R, Ravenhall D G 1998 _Phys. Rev._ C **58** 1804
* [3] Gross-Boelting T, Fuchs C, Faessler A 1999 _Nucl. Phys._ A **648** 105
* [4] Fuchs C, Wolter H H 2006 _Eur. Phys. J._ A **30** 5
* [5] van Dalen E, Fuchs C, Faessler A 2004 _Nucl. Phys._ A **744** 227; 2005 _Phys. Rev._ C 065803; 2005 _Phys. Rev. Lett._**95** 022302; 2007 _Eur. Phys. J._ A **31** 29
* [6] Typel S, Wolter H H 1999 _Nucl. Phys._ A **656** 331
* [7] Finelli P, Kaiser N, Vretenar D, Weise W 2004 _Nucl. Phys._ A **735**
* [8] Carlson J, Morales J, Pandharipande V R, Ravenhall D G 2003 _Phys. Rev._ C **68** 025802
* [9] Khoa D T, von Oertzen W, Bohlen H G and Ohkubo S 2007 _J. Phys._ G **33** R111; Khoa D T _et al._ 2005 _Nucl. Phys._ A **759** 3
* [10] Shetty D V, Yennello S J and Souliotis G A 2007 _Preprint_ arXiv:0704.0471 [nucl-ex]
* [11] Baran V, Colonna M, Greco V, Di Toro M 2005 _Phys. Rep._**410** 335
* [12] Shetty D V, Yennello S J and Souliotis G A 2007 _Phys. Rev._ C **75** 034602
* [13] Fuchs C, Faessler A, Zabrodin E, Zheng Y M 2001 _Phys. Rev. Lett._**86** 1974
* [14] Hartnack Ch, Oeschler H, Aichelin J 2006 _Phys. Rev. Lett._**96** 012302
* [15] Sturm C et al. [KaoS Collaboration] 2001 _Phys. Rev. Lett._**86** 39
* [16] Chen L W, Ko C M and Li B A 2005 _Phys. Rev. Lett._**94** 032701
* [17] Ferini G _et al._ 2006 _Phys. Rev. Lett._**97** 202301
* [18] Gaitanos T _et al._ 2004 _Nucl. Phys._ A **732** 24
* [19] Lopez X _et al._ [FOPI Collaboration] 2007 _Phys. Rev._ C **75** 011901
* [20] Fuchs C 2006 _Prog. Part. Nucl. Phys._**56** 1
* [21] Schmah A _et al._ [KaoS Collaboration] 2005 _Phys. Rev._ C **71** 064907
* [22] Laue F _et al._ [KaoS Collaboration] 1999 _Phys. Rev. Lett._**82** 1640
* [23] Danielewicz P 2000 _Nucl. Phys._ A **673** 275
* [24] Gaitanos T _et al._ 2001 _Eur. Phys. J._ A **12** 421; Fuchs C, Gaitanos T 2003 _Nucl. Phys._ A **714** 643
* [25] Andronic A _et al._ [FOPI Collaboration] 2001 _Phys. Rev._ C **64** 041604; 2003 _Phys. Rev._ C **67** 034907
* [26] Stoicea G _et al._ [FOPI Collaboration] 2004 _Phys. Rev. Lett._**92** 072303
* [27] Popov S, Grigorian H, Turolla R and Blaschke D 2006 _Astron. Astrophys._**448** 327
* [28] Nice D J _et al._ 2005 _Astrophys. J._**634** 1242
* [29] Klähn T _et al._ 2006 _Phys. Rev._ C **74** 035802
* [30] Steiner A W, Prakash M, Lattimer J M, Ellis P J 2005 _Phys. Rep._**411** 325
* [31] Lattimer J M, Pethick C J, Prakash M and Haensel P 1991 _Phys. Rev. Lett._**66** 2701
* [32] Klähn T _et al._ 2006 _Preprint_ nucl-th/0609067
# How state preparation can affect a quantum experiment: Quantum process tomography for open systems
Aik-meng Kuah
Kavan Modi
[email protected] Center for Complex Quantum Systems, The University of Texas at Austin, Austin Texas 78712
Cesar A. Rodriguez-Rosario
E.C.G. Sudarshan
Center for Complex Quantum Systems, The University of Texas at Austin, Austin Texas 78712
## I Introduction
Quantum process tomography [1; 2] is the experimental tool that determines the open evolution of a system that interacts with the surrounding environment. It is an important tool in the fields of quantum computation and quantum information that allows an experimenter to determine the action of a quantum process on the system of interest.
The standard tomography procedure and some variations of it (namely entanglement assisted tomography [3; 4], ancilla assisted tomography [5; 6], and direct characterization of quantum dynamics [7; 8]) have been verified experimentally [9; 10; 11; 12; 13; 14; 15].
In many of these experiments, the maps that characterize the quantum process have been plagued with negative eigenvalues and sometimes non-linear behavior. It was pointed out previously that the negative eigenvalues may be due to the initial correlations between the system and the environment [16; 17; 18; 19; 20].
Quantum process tomography is thought of as a procedure that allows us to experimentally determine the dynamical map describing the quantum process. However, all experiments require a method to prepare the initial states of the system at the beginning of the experiment. This act of preparation has been neglected from the theory of quantum process tomography.
We will show in this paper how the way an open system is prepared into different initial states can fundamentally change the outcome. As a consequence, we are forced to incorporate state preparation into the map that describes the process. We will have to distinguish maps determined from a quantum process tomography experiment, which we will call _process maps_, from the well-known _dynamical maps_ [21; 22]. The key difference between dynamical maps and process maps is that process maps include the initial step of state preparation, while dynamical maps are not restricted in that sense (see Appendix B).
We will study two methods for preparing states for quantum experiments, the stochastic preparation method and preparations using von Neumann measurements. In the former case, the process is given by a completely positive linear map (also see [20] for an independent, but related discussion). However for the measurement method, we will show that the outcome of the experiment cannot be consistently described by a linear map. We propose a bi-linear process map to describe such a quantum process, and we will show that this bi-linear map can be experimentally determined by developing a procedure for bi-linear quantum process tomography.
Based on our results, existing quantum process tomography experiments should be reanalyzed. Any experiment which obtained a process map that had negative eigenvalues or behaved in a non-linear fashion may suffer from poor preparation of input states, and should be analyzed for bi-linearities.
## II Linear quantum process tomography
The objective of quantum process tomography is to determine how a quantum process acts on different states of the system. In very basic terms, a quantum process takes different quantum input states to different output states:
Input states \\(\\rightarrow\\) PROCESS \\(\\rightarrow\\) Output states.
The complete behavior of the process is known if the output state for any given input state can be predicted.
The tomography aspect of quantum process tomography, is to use a finite number of carefully selected input states instead of all possible states, to determine the process. If the quantum process is described by a linear map (see [23] for detailed discussion), then the necessary input states should linearly span the state space of the system. For a finite dimensional state space, this requires a finite number of input states. Once the evolution of these input states is known, then by linearity the evolution of any input state is known.
For example for a qubit only the following four projections as input states are necessary
\\[P^{(1,-)}=\\frac{1}{2}(\\openone-\\sigma_{1}),\\;P^{(1,+)}=\\frac{1} {2}(\\openone+\\sigma_{1}),\\] \\[P^{(2,+)}=\\frac{1}{2}(\\openone+\\sigma_{2}),\\;\\;\\text{and}\\;\\;P^ {(3,+)}=\\frac{1}{2}(\\openone+\\sigma_{3}) \\tag{1}\\]
to linearly span the whole state space, i.e. any state of a qubit can be written as a unique linear combination of these four projections. Above, \\(\\openone\\) is the \\(2\\times 2\\) identity matrix and \\(\\sigma_{j}\\) are the Pauli spin matrices.
Using the set of linearly independent input states \(P^{(j)}\), and measuring the corresponding output states \(Q^{(j)}\), the evolution of an arbitrary input state can be determined. Let the linear map describing the process be given by \(\Lambda\), and the arbitrary input state be expressed (uniquely) as a linear combination \(\sum_{j}p_{j}P^{(j)}\). Then the action of the map in terms of the matrix elements is as follows:
\\[\\sum_{r^{\\prime}s^{\\prime}}\\Lambda_{rr^{\\prime},ss^{\\prime}}\\left(\\sum_{j}p_{ j}P^{(j)}_{r^{\\prime}s^{\\prime}}\\right)\\;=\\;\\sum_{j}p_{j}Q^{(j)}_{rs}.\\]
The map itself can be expressed as
\\[\\Lambda_{rr^{\\prime},ss^{\\prime}}=\\sum_{n}Q^{(n)}_{rs}\\tilde{P}^{(n)}_{r^{ \\prime}s^{\\prime}}{}^{*}, \\tag{2}\\]
where \\(\\tilde{P}^{(n)}\\) are the duals of the input states satisfying the scalar product
\\[\\tilde{P}^{(m)}{}^{\\dagger}P^{(n)}=\\sum_{rs}\\tilde{P}^{(m)}_{rs}{}^{*}P^{(n)} _{rs}=\\delta_{mn}.\\]
The duals for the projections in Eq. (1) are
\\[\\tilde{P}^{(1,-)} = \\frac{1}{2}(\\openone-\\sigma_{1}-\\sigma_{2}-\\sigma_{3}),\\] \\[\\tilde{P}^{(1,+)} = \\frac{1}{2}(\\openone+\\sigma_{1}-\\sigma_{2}-\\sigma_{3}), \\tag{3}\\] \\[\\tilde{P}^{(2,+)} = \\sigma_{2},\\;\\;\\;\\text{and}\\;\\;\\;\\tilde{P}^{(3,+)}=\\sigma_{3}.\\]
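As a quick numerical check, the duality relation and the resulting expansion of an arbitrary state can be verified with a few lines of NumPy. This sketch is an illustration we add here, not part of the original text; the test state `rho` is an arbitrary choice.

```python
# Verify that the duals of Eq. (3) satisfy Tr[P~(m)† P(n)] = delta_mn and
# that c_n = Tr[P~(n)† rho] reproduces an arbitrary qubit state.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Input projections of Eq. (1) and their duals of Eq. (3)
P = [(I2 - sx) / 2, (I2 + sx) / 2, (I2 + sy) / 2, (I2 + sz) / 2]
D = [(I2 - sx - sy - sz) / 2, (I2 + sx - sy - sz) / 2, sy, sz]

# Duality: Tr[D_m† P_n] = delta_mn
gram = np.array([[np.trace(Dm.conj().T @ Pn) for Pn in P] for Dm in D])
assert np.allclose(gram, np.eye(4))

# Any qubit state is the unique combination rho = sum_n c_n P_n
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])  # arbitrary test state
c = [np.trace(Dn.conj().T @ rho) for Dn in D]
assert np.allclose(sum(cn * Pn for cn, Pn in zip(c, P)), rho)
```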
## III Preparation of input states
It is important to note the assumption made at the beginning of the last section, that is, the process is given by a linear map. This assumption is made without any consideration to how the input states are prepared. We will show that for the same input state, evolving through the same Hamiltonian, the output state would be different depending on whether the input state is prepared stochastically or by a measurement.
The basic steps in a quantum process tomography experiment are broken down below:
1. Just before the experiment begins, the system and environment are in an unknown state, which we will write as \(\gamma_{0}\). The system and environment could be entangled or correlated. For our discussions we will label the system as \(\mathbb{A}\) and the environment as \(\mathbb{B}\).
2. The system is prepared into a known input state. Let \\(\\mathscr{P}^{(n)}\\) be the map that prepares the system into the \\(n^{th}\\) input state. The system and environment state after preparation is therefore given by \\((\\mathscr{P}^{(n)}\\otimes\\mathcal{I})(\\gamma_{0})\\), where \\(\\mathcal{I}\\) is the identity map acting on the space of the environment.
3. The system is then sent through an unknown quantum process. We consider the evolution to be a unitary transformation \\(U\\) in the space of the system _and_ environment: \\[U(\\mathscr{P}^{(n)}\\otimes\\mathcal{I})(\\gamma_{0})U^{\\dagger}.\\]
4. The trace with respect to the environment is taken to obtain the output state of the system \\[Q^{(n)}=\\text{Tr}_{\\mathbb{B}}\\left[U(\\mathscr{P}^{(n)}\\otimes\\mathcal{I})( \\gamma_{0})U^{\\dagger}\\right],\\] (4) where \\(Q^{(n)}\\) is the output state corresponding to the input state prepared by \\(\\mathscr{P}^{(n)}\\).
5. Finally using the input and the output states, a map describing the process is constructed.
It is important to keep in mind that these basic steps (excluding the last step) also describe most quantum experiments, not just specifically quantum process tomography experiments. Therefore the following results may be applicable to many quantum experiments, not just to quantum process tomography experiments.
## IV Stochastic preparation
There are two methods in quantum theory to prepare an unknown state into a known state; stochastic preparations and preparations by measurements. In this section, we will discuss the method of using stochastic maps for preparation of states.
Let us consider preparation by a stochastic pin map, which maps all states to a fixed single state [24]. Using a pin map for preparation is a common initial step in various experiments. For example, in quantum dot experiments the system is cooled very close to absolute zero temperature, which ensures that the probability of the system being in the ground state is nearly one. This is effectively a pin map to the ground state.
The experiment procedure will begin with a pin map \\(\\Theta\\), which takes any density matrix to a fixed pure state \\(\\left|\\Phi\\right\\rangle\\). Then the state of the system and environment after the pin map is:
\\[\\left(\\Theta\\otimes\\mathcal{I}\\right)\\left(\\gamma_{0}\\right)=\\left|\\Phi \\right\\rangle\\left\\langle\\Phi\\right|\\otimes\\tau(\\Theta), \\tag{5}\\]
where \\(\\tau(\\Theta)=\\mathrm{Tr}_{\\mathrm{\\Lambda}}[\\left(\\Theta\\otimes\\mathcal{I} \\right)\\left(\\gamma_{0}\\right)]\\). The pin map fixes the system into a single pure state, which in turn means that the state of the environment is fixed into a single state as well. The purpose of the pin map is to decouple the system from the environment, to eliminate any correlation between the system state and the environment state. Note that the state of the environment does depend on the choice of the pin map \\(\\Theta\\).
Once the pin map is applied, the system has to be prepared into the various different input states for the tomography experiment. This can be expressed in the most general way with stochastic maps:
\\[\\Omega^{(n)}\\left(\\left|\\Phi\\right\\rangle\\left\\langle\\Phi\\right|\\right)=P^{(n)}, \\tag{6}\\]
where \\(P^{(n)}\\) are the desired input states.
The preparation procedure can be summarized as \\(\\mathscr{P}^{(n)}=\\Omega^{(n)}\\circ\\Theta\\). The overall experiment can be written in a single equation by combining Eqs. (4\\(-\\)6):
\\[Q^{(n)} = \\mathrm{Tr}_{\\mathrm{B}}\\!\\left[U\\left(\\left[\\Omega^{(n)}\\circ \\Theta\\right]\\otimes\\mathcal{I}\\right)\\left(\\gamma_{0}\\right)U^{\\dagger}\\right] \\tag{7}\\] \\[= \\mathrm{Tr}_{\\mathrm{B}}\\left[UP^{(n)}\\otimes\\tau(\\Theta)U^{ \\dagger}\\right].\\]
We will call this equation 'the process equation'. Notice that this process equation is linear in \(P^{(n)}\). Once the input states are prepared, the procedure for quantum process tomography is the same as given in section II. This is a generalization of linear quantum process tomography.
It should be emphasized that the initial pin map \\(\\Theta\\) is critical; because for the process to be linear the state of the environment must be independent of the input state. It may be tempting to simply use a set of pin maps, \\(\\Theta^{(n)}\\), to prepare the various input states \\(P^{(n)}\\). However, the process equation in this case yields:
\\[Q^{(n)} = \\mathrm{Tr}_{\\mathrm{B}}\\!\\left[U\\;\\Theta^{(n)}\\otimes\\mathcal{I }\\left(\\gamma_{0}\\right)U^{\\dagger}\\right] \\tag{8}\\] \\[= \\mathrm{Tr}_{\\mathrm{B}}\\left[UP^{(n)}\\otimes\\tau(\\Theta^{(n)})U ^{\\dagger}\\right].\\]
This is no longer a linear equation in \(P^{(n)}\), since the state of the environment, \(\tau(\Theta^{(n)})\), is effectively dependent on \(P^{(n)}\).
### Example of Stochastic Preparation
It may be instructive to look at a simple example involving two qubits at this point. We will treat one qubit as the system of interest and the other as the unknowable environment.
Consider the preparation by a pin map \\(\\Theta\\)
\\[\\Theta\\otimes\\mathcal{I}\\left(\\gamma_{0}\\right)=\\left|\\Phi\\right\\rangle \\left\\langle\\Phi\\right|\\otimes\\frac{1}{2}\\openone. \\tag{9}\\]
that yields a pure state \\(\\left|\\Phi\\right\\rangle\\) for the system qubit and a completely mixed state for the environment qubit.
The next step is to create different input states using different maps \\(\\Omega^{(n)}\\). In this case, the fixed state \\(\\left|\\Phi\\right\\rangle\\left\\langle\\Phi\\right|\\) can simply be locally rotated to get the desired input state \\(P^{(n)}\\) given in Eq. (1)
\\[\\Omega^{(n)}\\left(\\left|\\Phi\\right\\rangle\\left\\langle\\Phi \\right|\\right)\\otimes\\frac{1}{2}\\openone = V^{(n)}\\left|\\Phi\\right\\rangle\\left\\langle\\Phi\\right|V^{(n)^{ \\dagger}}\\otimes\\frac{1}{2}\\openone \\tag{10}\\] \\[= P^{(n)}\\otimes\\frac{1}{2}\\openone,\\]
where \\(n=\\{(1,-),(1,+),(2,+),(3,+)\\}\\) and \\(V^{(n)}\\) are the unitary operators acting on the space of the system. Now each input state is sent through the quantum process. The output states can be calculated in a straight forward manner using the process equation Eq. (7). For this example we chose the unitary transformations \\(U\\) to be
\\[U=e^{-iHt}=e^{-i\\sum_{j}\\sigma_{j}\\otimes\\sigma_{j}\\;t}. \\tag{11}\\]
The output states are
\\[Q^{(1,-)} =\\frac{1}{2}(\\openone-C^{2}\\sigma_{1}), Q^{(1,+)} =\\frac{1}{2}(\\openone+C^{2}\\sigma_{1}),\\] \\[Q^{(2,+)} =\\frac{1}{2}(\\openone+C^{2}\\sigma_{2}),\\text{ and }\\;Q^{(3,+)} =\\frac{1}{2}(\\openone+C^{2}\\sigma_{3}).\\]
The linear process map is constructed using Eq. (2), the duals in Eq. (3), and the output states:
\\[\\Lambda_{s}=\\frac{1}{2}\\left(\\begin{array}{cccc}1+C^{2}&0&0&2C^{2}\\\\ 0&1-C^{2}&&0\\\\ 0&0&1-C^{2}&0\\\\ 2C^{2}&0&0&1+C^{2}\\end{array}\\right),\\]
where \\(C=\\cos(2t)\\).
We can verify that the process is correctly represented by this linear map by calculating \\(\\Lambda_{s}(\\rho)\\) and comparing it to a direct calculation \\(\\mathrm{Tr}_{\\mathrm{B}}[U\\rho\\otimes\\frac{1}{2}\\openone U^{\\dagger}]\\), for any \\(\\rho\\).
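The following sketch (an illustration we add here, not part of the original text) carries out this verification numerically: it builds \(U\) from Eq. (11), computes the output states of Eq. (7), assembles the action of \(\Lambda_{s}\) from the duals of Eq. (3), and compares it with the direct evolution for an arbitrary test state. The value of \(t\) and the state `rho` are arbitrary choices.

```python
# Stochastic-preparation example: build the linear process map from the
# four input/output pairs and check it against Tr_B[U rho x tau U†].
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_h(H, t):
    """U = exp(-i H t) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

def tr_B(G):
    """Partial trace over the second qubit of a 4x4 matrix."""
    return np.einsum('abcb->ac', G.reshape(2, 2, 2, 2))

t = 0.3                                   # arbitrary evolution time
H = sum(np.kron(s, s) for s in (sx, sy, sz))   # Hamiltonian of Eq. (11)
U = expm_h(H, t)
tau = I2 / 2                              # environment state fixed by the pin map

P = [(I2 - sx) / 2, (I2 + sx) / 2, (I2 + sy) / 2, (I2 + sz) / 2]
D = [(I2 - sx - sy - sz) / 2, (I2 + sx - sy - sz) / 2, sy, sz]

# Output states, Eq. (7)
Q = [tr_B(U @ np.kron(Pn, tau) @ U.conj().T) for Pn in P]

# Action of the linear process map of Eq. (2): Lambda_s(rho) = sum_n c_n Q_n
def Lambda_s(rho):
    return sum(np.trace(Dn.conj().T @ rho) * Qn for Dn, Qn in zip(D, Q))

# Check against the direct evolution for an arbitrary state
rho = np.array([[0.6, 0.1 + 0.2j], [0.1 - 0.2j, 0.4]])
assert np.allclose(Lambda_s(rho), tr_B(U @ np.kron(rho, tau) @ U.conj().T))

# The Bloch vector shrinks by C^2 = cos(2t)^2, e.g. for Q^(3,+):
assert np.allclose(Q[3], (I2 + np.cos(2 * t) ** 2 * sz) / 2)
```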
## V Preparation by Measurements
Making a quantum measurement is another effective method of preparing input states for an experiment. For our discussion, we will use the von Neumann model for the measurement process. With the von Neumann model, a measurement is given by a set of orthonormal projections. If a particular outcome is observed from the measurement, the state of the system collapses to the corresponding projection. Therefore, the input states can be prepared for an experiment by suitably fixing our measurement basis. With the knowledge of the basis and the outcomes, the exact input state is known.
For experiments dealing with polarization states of photons this is a very practical method for preparation. Sending a randomly polarized beam of photons through a polarizer will yield a perfectly polarized beam of light. This polarized beam can be used as one of the input states for a quantum tomography experiment.
Suppose the \\(n^{th}\\) input state is prepared, given by the projection \\(P^{(n)}\\), by measurement. For the open system, the state becomes
\\[\\mathscr{P}^{(n)}\\otimes\\mathcal{I}(\\gamma_{0})=\\frac{1}{\\Gamma_{n}}\\left(P^{( n)}\\otimes\\mathds{1}\\right)\\gamma_{0}\\left(P^{(n)}\\otimes\\mathds{1}\\right), \\tag{12}\\]
where \\(\\Gamma^{(n)}=\\mathrm{Tr}[\\left(P^{(n)}\\otimes\\mathds{1}\\right)\\gamma_{0}]\\) is the normalization factor (it is the probability of obtaining that particular input state from a von Neumann measurement). For simplicity, we will from this point on, simply write \\(P^{(n)}\\) instead of \\(P^{(n)}\\otimes\\mathds{1}\\).
Combining Eq. (4) and Eq. (12) leads to the process equation that will relate the output states to the input states. For the \\(n^{th}\\) input state the process equation is
\\[Q^{(n)}=\\frac{1}{\\Gamma^{(n)}}\\mathrm{Tr}_{\\mathbb{B}}\\left[UP^{(n)}\\gamma_{0} P^{(n)}U^{\\dagger}\\right]. \\tag{13}\\]
Is this process given by a linear map? Dynamically, the evolution of the total state \(\gamma_{0}\) is linear. However, for the purpose of tomography, the dynamics of the prepared input states \(P^{(n)}\) is of interest, and \(P^{(n)}\) appears twice in the process equation; therefore the output states \(Q^{(n)}\) depend bi-linearly on \(P^{(n)}\). That is, suppose \(P=\alpha A+\beta B\), where \(A\) and \(B\) are some operators and \(\alpha\) and \(\beta\) are constants; then
\\[P\\gamma_{0}P = (\\alpha A+\\beta B)\\gamma_{0}(\\alpha A+\\beta B)\\] \\[= \\alpha^{2}A\\gamma_{0}A+\\alpha\\beta A\\gamma_{0}B+\\alpha\\beta B \\gamma_{0}A+\\beta^{2}B\\gamma_{0}B.\\]
The last result shows that \(Q\) has a bi-linear dependence on \(P\). This bi-linearity can also be seen from the dependence of the environment state on the state of the system. To see this, expand Eq. (13) as:
\\[Q^{(n)} = \\mathrm{Tr}_{\\mathbb{B}}\\left[UP^{(n)}\\otimes\\tau^{(n)}U^{ \\dagger}\\right],\\] \\[\\mathrm{with}\\] \\[\\tau^{(n)} = \\frac{1}{\\Gamma^{(n)}}\\mathrm{Tr}_{\\mathbb{A}}\\left[P^{(n)}\\gamma _{0}\\right].\\]
The last equation clearly shows that the environment state \\(\\tau^{(n)}\\) is in effect a function of \\(P^{(n)}\\). It is well known [19; 20; 25] that if the initial state of the system is related to the state of the environment by some function \\(f\\):
\\[\\rho^{\\mathbb{A}\\mathbb{B}}=\\sum_{j}\\rho_{j}^{\\mathbb{A}}\\otimes f(\\rho_{j}^{ \\mathbb{A}})^{\\mathbb{B}},\\]
where where \\(\\rho^{\\mathbb{A}}\\) is the state of the system and \\(f(\\rho^{\\mathbb{A}})^{\\mathbb{B}}\\) are the density matrix in the space of the environment. Then the evolution of the reduced matrices \\(\\rho^{\\mathbb{A}}\\) cannot be consistently described by a single linear map. In this case, the function \\(f\\) is of a specific form that gives us a bi-linear dependence.
We now look at a simple example to demonstrate that when input states are prepared by measurement, the results cannot be consistently described by a linear map.
### Example of Preparation by Measurements
Our example demonstrates that when the output states are bi-linearly related to the input states, the standard quantum process tomography procedure will give incorrect results. Suppose we are unaware of the bi-linear dependence that arises from preparing input states by measurements, and assume that the process is given by a linear map. We would prepare a set of linearly independent input states, then construct the linear process map from the measured output states and the duals of the input states. We will show that this linear map will not give the correct prediction for certain inputs.
Consider the following two qubit state as the available state to the experimenter at \\(t=0_{-}\\):
\\[\\gamma_{0}=\\frac{1}{4}\\left(\\mathds{1}\\otimes\\mathds{1}+a_{j}\\sigma_{j} \\otimes\\mathds{1}+c_{23}\\sigma_{2}\\otimes\\sigma_{3}\\right). \\tag{14}\\]
We will treat the first qubit as the system and the second qubit as the environment. Note that the total state is separable, but the system is correlated with the environment.
Once again we will use the input states given in Eq. (1). The state of the system plus the environment after each measurement takes the following form:
\\[P^{(n)}\\gamma_{0}P^{(n)}\\to P^{(n)}\\otimes\\frac{1}{2}\\mathds{1} \\ \\ (\\mathrm{for}\\ n=\\{(1,\\pm),(3,+)\\}),\\] \\[P^{(2,+)}\\gamma_{0}P^{(2,+)}\\to P^{(2,+)}\\otimes\\frac{1}{2} \\left(\\mathds{1}+\\frac{c_{23}}{1+a_{2}}\\sigma_{3}\\right). \\tag{15}\\]
Following the same recipe as before, the output states are obtained using Eq. (13). We will use the same unitary \(U\) as in the previous example, given in Eq. (11). The output states are as follows:
\\[Q^{(1,-)} = \\frac{1}{2}\\left(\\mathds{1}-C^{2}\\sigma_{1}\\right),\\ \\ Q^{(1,+)}=\\frac{1}{2}\\left(\\mathds{1}+C^{2}\\sigma_{1}\\right),\\] \\[Q^{(2,+)} = \\frac{1}{2}\\left(\\mathds{1}-c_{23}^{+}CS\\sigma_{1}+C^{2}\\sigma_{2 }+c_{23}^{+}S^{2}\\sigma_{3}\\right),\\] \\[\\mathrm{and}\\ \\ \\ \\ \\ Q^{(3,+)}=\\frac{1}{2}\\left(\\mathds{1}+S^{2} \\sigma_{3}\\right),\\]
where \\(C=\\cos(2t)\\), \\(S=\\sin(2t)\\), and \\(c_{23}^{+}=\\frac{c_{23}}{1+a_{2}}\\).
The linear process map is constructed using Eq. (2), the duals in Eq. (3), and the output states:
\\[\\Lambda_{m}=\\frac{1}{2}\\left(\\begin{array}{cccc}1+C^{2}&ic_{23}^{+}S^{2}&0&2 C^{2}-ic_{23}^{+}CS\\\\ -ic_{23}^{+}S^{2}&1-C^{2}&ic_{23}^{+}CS&0\\\\ 0&-ic_{23}^{+}CS&1-C^{2}&-ic_{23}^{+}S^{2}\\\\ 2C^{2}+ic_{23}^{+}CS&0&ic_{23}^{+}S^{2}&1+C^{2}\\end{array}\\right).\\]Now consider the action of \\(\\Lambda_{m}\\) on state \\(P^{(2,-)}=\\frac{1}{2}(\\openone-\\sigma_{2})\\). \\(P^{(2,-)}\\) is a linear combination of the input states as \\(P^{(2,-)}=P^{(1,-)}+P^{(1,+)}-P^{(2,+)}\\). If the action of \\(\\Lambda_{m}\\) is linear then the output state corresponding to the input state \\(P^{(2,-)}\\), should be
\\[\\Lambda_{m}\\left(P^{(2,-)}\\right) = \\Lambda_{m}\\left(P^{(1,-)}+P^{(1,+)}-P^{(2,+)}\\right)\\] \\[= Q^{(1,-)}+Q^{(1,+)}-Q^{(2,+)}\\] \\[= \\frac{1}{2}\\left(\\openone+c_{23}^{+}CS\\sigma_{1}-C^{2}\\sigma_{2 }-c_{23}^{+}S^{2}\\sigma_{3}\\right).\\]
Is this the same as if the global state was prepared in the state \\(P^{(2,-)}\\) by a measurement? The output state for input \\(P^{(2,-)}\\) can be calculated using Eq. (13),
\\[Q^{(2,-)} = \\frac{1}{\\Gamma_{(2,-)}}{\\rm Tr}_{\\rm S}\\left[UP^{(2,-)}\\gamma_{ 0}P^{(2,-)}U^{\\dagger}\\right] \\tag{17}\\] \\[= \\frac{1}{2}\\left(\\openone-c_{23}^{-}CS\\sigma_{1}-C^{2}\\sigma_{2 }-c_{23}^{-}S^{2}\\sigma_{3}\\right),\\]
where \\(c_{23}^{-}=\\frac{c_{23}}{1-a_{2}}\\).
Clearly the output state predicted by the linear process map in Eq. (16) is not the same as the real output state calculated dynamically in Eq. (17); hence the linear process map \(\Lambda_{m}\) does not describe the process correctly. This is not surprising: the state of the environment in Eq. (15) depends on \(a_{2}\), and subsequently the linear process map depends on \(a_{2}\), hence on the state of the system. This is where the non-linearity of the process arises.
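A numerical version of this example makes the failure explicit. In the sketch below (an addition of ours; the values of \(a_{2}\), \(c_{23}\) and \(t\) are illustrative choices, checked to give a valid \(\gamma_{0}\)), the linear map assembled from the four measured outputs mispredicts the output for \(P^{(2,-)}\).

```python
# Measurement-prepared inputs: the linear map built from four outputs
# fails to predict the output of a fifth, measurement-prepared input.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_h(H, t):
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

def tr_B(G):
    return np.einsum('abcb->ac', G.reshape(2, 2, 2, 2))

a2, c23, t = 0.3, 0.2, 0.3                     # illustrative assumptions
gamma0 = (np.kron(I2, I2) + a2 * np.kron(sy, I2)
          + c23 * np.kron(sy, sz)) / 4          # Eq. (14) with a = (0, a2, 0)
assert np.all(np.linalg.eigvalsh(gamma0) >= 0)  # valid joint state

U = expm_h(sum(np.kron(s, s) for s in (sx, sy, sz)), t)

def out(P):
    """Normalized output for a measurement-prepared input, Eq. (13)."""
    PI = np.kron(P, I2)
    R = tr_B(U @ PI @ gamma0 @ PI @ U.conj().T)
    return R / np.trace(R)

P = [(I2 - sx) / 2, (I2 + sx) / 2, (I2 + sy) / 2, (I2 + sz) / 2]
D = [(I2 - sx - sy - sz) / 2, (I2 + sx - sy - sz) / 2, sy, sz]
Q = [out(Pn) for Pn in P]

# Linear-map prediction for P^(2,-) = P^(1,-) + P^(1,+) - P^(2,+) ...
P2m = (I2 - sy) / 2
pred = sum(np.trace(Dn.conj().T @ P2m) * Qn for Dn, Qn in zip(D, Q))
# ... versus the true output when P^(2,-) is prepared by a measurement:
true = out(P2m)
assert not np.allclose(pred, true)   # the process is not linear
print(np.max(np.abs(pred - true)))
```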
In the next section we will show that this process can be correctly described using a bi-linear process map. We will also develop a new tomography procedure for bi-linear process maps.
## VI Characterization of bi-linear quantum processes
Let us expand Eq. (13) with matrix indices:
\\[Q^{(n)}_{r,s} = \\frac{1}{\\Gamma^{(n)}}U_{re,r^{\\prime}\\alpha}P^{(n)}_{r^{\\prime} r^{\\prime\\prime}}\\gamma_{0}r^{\\prime\\prime}\\alpha,s^{\\prime\\prime}\\beta}P^{(n)}_{s^ {\\prime\\prime}s^{\\prime}}U^{*}_{se,s^{\\prime}\\beta} \\tag{18}\\] \\[= \\frac{1}{\\Gamma^{(n)}}P^{(n)^{*}}_{r^{\\prime\\prime}}\\left(U_{re,r ^{\\prime}\\alpha}\\gamma_{0}r^{\\prime\\alpha,s^{\\prime\\prime}\\beta}U^{*}_{se,s^ {\\prime}\\beta}\\right)P^{(n)}_{s^{\\prime\\prime}s^{\\prime}}\\] \\[= \\frac{1}{\\Gamma^{(n)}}P^{(n)^{*}}_{r^{\\prime\\prime}r^{\\prime}}M^ {(r,s)}_{r^{\\prime\\prime}r^{\\prime}s^{\\prime\\prime}s^{\\prime}}P^{(n)}_{s^{ \\prime\\prime}s^{\\prime}}\\;.\\]
In the last equation, the matrix \\(M\\) is defined as:
\\[M^{(r,s)}_{r^{\\prime\\prime}r^{\\prime},s^{\\prime\\prime}s^{\\prime}}=\\sum_{ \\alpha\\beta\\epsilon}U_{re,r^{\\prime}\\alpha}\\gamma_{0}r^{\\prime\\prime}\\alpha, s^{\\prime\\prime}\\beta}U^{*}_{se,s^{\\prime}\\beta}. \\tag{19}\\]
Note that in Eq. (18) the superscript indices on \(M\) match the elements on the left-hand side of the equation, while the subscript indices are summed on the right-hand side.
The output state \\(Q^{(n)}\\) is given by the matrix \\(M\\) acting bi-linearly on the input state \\(P^{(n)}\\). Therefore the matrix \\(M\\) fully describes the process, and we will call \\(M\\)_the bi-linear process map_.
\\(M\\) contains both \\(U\\) and \\(\\gamma_{0}\\), however knowing \\(M\\), is not sufficient to determine \\(U\\) and \\(\\gamma_{0}\\). As expected, it should not be possible to determine \\(U\\) and \\(\\gamma_{0}\\) through measurements and preparations on the system alone, without access to the environment. Conversely, \\(M\\) contains all the information that is necessary to fully determine the output state for any prepared input state.
Before proceeding to determine the matrix \\(M\\), let us look at some basic properties of \\(M\\).
### Some Basic Properties of \\(M\\)
Let us start with the trace of \(M\). To take the trace, equate indices \(r\) with \(s\), \(r^{\prime}\) with \(s^{\prime}\), and \(r^{\prime\prime}\) with \(s^{\prime\prime}\) to get:
\\[{\\rm Tr}[M] = \\sum_{rr^{\\prime\\prime}s^{\\prime\\prime}}M^{(r,r)}_{r^{\\prime\\prime \\prime}r^{\\prime},s^{\\prime\\prime\\prime}r^{\\prime}}\\] \\[= \\sum_{\\alpha\\beta\\epsilon}\\sum_{rr^{\\prime}r^{\\prime\\prime}}U_{re,r ^{\\prime}\\alpha}\\gamma_{0}r^{\\prime\\prime}\\beta}U^{*}_{re,r^{\\prime}\\beta}.\\]
Since \\(U^{\\dagger}U=I\\Rightarrow U^{*}_{re,r^{\\prime}\\beta}U_{re,r^{\\prime}\\alpha}= \\delta_{\\alpha\\beta}\\), then
\\[{\\rm Tr}[M] = \\sum_{\\alpha r^{\\prime\\prime}}\\gamma_{0}r^{\\prime\\prime}\\alpha,r^{ \\prime\\prime}\\alpha}=1.\\]
As with the case of linear process maps, matrix \\(M\\) is hermitian. This is easy to see by taking the complex conjugate of \\(M\\),
\[\left(M^{(r,s)}_{r^{\prime\prime}r^{\prime},s^{\prime\prime}s^{\prime}}\right)^{*} = \left(\sum_{\alpha\beta\epsilon}U_{r\epsilon,r^{\prime}\alpha}(\gamma_{0})_{r^{\prime\prime}\alpha,s^{\prime\prime}\beta}U^{*}_{s\epsilon,s^{\prime}\beta}\right)^{*}\] \[= \sum_{\alpha\beta\epsilon}U_{s\epsilon,s^{\prime}\beta}(\gamma_{0})_{s^{\prime\prime}\beta,r^{\prime\prime}\alpha}U^{*}_{r\epsilon,r^{\prime}\alpha}\] \[= M^{(s,r)}_{s^{\prime\prime}s^{\prime},r^{\prime\prime}r^{\prime}}.\]
The complex conjugate of \(M\) is not only the transpose of \(M\) in the block indices, but each block element of \(M\) is also transposed.
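These properties can be checked numerically by constructing \(M\) directly from Eq. (19). In the sketch below (our addition), the joint state, the unitary, and the three-dimensional environment are random illustrative choices, and the trace is taken in the sense used above, at a fixed value of \(r^{\prime}\).

```python
# Build the bi-linear process map M of Eq. (19) and check its trace,
# hermiticity, and the process equation, Eq. (18).
import numpy as np

rng = np.random.default_rng(1)
N, dE = 2, 3                                    # system / environment dims

A = rng.normal(size=(N*dE, N*dE)) + 1j * rng.normal(size=(N*dE, N*dE))
gamma0 = A @ A.conj().T
gamma0 /= np.trace(gamma0)                      # random joint state (assumption)
Uq, _ = np.linalg.qr(rng.normal(size=(N*dE, N*dE))
                     + 1j * rng.normal(size=(N*dE, N*dE)))  # random unitary

Uf = Uq.reshape(N, dE, N, dE)                   # U_{r eps, r' alpha}
g = gamma0.reshape(N, dE, N, dE)                # (gamma0)_{r'' alpha, s'' beta}

# Eq. (19), with axes ordered (r, s, r'', r', s'', s'):
M = np.einsum('rexa,uavb,seyb->rsuxvy', Uf, g, Uf.conj())

# Trace in the sense used above (r' fixed, here r' = 0): equals 1
assert np.isclose(np.einsum('rruu->', M[:, :, :, 0, :, 0]), 1.0)

# Hermiticity: (M^(r,s)_{r''r',s''s'})* = M^(s,r)_{s''s',r''r'}
assert np.allclose(M.conj(), np.einsum('rsuxvy->srvyux', M))

# Process equation (18): <P|M|P> equals Tr_B[U P gamma0 P U†] (unnormalized)
P = np.array([[1, 0], [0, 0]], dtype=complex)   # a sample projection
R = np.einsum('ux,rsuxvy,vy->rs', P.conj(), M, P)
PI = np.kron(P, np.eye(dE))
direct = np.einsum('aibi->ab',
                   (Uq @ PI @ gamma0 @ PI @ Uq.conj().T).reshape(N, dE, N, dE))
assert np.allclose(R, direct)
```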
### Bi-linear map for a qubit
We now have to develop a new tomography procedure to deal with the bi-linear process map \(M\). We will need to figure out a finite set of input states and the corresponding output states that will allow us to determine \(M\). After all, this is the objective of tomography: by performing measurements on a small number of select input states, a complete description of the process can be obtained, and the output state for _any_ arbitrary input state can be predicted.
\\(M\\) is a large (\\(N^{3}\\times N^{3}\\)) matrix, where \\(N\\) is the dimension of the space of system. To make it more manageable, Eq. (18) can be interpreted as:
\\[\\left\\langle P^{(n)}\\right|M\\left|P^{(n)}\\right\\rangle=\\Gamma^{(n)}Q^{(n)}, \\tag{20}\\]where \\(P^{(n)}\\) is now treated as a vector. In this form, \\(M\\) is a \\(N^{2}\\times N^{2}\\) matrix with each element being a \\(N\\times N\\) matrix. We will call Eq (20) the '_bi-linear process equation_'.
Since \\(M\\) is a Hermitian matrix, it has \\(\\frac{1}{2}(N^{4}+N^{2})\\) independent block elements. Therefore \\(\\frac{1}{2}(N^{4}+N^{2})\\) independent equations in the form of Eq. (20) are necessary to fully determine \\(M\\). It is clear that neither an orthonormal set of \\(N\\) or linearly independent set of \\(N^{2}\\) input states would provide sufficient number of equations to resolve the bi-linear process map \\(M\\).
Consider a qubit system. In that case the projections \\(P^{(n)}\\) can be written in terms its coherence vector \\(a_{j}\\) and Pauli spin matrices \\(\\sigma_{j}\\):
\\[P^{(n)}=\\frac{1}{2}\\left(\\openone+\\sum_{j=1}^{3}a_{j}^{(n)}\\sigma_{j}\\right). \\tag{21}\\]
Since \\(P^{(n)}\\) is a projection then there is an additional constrain \\(\\sum_{j}(a_{j}^{(n)})^{2}=1\\).
The matrices \\(\\openone\\) and \\(\\sigma_{j}\\) together forms a vector basis for this space. Therefore Eq. (21) is simply a vector decomposition of \\(P^{(n)}\\) in a fixed basis. Taking this form for \\(P^{(n)}\\) and substituting into Eq. (20) gives:
\\[\\Gamma^{(n)}Q^{(n)} = \\langle P^{(n)}|\\,M\\,|P^{(n)}\\rangle \\tag{22}\\] \\[= \\frac{1}{4}\\,\\langle\\openone|M|\\openone\\rangle+\\frac{a_{j}^{(n) }}{4}\\,(\\langle\\openone|M|\\sigma_{j}\\rangle+\\langle\\sigma_{j}|M|\\openone\\rangle)\\] \\[+\\frac{a_{j}^{(n)}a_{k}^{(n)}}{4}\\,\\langle\\sigma_{j}|M|\\sigma_{k }\\rangle\\,.\\]
Observe that the terms \\(\\langle\\openone|M|\\openone\\rangle\\), \\(\\langle\\openone|M|\\sigma_{j}\\rangle\\), \\(\\langle\\sigma_{j}|M|\\openone\\rangle\\) and \\(\\langle\\sigma_{j}|M|\\sigma_{k}\\rangle\\), are simply the matrix elements of \\(M\\) in \\(\\{\\openone,\\sigma_{j}\\}\\) basis. We just need to find a set of projections \\(P^{(n)}\\) that will allow us to solve for these matrix elements.
Consider the following specific projections defined as \\(P^{(j,\\pm)}=\\frac{1}{2}(\\openone\\pm\\sigma_{j})\\) with \\(j=\\{1,2,3\\}\\).
\\[\\Gamma^{(j,\\pm)}Q^{(j,\\pm)} = \\langle P^{(j,\\pm)}|M|P^{(j,\\pm)}\\rangle\\] \\[= \\frac{1}{4}(\\langle\\openone|M|\\openone\\rangle+\\langle\\sigma_{j} |M|\\sigma_{j}\\rangle)\\] \\[\\quad\\pm\\frac{1}{4}(\\langle\\sigma_{j}|M|\\openone\\rangle+\\langle \\openone|M|\\sigma_{j}\\rangle).\\]
Simultaneously solving the \((+)\) and \((-)\) equations above gives the following unknowns:
\\[\\langle\\openone|M|\\openone\\rangle+\\langle\\sigma_{j}|M|\\sigma_{j} \\rangle=2\\left(\\Gamma^{(j,+)}Q^{(j,+)}+\\Gamma^{(j,-)}Q^{(j,-)}\\right)\\] \\[\\langle\\openone|M|\\sigma_{j}\\rangle+\\langle\\sigma_{j}|M| \\openone\\rangle=2\\left(\\Gamma^{(j,+)}Q^{(j,-)}-\\Gamma^{(j,-)}Q^{(j,-)}\\right).\\]
To obtain the cross terms \(\langle\sigma_{j}|M|\sigma_{k}\rangle\), consider projections such as \(P^{(j+k+1,+)}=\frac{1}{2}\left(\openone+\frac{1}{\sqrt{2}}\sigma_{j}+\frac{1}{\sqrt{2}}\sigma_{k}\right)\) for \(k>j\), which gives:
\[\Gamma^{(j+k+1,+)}Q^{(j+k+1,+)} = \langle P^{(j+k+1,+)}|\,M\,|P^{(j+k+1,+)}\rangle\] \[= \frac{1}{8}\left(\langle\openone|M|\openone\rangle+\langle\sigma_{j}|M|\sigma_{j}\rangle\right)\] \[+\frac{1}{8}\left(\langle\openone|M|\openone\rangle+\langle\sigma_{k}|M|\sigma_{k}\rangle\right)\] \[+\frac{1}{4\sqrt{2}}\left(\langle\openone|M|\sigma_{j}\rangle+\langle\sigma_{j}|M|\openone\rangle\right)\] \[+\frac{1}{4\sqrt{2}}\left(\langle\openone|M|\sigma_{k}\rangle+\langle\sigma_{k}|M|\openone\rangle\right)\] \[+\frac{1}{8}\left(\langle\sigma_{j}|M|\sigma_{k}\rangle+\langle\sigma_{k}|M|\sigma_{j}\rangle\right).\]
Substitute the known terms and solve for the desired cross terms,
\\[\\langle\\sigma_{j}|M|\\sigma_{k}\\rangle+\\langle\\sigma_{k}|M|\\sigma_ {j}\\rangle = -2\\left(1+\\sqrt{2}\\right)\\Gamma^{(j,+)}Q^{(j,+)}\\] \\[-2\\left(1-\\sqrt{2}\\right)\\Gamma^{(j,-)}Q^{(j,-)}\\] \\[-2\\left(1+\\sqrt{2}\\right)\\Gamma^{(k,+)}Q^{(k,+)}\\] \\[-2\\left(1-\\sqrt{2}\\right)\\Gamma^{(k,-)}Q^{(k,-)}\\] \\[+8\\Gamma^{(j+k+1,+)}Q^{(j+k+1,+)}.\\]
In summary, using the following nine projections,
\\[P^{(j,+)} = \\frac{1}{2}\\left(\\openone+\\sigma_{j}\\right),\\,\\,\\,P^{(j,-)}=\\frac {1}{2}\\left(\\openone-\\sigma_{j}\\right),\\] \\[P^{(4,+)} = \\frac{1}{2}\\left(\\openone+\\frac{1}{\\sqrt{2}}\\sigma_{1}+\\frac{1}{ \\sqrt{2}}\\sigma_{2}\\right),\\] \\[P^{(5,+)} = \\frac{1}{2}\\left(\\openone+\\frac{1}{\\sqrt{2}}\\sigma_{1}+\\frac{1}{ \\sqrt{2}}\\sigma_{3}\\right), \\tag{24}\\] \\[P^{(6,+)} = \\frac{1}{2}\\left(\\openone+\\frac{1}{\\sqrt{2}}\\sigma_{2}+\\frac{1}{ \\sqrt{2}}\\sigma_{3}\\right),\\]
and solving them simultaneously yields all desired matrix elements: \\(\\langle\\openone|M|\\openone\\rangle+\\langle\\sigma_{j}|M|\\sigma_{j}\\rangle\\), \\(\\langle\\openone|M|\\sigma_{j}\\rangle+\\langle\\sigma_{j}|M|\\openone\\rangle\\), and \\(\\langle\\sigma_{j}|M|\\sigma_{k}\\rangle+\\langle\\sigma_{k}|M|\\sigma_{j}\\rangle\\).
Though this is not enough to fully determine \\(M\\), these elements are sufficient to determine the output state for any input state. Using the property \\(\\sum_{j}(a_{j}^{(n)})^{2}=1\\), Eq. (22) can be rewritten as:
\\[4\\Gamma^{(n)}Q^{(n)} = \\sum_{j}\\left(a_{j}^{(n)}\\right)^{2}\\left(\\langle\\openone|M| \\openone\\rangle+\\langle\\sigma_{j}|M|\\sigma_{j}\\rangle\\right)\\] \\[+\\sum_{j}a_{j}^{(n)}\\left(\\langle\\openone|M|\\sigma_{j}\\rangle+ \\langle\\sigma_{j}|M|\\openone\\rangle\\right)\\] \\[+\\sum_{k>j}a_{j}^{(n)}a_{k}^{(n)}\\left(\\langle\\sigma_{j}|M|\\sigma_{k }\\rangle+\\langle\\sigma_{k}|M|\\sigma_{j}\\rangle\\right).\\]
Observe that the sums of the cross terms \\(\\langle\\sigma_{j}|M|\\sigma_{k}\\rangle+\\langle\\sigma_{k}|M|\\sigma_{j}\\rangle\\) can appear together because the coefficients \\(a_{j}^{(n)}\\) are real. Also, the element \\(\\langle\\openone|M|\\openone\\rangle\\) can always be paired with a diagonal element \\(\\langle\\sigma_{j}|M|\\sigma_{j}\\rangle\\) as long as the state is a pure projection satisfying \\(\\sum_{j}(a_{j}^{(n)})^{2}=1\\). The diagonal element \\(\\langle\\openone|M|\\openone\\rangle\\) only has to be known if the system can be prepared directly to a mixed state such that \\(\\sum_{j}(a_{j}^{(n)})^{2}<1\\). This may be accomplished by a generalized measurement [26] (see appendix A). If generalized measurements are allowed, then just one more input state is needed, for example \\(\\frac{1}{2}(\\openone+\\frac{1}{2}\\sigma_{1})\\), which gives another independent equation that can be solved to obtain \\(\\langle\\openone|M|\\openone\\rangle\\).
Therefore the elements of \\(M\\) found in Eq. (25) are all that are needed to describe the process. By measuring the outputs for the nine specified input states, the matrix \\(M\\) can be calculated. We now have a good quantum process tomography procedure for an open two-level system.
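A sketch of the full procedure is given below (our addition): the nine unnormalized outputs \(\Gamma^{(n)}Q^{(n)}\) are computed for a randomly chosen \(U\) and \(\gamma_{0}\) (illustrative assumptions), the element combinations are extracted as above (the cross-term formula is used in the equivalent rearranged form \(C_{jk}=8\Gamma Q-A_{j}-A_{k}-\sqrt{2}(B_{j}+B_{k})\)), and Eq. (25) is then used to predict the output for a new pure projection.

```python
# Nine-state bi-linear tomography: recover the element combinations of
# Eq. (25) from R^(n) = Gamma^(n) Q^(n) and predict a new projection.
import numpy as np

rng = np.random.default_rng(2)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [sx, sy, sz]

dE = 3                                           # environment dim (assumption)
A = rng.normal(size=(2*dE, 2*dE)) + 1j * rng.normal(size=(2*dE, 2*dE))
gamma0 = A @ A.conj().T
gamma0 /= np.trace(gamma0)
U, _ = np.linalg.qr(rng.normal(size=(2*dE, 2*dE))
                    + 1j * rng.normal(size=(2*dE, 2*dE)))

def R(P):
    """Unnormalized output Gamma*Q = Tr_B[U P gamma0 P U†], Eq. (13)."""
    PI = np.kron(P, np.eye(dE))
    return np.einsum('aibi->ab', (U @ PI @ gamma0 @ PI @ U.conj().T)
                     .reshape(2, dE, 2, dE))

def proj(a):
    return (I2 + sum(aj * s for aj, s in zip(a, sig))) / 2

# Nine preparations of Eq. (24)
e = np.eye(3)
Rp = {j: R(proj(e[j])) for j in range(3)}        # P^(j,+)
Rm = {j: R(proj(-e[j])) for j in range(3)}       # P^(j,-)
pairs = [(0, 1), (0, 2), (1, 2)]
Rjk = {p: R(proj((e[p[0]] + e[p[1]]) / np.sqrt(2))) for p in pairs}

# Matrix-element combinations (2x2 blocks):
Ad = {j: 2 * (Rp[j] + Rm[j]) for j in range(3)}  # <1|M|1> + <s_j|M|s_j>
B = {j: 2 * (Rp[j] - Rm[j]) for j in range(3)}   # <1|M|s_j> + <s_j|M|1>
C = {p: 8 * Rjk[p] - Ad[p[0]] - Ad[p[1]]
        - np.sqrt(2) * (B[p[0]] + B[p[1]]) for p in pairs}

# Predict Gamma*Q for a new pure projection via Eq. (25) ...
a = np.array([0.6, 0.48, 0.64])                  # |a| = 1 (pure projection)
pred = (sum(a[j]**2 * Ad[j] + a[j] * B[j] for j in range(3))
        + sum(a[j] * a[k] * C[(j, k)] for j, k in pairs)) / 4
# ... and compare with the directly computed output:
assert np.allclose(pred, R(proj(a)))
```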
This procedure can be generalized for an \\(n\\)-level system in the same spirit as above. However care must be taken, since the constraints on the coherence vector of an \\(n\\)-level system are more complicated (see [27] for a detailed discussion).
## VII Bi-linear vs. linear process map verification procedure
In practice it is not easy to tell whether a process is linear or bi-linear. The bi-linear process map is incompatible with the behavior of a linear process map, and conversely a linear process map will not adequately describe a bi-linear process. We develop a procedure below that will differentiate between a linear process and a bi-linear process.
Consider what happens to a state that is a linear combination of a set of projections:
\\[X=\\sum_{n}c_{n}P^{(n)}. \\tag{26}\\]
If the evolution of \\(X\\) though the process is bi-linear then the output is written in terms of the bi-linear process map as:
\\[\\langle X|M|X\\rangle=\\sum_{mn}c_{m}c_{n}\\,\\langle P^{(m)}|\\,M\\,|P^{(n)}\\rangle\\,. \\tag{27}\\]
But if the bi-linear process map is compatible in some way with a linear process, then Eq. (27) can be simplified to
\\[\\langle X|M|X\\rangle=\\sum_{n}c_{n}^{2}\\,\\langle P^{(n)}|\\,M\\,|P^{(n)}\\rangle\\,. \\tag{28}\\]
In that case the two equations above are equal:
\\[\\sum_{mn}c_{m}c_{n}\\,\\langle P^{(m)}|\\,M\\,|P^{(n)}\\rangle=\\sum_{n}c_{n}^{2}\\, \\langle P^{(n)}|M|P^{(n)}\\rangle\\,. \\tag{29}\\]
It is clear that no non-trivial condition exists for \(M\) that will allow this equality for arbitrary coefficients \(c_{m}\). Therefore the bi-linear process map gives different predictions from a linear process map. In that case, we should be able to distinguish whether a process is given by a linear map or a bi-linear map.
Consider the example of the tomography procedure proposed for a two-level system. If the process is given by a linear process map, then the nine input states in Eq. (24) are overcomplete; only four input states are needed to determine a linear process map. This discrepancy is summarized by the following linear sum rules:
\\[P^{(2,-)} = P^{(1,+)}+P^{(1,-)}-P^{(2,+)},\\] \\[P^{(3,-)} = P^{(1,+)}+P^{(1,-)}-P^{(3,+)},\\] \\[P^{(4,\\pm)} = \\frac{1}{2}\\left(P^{(1,+)}+P^{(1,-)}\\right)\\pm\\frac{1}{\\sqrt{2}} \\left(P^{(2,+)}-P^{(1,-)}\\right),\\] \\[P^{(5,\\pm)} = \\frac{1}{2}\\left(P^{(1,+)}+P^{(1,-)}\\right)\\pm\\frac{1}{\\sqrt{2}} \\left(P^{(3,+)}-P^{(1,-)}\\right),\\] \\[P^{(6,\\pm)} = \\frac{1}{2}\\left(P^{(1,+)}+P^{(1,-)}\\right)\\] \\[\\pm\\frac{1}{\\sqrt{2}}\\left(P^{(2,+)}+P^{(3,+)}-P^{(1,+)}-P^{(1,-) }\\right).\\]
If the process is linear, then the output states must satisfy the same sum rules, which are obtained from the above equations by suitably writing \\(Q\\) in place of \\(P\\). If these sum rules are not satisfied, then the process is not linear.
However, satisfying the sum rules is necessary but not sufficient to determine if the process is linear; a bi-linear process map can still be constructed from this set of input and output states without contradiction. Therefore an additional input state, distinct from the above nine input states, should be tested to determine which of Eq. (27) and Eq. (28) is satisfied.
More explicitly, if the output state corresponding to an additional input state of the form of Eq. (26) is found to be \(\sum_{n}c_{n}^{2}\Gamma^{(n)}Q^{(n)}\), then the process is linear. However, if the corresponding output state is given by \(\sum_{mn}c_{m}c_{n}\,\langle P^{(m)}|M|P^{(n)}\rangle\), then the process is bi-linear.
## VIII Fundamental issues of state preparation beyond quantum process tomography
Although our discussion has largely focused on quantum process tomography, we would like to emphasize that the process equations describe _any_ quantum experiment. Before any experiment begins, the quantum system or particle would exist in an unknown state that could be (and most likely is) correlated with a quantum environment. Preparation of the system or particle into a known state is a necessary part of any experiment.
As we have shown, when the system or particle is imperfectly isolated, that is, when it interacts non-trivially with a quantum environment during the experiment, the initial step of state preparation is of fundamental importance.
We described two methods of state preparation, the stochastic preparation and the preparation by measurements. For preparation by measurements, the outcomes of the experiment will be non-linearly related to the prepared states. So it would seem that the stochastic method is preferable, since with the stochastic method, the evolution of the _prepared states_ to the final states is linear. However, how the stochastic maps are actually performed may need to be carefully considered.
Stochastic maps can be equivalently performed by a unitary transformation with the addition of ancillary quantum systems. In particular, the way the stochastic preparation is constructed in section IV is equivalent to a generalized measurement (see appendix A).
Recall that with the stochastic preparation, different maps \\(\\mathscr{P}^{(n)}=\\Omega^{(n)}\\circ\\Theta\\) prepare different input states. Let \\(K\\) be the number of input states. Assume that for the experiment, an equal number of each input state is prepared, so the expectation map is:
\\[\\sum_{n=1}^{K}\\frac{1}{K}\\mathscr{P}^{(n)}.\\]
This overall expectation map is trace preserving since each \(\mathscr{P}^{(n)}\) preserves trace. Therefore it can be considered to be a generalized measurement, where each \(\frac{1}{K}\mathscr{P}^{(n)}\) represents a measurement outcome.
A generalized measurement can be equivalently performed by a unitary transformation and a von Neumann measurement by using a suitable ancillary system. How does this implementation of the stochastic preparation affect our results? Let the ancillary system be labeled as \(\mathbb{C}\), and the generalized measurement be implemented with a unitary transformation \(W\) and a von Neumann measurement given by the projections \(J^{(n)}\), where the \(J^{(n)}\) outcome corresponds to the \(n^{th}\) preparation map \(\frac{1}{K}\mathscr{P}^{(n)}\):
\\[\\mathscr{P}^{(n)}(\\rho^{\\mathbb{A}})=\\operatorname{Tr}_{\\mathbb{C}}\\left[{J^ {(n)}}^{\\mathbb{C}}W^{\\mathbb{A}\\mathbb{C}}\\rho^{\\mathbb{A}}\\otimes\\epsilon ^{\\mathbb{C}}{W^{\\mathbb{A}\\mathbb{C}}}^{\\dagger}{J^{(n)}}^{\\mathbb{C}} \\right]. \\tag{30}\\]
where \\(\\epsilon^{\\mathbb{C}}\\) is the initial state of the ancillary system.
Now include the above equation within the overall process equation Eq. (7)
\\[Q^{(n)}=\\operatorname{Tr}_{\\mathbb{BC}}\\left[{UJ^{(n)}}^{\\mathbb{C}}W^{ \\mathbb{A}\\mathbb{C}}\\gamma_{0}^{\\mathbb{A}\\mathbb{B}}\\otimes\\epsilon^{ \\mathbb{C}}{W^{\\mathbb{A}\\mathbb{C}}}^{\\dagger}{J^{(n)}}^{\\mathbb{C}}U^{ \\dagger}\\right].\\]
If \\(U\\) acts only on the system \\(\\mathbb{A}\\) and environment \\(\\mathbb{B}\\), and not on the ancillary \\(\\mathbb{C}\\), then the above equation can be simplified:
\\[Q^{(n)} = \\operatorname{Tr}_{\\mathbb{B}}\\left[U\\operatorname{Tr}_{\\mathbb{ C}}\\left[{J^{(n)}}^{\\mathbb{C}}W^{\\mathbb{A}\\mathbb{C}}\\gamma_{0}^{ \\mathbb{A}\\mathbb{B}}\\otimes\\epsilon^{\\mathbb{C}}{W^{\\mathbb{A}\\mathbb{C}}} ^{\\dagger}{J^{(n)}}^{\\mathbb{C}}\\right]U^{\\dagger}\\right]\\] \\[= \\operatorname{Tr}_{\\mathbb{B}}\\left[U\\mathscr{P}^{(n)^{\\mathbb{A }}}\\otimes\\mathcal{I}^{\\mathbb{B}}\\left(\\gamma_{0}^{\\mathbb{A}\\mathbb{B}} \\right)U^{\\dagger}\\right]\\] \\[= \\operatorname{Tr}_{\\mathbb{B}}\\left[{UP^{(n)}}^{\\mathbb{A}} \\otimes\\tau^{\\mathbb{B}}U^{\\dagger}\\right].\\]
The result is the linear process map as before. However, if \(U\) acts non-trivially on the system \(\mathbb{A}\), the environment \(\mathbb{B}\), and the ancilla \(\mathbb{C}\), then we cannot make this simplification; \(Q^{(n)}\) would have a bi-linear dependence on \(J^{(n)}\), and in turn a bi-linear dependence on the prepared input states.
We can make two important observations from this. First, the quantum environment should be defined as any quantum system that interacts with the primary system \\(\\mathbb{A}\\)_during_ the experiment. The quantum environment _does not_ have to include any quantum systems that the primary system is entangled or correlated with, if there is no interaction between the two during the experiment.
The second observation is that if the stochastic preparation method is used, any ancillary systems used to implement the stochastic maps, must be perfectly isolated from the primary system during the experiment. In other words, they must not be part of the quantum environment. If the ancillary systems used are not properly isolated, then the process may have bi-linear dependence on the input states.
This is an important result: simply deciding on a stochastic preparation method for an experiment may not in practice guarantee that the process will be linear. This can have implications in many areas; in particular, quantum error correction protocols are designed to correct for linear noise, and correcting for bi-linear effects may be a more difficult challenge.
The verification procedure discussed in the last section may therefore be important, since it can be used as a tool to confirm the proper isolation of the apparatus and ancillary systems during the experiment. We can verify in practice if a process is linear instead of simply making that assumption. In the following section, we will describe a complete experiment that may be performed with qubits, including verification steps to check if the process is linear or bi-linear.
## IX A complete recipe for an experiment
Although we have established the theory, let us make the ideas more concrete by developing a complete recipe for an experiment that can be used to determine whether a process is bi-linear or linear. We will also show specifically how the corresponding bi-linear map or linear map can be calculated from the measurement results.
For bi-linear process tomography, nine input states are necessary. For the nine states derived in section VI.2, the first six states are three pairs of orthonormal projections, but the last three are not. Now consider twelve projections, nine from Eq. (24) and three orthogonal to the last three in that equation:
\\[P^{(4,-)} = \\frac{1}{2}\\left(\\openone-\\frac{1}{\\sqrt{2}}\\sigma_{1}-\\frac{1}{ \\sqrt{2}}\\sigma_{2}\\right),\\] \\[P^{(5,-)} = \\frac{1}{2}\\left(\\openone-\\frac{1}{\\sqrt{2}}\\sigma_{1}-\\frac{1}{ \\sqrt{2}}\\sigma_{3}\\right), \\tag{32}\\] \\[P^{(6,-)} = \\frac{1}{2}\\left(\\openone-\\frac{1}{\\sqrt{2}}\\sigma_{2}-\\frac{1}{ \\sqrt{2}}\\sigma_{3}\\right).\\]
These twelve projections are neatly grouped into six different sets of orthonormal pairs. If the states are prepared using von Neumann measurements, these would correspond to measurements in the \(\sigma_{1}\), \(\sigma_{2}\), \(\sigma_{3}\), \(\sigma_{1}+\sigma_{2}\), \(\sigma_{1}+\sigma_{3}\) and \(\sigma_{2}+\sigma_{3}\) directions. Twelve states are more than necessary for bi-linear process tomography, but the extra states can be utilized as consistency checks.
After recording the corresponding output states for all twelve input states, the linearity of the process can be verified. If the process is linear, then only four linearly independent input states are necessary to determine it. The other eight input states can be written as linear combinations of these four. If the states from Eq. (1) are used as the four linearly independent states, then the following eight linear sum rules (that the input states satisfy) have to be satisfied:
\\[Q^{(2,-)} = Q^{(1,+)}+Q^{(1,-)}-Q^{(2,+)},\\] \\[Q^{(3,-)} = Q^{(1,+)}+Q^{(1,-)}-Q^{(3,+)},\\] \\[Q^{(4,\\pm)} = \\frac{1}{2}\\left(Q^{(1,+)}+Q^{(1,-)}\\right)\\pm\\frac{1}{\\sqrt{2}} \\left(Q^{(2,+)}-Q^{(1,-)}\\right),\\] \\[Q^{(5,\\pm)} = \\frac{1}{2}\\left(Q^{(1,+)}+Q^{(1,-)}\\right)\\pm\\frac{1}{\\sqrt{2}} \\left(Q^{(3,+)}-Q^{(1,-)}\\right),\\] \\[Q^{(6,\\pm)} = \\frac{1}{2}\\left(Q^{(1,+)}+Q^{(1,-)}\\right)\\] \\[\\pm\\frac{1}{\\sqrt{2}}\\left(Q^{(2,+)}+Q^{(3,+)}-Q^{(1,+)}-Q^{(1,- )}\\right).\\]
If the eight sum rules are satisfied, then the process is not bi-linear, and should be described by a linear process map. The linear process map can then be computed following the recipe in section II.
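As an illustration (our addition), the sketch below tests a few of these sum rules on outputs generated by a stochastically prepared process, with a random unitary and a fixed environment state as assumptions; there they must hold to numerical precision.

```python
# Linearity check: the sum rules hold for stochastically prepared inputs.
import numpy as np

rng = np.random.default_rng(3)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [sx, sy, sz]

dE = 2
U, _ = np.linalg.qr(rng.normal(size=(2*dE, 2*dE))
                    + 1j * rng.normal(size=(2*dE, 2*dE)))
tau = np.eye(dE) / dE            # environment state fixed by the pin map

def proj(a):
    return (I2 + sum(x * s for x, s in zip(a, sig))) / 2

def Q(P):                        # stochastic preparation, Eq. (7)
    return np.einsum('aibi->ab',
                     (U @ np.kron(P, tau) @ U.conj().T).reshape(2, dE, 2, dE))

e = np.eye(3)
r2 = 1 / np.sqrt(2)
Qp = [Q(proj(ej)) for ej in e]   # Q^(j,+)
Qm = [Q(proj(-ej)) for ej in e]  # Q^(j,-)

# e.g. the first two sum rules and the rule for Q^(4,+):
assert np.allclose(Qm[1], Qp[0] + Qm[0] - Qp[1])
assert np.allclose(Qm[2], Qp[0] + Qm[0] - Qp[2])
Q4p = Q(proj(r2 * (e[0] + e[1])))
assert np.allclose(Q4p, (Qp[0] + Qm[0]) / 2 + r2 * (Qp[1] - Qm[0]))
```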
If the eight sum rules are not satisfied, then the process may be given by a bi-linear map. We will attempt to verify that the process is bi-linear and calculate the bi-linear process map.
Note that the probabilities \\(\\Gamma^{(n)}=\\mathrm{Tr}[\\gamma_{0}P^{(n)}]\\) associated with each preparation should be found experimentally. The probabilities should be complete for an orthonormal set of projections, in other words: \\(\\Gamma^{(j,+)}+\\Gamma^{(j,-)}=1\\). Therefore the probabilities can be calculated from the fraction of the \\((+)\\) states as compared to the \\((-)\\) states, for all preparations made in the same direction.
If the process is bi-linear, then the three additional states should evolve in a way that is consistent with a bi-linear map derived from the other nine states. The following equations are derived from this condition using Eq. (25). If these equations are satisfied, then the process is bi-linear:
\\[\\Gamma^{(4,-)}Q^{(4,-)} = \\frac{1}{\\sqrt{2}}\\left(\\Gamma^{(1,-)}Q^{(1,-)}-\\Gamma^{(1,+)}Q^{ (1,+)}\\right.\\] \\[\\left.+\\Gamma^{(2,-)}Q^{(2,-)}-\\Gamma^{(2,+)}Q^{(2,+)}\\right)\\] \\[\\ \\ \\ \\ \\ +\\Gamma^{(4,+)}Q^{(4,+)},\\] \\[\\Gamma^{(5,-)}Q^{(5,-)} = \\frac{1}{\\sqrt{2}}\\left(\\Gamma^{(1,-)}Q^{(1,-)}-\\Gamma^{(1,+)}Q^{ (1,+)}\\right.\\] \\[\\ \\ \\ \\ \\ +\\Gamma^{(3,-)}Q^{(3,-)}-\\Gamma^{(3,+)}Q^{(3,+)}\\Big{)}\\] \\[\\ \\ \\ \\ \\ +\\Gamma^{(5,+)}Q^{(5,+)},\\] \\[\\Gamma^{(6,-)}Q^{(6,-)} = \\frac{1}{\\sqrt{2}}\\left(\\Gamma^{(2,-)}Q^{(2,-)}-\\Gamma^{(2,+)}Q^{ (2,+)}\\right.\\] \\[\\ \\ \\ \\ \\ \\ +\\Gamma^{(3,-)}Q^{(3,-)}-\\Gamma^{(3,+)}Q^{(3,+)}\\Big{)}\\] \\[\\ \\ \\ \\ \\ \\ +\\Gamma^{(6,+)}Q^{(6,+)}.\\]
If the conditions above are satisfied then the process is bi-linear, and the bi-linear process map can be computed by following the recipe in section VI.2.
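The sketch below (our addition) checks these three conditions on measurement-prepared data generated from a random \(U\) and \(\gamma_{0}\), both illustrative assumptions; since such outputs are exactly of the bi-linear form \(\langle P|M|P\rangle\), the conditions hold identically.

```python
# Bi-linearity check: the three conditions relating the (4,-), (5,-), (6,-)
# outputs to the other nine hold for measurement-prepared data.
import numpy as np

rng = np.random.default_rng(4)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [sx, sy, sz]

dE = 2
A = rng.normal(size=(2*dE, 2*dE)) + 1j * rng.normal(size=(2*dE, 2*dE))
gamma0 = A @ A.conj().T
gamma0 /= np.trace(gamma0)
U, _ = np.linalg.qr(rng.normal(size=(2*dE, 2*dE))
                    + 1j * rng.normal(size=(2*dE, 2*dE)))

def proj(a):
    return (I2 + sum(x * s for x, s in zip(a, sig))) / 2

def R(P):
    """Gamma^(n) Q^(n) = Tr_B[U P gamma0 P U†] for measurement preparation."""
    PI = np.kron(P, np.eye(dE))
    return np.einsum('aibi->ab', (U @ PI @ gamma0 @ PI @ U.conj().T)
                     .reshape(2, dE, 2, dE))

e = np.eye(3)
r2 = 1 / np.sqrt(2)
Rp = [R(proj(ej)) for ej in e]
Rm = [R(proj(-ej)) for ej in e]
for j, k in [(0, 1), (0, 2), (1, 2)]:            # conditions for n = 4, 5, 6
    lhs = R(proj(-r2 * (e[j] + e[k])))           # Gamma^(n,-) Q^(n,-)
    rhs = (r2 * (Rm[j] - Rp[j] + Rm[k] - Rp[k])
           + R(proj(r2 * (e[j] + e[k]))))        # ... + Gamma^(n,+) Q^(n,+)
    assert np.allclose(lhs, rhs)
```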
Once the matrix elements of \\(M\\) are determined in this fashion, the evolution of any state \\(X=\\frac{1}{2}(\\openone+\\sum_{j}p_{j}\\sigma_{j})\\) is given by:
\\[4\\Gamma Q = 4\\left\\langle X|M|X\\right\\rangle\\] \\[= \\sum_{j}p_{j}^{2}\\left(\\langle\\openone|M|\\openone\\rangle+\\langle \\sigma_{j}|M|\\sigma_{j}\\rangle\\right)\\] \\[+\\sum_{j}p_{j}\\left(\\langle\\openone|M|\\sigma_{i}\\rangle+\\langle \\sigma_{j}|M|\\openone\\rangle\\right)\\] \\[+\\sum_{k>j}p_{j}p_{k}\\left(\\langle\\sigma_{j}|M|\\sigma_{k}\\rangle+ \\langle\\sigma_{k}|M|\\sigma_{j}\\rangle\\right).\\]
Note that since \\(Q\\) is a normalized state, the normalization constant \\(\\Gamma\\) is the measurement probability \\(\\Gamma=\\mathrm{Tr}[X\\gamma_{0}]\\). Although we had not explicitly mentioned this before, the matrix \\(M\\) contains all information about the measurement probabilities, that is why we needed the measurement probabilities \\(\\Gamma^{(n)}\\) to calculate the matrix \\(M\\).
Finally, note that if both the test for linearity and the test for bi-linearity fail, then the process cannot be consistently described by either a linear or a bi-linear map. The experiment then should be carefully analyzed for problems, such as any non-linear dependence that may have been introduced if the input states are not accurately prepared, or if there is some dependence of \(\gamma_{0}\) on the prepared state.
## X Analysis of a quantum process tomography experiment
In this section we will analyze a quantum process tomography experiment performed by M. Howard _et al._ (2007). Our critique will emphasize the importance of having a consistent theory of state preparation.
In this experiment, the system that is studied is an electron configuration formed in a nitrogen vacancy defect in a diamond lattice. The quantum state of the system is given by a spin triplet (S=1). Again we will write the initial state of the system and environment as \\(\\gamma_{0}\\).
The system is prepared by optical pumping, which results in a strong spin polarization. The state of the system is said to have a 70% chance of being in a pure state \(\ket{\phi}\); more mathematically, the probability of \(\ket{\phi}\) is \(\mathrm{Tr}[\ket{\phi}\bra{\phi}\gamma_{0}]=0.7\).
Since the population probability is high, an assumption was made that the state of the system can be simply approximated as a pure state \\(\\ket{\\phi}\\bra{\\phi}\\). From this initial state, different input states can be prepared by suitably applying microwave pulses resonant with the transition levels. After preparation, the system is allowed to evolve, and the final states (density matrices) are measured. With the knowledge of the initial state and the measured final states, the linear process map that should describe this process is determined.
It was found that the linear process map has negative eigenvalues, so the map was "corrected" using a least squares fit between the experimentally determined map and a theoretical map based on Hermitian parametrization [28], while enforcing complete positivity.
However, if we do not regard the negative eigenvalues of the map as aberrations, then we should consider the assumptions about the preparation of the system more carefully. The assumption about the initial state of the system is:
\\[\\gamma_{0}\\rightarrow\\ket{\\phi}\\bra{\\phi}\\otimes\\tau. \\tag{33}\\]
This is in effect a pin map. Together with the stochastic transformation of the initial state into the various input states, this is identical to the stochastic preparation method discussed in section IV.
It is clear that the pure initial state assumption is unreasonable, given what we now know about how sensitive the process is to the initial correlations between the system and the environment. In effect, the action of the pin map in this experiment is not perfect, and therefore the pin map should be left out of the description. Then the process equation is:
\\[Q^{(n)}=\\mathrm{Tr}_{\\mathbb{B}}[U\\Omega^{(n)}\\otimes\\openone(\\gamma_{0})U^{ \\dagger}] \\tag{34}\\]
where \\(\\Omega^{(n)}\\) is the stochastic mapping corresponding to preparing the \\(n^{th}\\)input state. We will assume that the stochastic process does not involve any ancillary systems that interact with the primary system during the experiment.
In this experiment, \\(\\Omega^{(n)}\\) is nothing more than a unitary transformation \\(V^{(n)}\\) satisfying \\(V^{(n)}\\ket{\\phi}=\\ket{\\psi^{(n)}}\\), where \\(\\ket{\\psi^{(n)}}\\) is the desired pure \\(n^{th}\\) input state to the process.
We can write the unitary transformation for a two-level system as:
\\[V^{(n)}=\\ket{\\psi^{(n)}}\\bra{\\phi}+\\ket{\\psi^{(n)}_{\\perp}}\\bra{\\phi_{\\perp}} \\tag{35}\\]
where \\(\\bra{\\psi^{(n)}}\\ket{\\psi^{(n)}_{\\perp}}=0\\) and \\(\\bra{\\phi}\\phi_{\\perp}=0\\). This basically defines \\(V^{(n)}\\) as a transformation from the basis \\(\\{\\ket{\\phi}\\}\\) to the basis \\(\\{\\ket{\\psi^{(n)}_{i}}\\}\\). The equation for the process becomes:
\\[Q^{(n)} = \\mathrm{Tr}_{\\mathbb{B}}\\left[U\\ket{\\psi^{(n)}}\\bra{\\phi}\\gamma_{ 0}\\ket{\\phi}\\bra{\\psi^{(n)}}U^{\\dagger}\\right]\\] \\[+\\mathrm{Tr}_{\\mathbb{B}}\\left[U\\ket{\\psi^{(n)}_{\\perp}}\\bra{\\phi_ {\\perp}}\\gamma_{0}\\ket{\\phi}\\bra{\\psi^{(n)}}U^{\\dagger}\\right]\\] \\[+\\mathrm{Tr}_{\\mathbb{B}}\\left[U\\ket{\\psi^{(n)}}\\bra{\\phi}\\gamma_ {0}\\ket{\\phi_{\\perp}}\\bra{\\psi^{(n)}_{\\perp}}U^{\\dagger}\\right]\\] \\[+\\mathrm{Tr}_{\\mathbb{B}}\\left[U\\ket{\\psi^{(n)}_{\\perp}}\\bra{\\phi_ {\\perp}}\\gamma_{0}\\ket{\\phi_{\\perp}}\\bra{\\psi^{(n)}_{\\perp}}U^{\\dagger}\\right].\\]
Therefore, since \\(\\bra{\\phi}\\gamma_{0}\\ket{\\phi}=0.7\\), to first approximation the process is a linear mapping on the states \\(\\ket{\\psi^{(n)}}\\bra{\\psi^{(n)}}\\). However it is clear that if all terms are included, the process is not truly a linear map of the states \\(\\ket{\\psi^{(n)}}\\bra{\\psi^{(n)}}\\). The negative eigenvalues are therefore a result of fitting results into a linear process map when the process is not truly linear.
## XI Conclusions
Preparation of the system or particle into a known state is a necessary part of any experiment. We have shown that with open systems, some care has to be taken to define the method of state preparation.
We described two methods, the stochastic method and the measurement method. The stochastic method leads to linear processes that can be described by linear process maps. The advantage of the measurement method is that the primary system does not have to be perfectly isolated. This method only requires a good measurement apparatus.
With the stochastic method, the initial state can be made simply separable, effectively de-coupling the system from the environment. The evolution of the system is then given by a linear process map. However we find that the isolation of the apparatus from the system during the experiment is of greater importance with this method. Any apparatus or ancillary systems used for the stochastic preparation must not be contained in the quantum environment, the quantum environment being defined as everything that interacts with the quantum system during the experiment.
The stochastic method is more consistent with the traditional way of performing quantum experiments, and its advantage is that the process map is effectively equivalent to the linear dynamical map. However, the disadvantage of the stochastic method is that the apparatus and any ancillary systems employed to perform the stochastic transformations or generalized measurements must be perfectly isolated from the primary system for the duration of the experiment.
With the measurement method, if the initial state is correlated with the environment then the evolution of such a system is given by a bi-linear process map. The determination of this bi-linear process map by process tomography is more difficult, but we developed a procedure that works for qubit systems.
If interaction occurs between the system and this environment, then non-linear noise can be introduced into the experiment. This could have fundamental consequences; for example, quantum error correction schemes proposed so far are based on correcting linear noise. If bi-linear errors occur, error correction becomes a more difficult challenge.
We proposed a protocol to distinguish between a bi-linear process and a linear process. This protocol can be used to verify the assumptions made about state preparation in the experiment: by adding some additional inputs to the experiment and making consistency checks, the correctness of a stochastic preparation can be _verified_. The protocol we proposed can be a practical experimental tool.
###### Acknowledgements.
We would like to thank Anil Shaji and Thomas Jordan for useful discussions that lead us to consider the problem of quantum process tomography for open systems. We thank Ken Shih and Pablo Bianucci for many discussions about quantum process tomography experiments.
## Appendix A Generalized measurements
### Overview of measurements
Measurements in quantum theory begin with the von Neumann measurement [29], which can be quickly summarized as follows: if the system being measured is in the state \\(\\rho\\) and the measurement is given by a set of orthonormal projections \\(\\Pi_{j}\\), then the probability of obtaining the \\(j^{th}\\) outcome is \\(\\mathrm{Tr}[\\rho\\Pi_{j}]\\). If the \\(j^{th}\\) outcome is observed, then the state collapses to \\(\\Pi_{j}\\).
An early generalization of the von Neumann measurement was given [30; 31], with the introduction of POVMs. With POVMs, the measurement is still given by a set of operators \(\Xi_{j}\); however, the operators need not be orthonormal projections, but are in general positive operators satisfying \(\sum_{j}\Xi_{j}=\openone\). The probability of obtaining the \(j^{th}\) outcome is still \(\mathrm{Tr}[\rho\Xi_{j}]\) and the state collapses to \(\frac{\Xi_{j}}{\mathrm{Tr}[\Xi_{j}]}\).
However, measurements can be generalized beyond POVMs (see [26] for more discussion). Suppose a very elaborate measurement apparatus is constructed, one that involves many transformations and POVMs. Rather than look at all the intricate details of this apparatus, the apparatus can be treated as a black box. When a state is fed into the box, one of several possible outcomes is registered, and a new state \(\rho^{\prime}\) leaves the box, which is fixed for that particular outcome.
The motivation is to use the most general linear operation on a density matrix as a measurement, a linear map. A general measurement can be framed as follows: given a quantum system in state \\(\\rho\\), let the measurement be given by a set of positive trace reducing linear maps \\(\\mathcal{B}^{(j)}\\). The probability of registering the \\(j^{th}\\) outcome is given by \\(\\mathrm{Tr}[\\mathcal{B}^{(j)}(\\rho)]\\). If the \\(j^{th}\\) outcome is observed, then the system collapses to the state:
\\[\\rho\\rightarrow\\rho^{\\prime}=\\frac{\\mathcal{B}^{(j)}(\\rho)}{\\mathrm{Tr}[ \\mathcal{B}^{(j)}(\\rho)]}. \\tag{10}\\]
To ensure the probabilities sum to one, the sum of the maps \(\sum_{j}\mathcal{B}^{(j)}\) must itself be a trace preserving map.
Writing \(\mathcal{B}^{(j)}\) in its canonical form with its eigen-matrices \(C^{(j)}_{\alpha}\) and eigenvalues \(c^{(j)}_{\alpha}\) yields:
\\[\\mathcal{B}^{(j)}(\\rho)=\\sum_{\\alpha}c^{(j)}_{\\alpha}C^{(j)}_{\\alpha}\\rho\\;{C ^{(j)}_{\\alpha}}^{\\dagger}. \\tag{11}\\]
The condition that the overall map \\(\\sum_{j}\\mathcal{B}^{(j)}\\) preserves trace gives:
\\[\\sum_{j\\alpha}c^{(j)}_{\\alpha}{C^{(j)}_{\\alpha}}^{\\dagger}C^{(j)}_{\\alpha}= \\openone. \\tag{12}\\]
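Before turning to the proof, a minimal numerical sketch of Eqs. (10)-(12) may help fix ideas. The two-outcome unsharp qubit measurement below is a hypothetical example of ours, with weights chosen so that the trace condition of Eq. (12) holds:

```python
import numpy as np

def apply_map(C, c, rho):
    """B^(j)(rho) = sum_alpha c_alpha C_alpha rho C_alpha^dagger, Eq. (11)."""
    return sum(ci * Ci @ rho @ Ci.conj().T for ci, Ci in zip(c, C))

# Two outcomes, each built from the qubit projectors with different weights.
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
outcomes = [([P0, P1], [0.8, 0.2]), ([P0, P1], [0.2, 0.8])]

total = sum(sum(ci * Ci.conj().T @ Ci for ci, Ci in zip(c, C)) for C, c in outcomes)
assert np.allclose(total, np.eye(2))          # Eq. (12) holds

rho = np.array([[0.6, 0.2], [0.2, 0.4]])
for j, (C, c) in enumerate(outcomes):
    B = apply_map(C, c, rho)
    p = np.trace(B).real                      # probability of outcome j
    print(j, p, B / p)                        # collapsed state, Eq. (10)
```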
Given this condition, we can prove that this very general scheme can be accomplished with a unitary transformation and von Neumann measurement, by using an ancillary system of sufficient size. The method we use here is similar to the method used to show how a trace preserving map can be given as a reduced unitary transformation [32].
### Generalized measurements
Consider a transformation \(W\) that acts on a tensor product space consisting of the original system and two ancillary systems. Let the original system and the two ancillary systems be spanned by the basis states \(\left|r\right\rangle\), \(\left|j\right\rangle\) and \(\left|\alpha\right\rangle\) respectively. The action of \(W\) is defined as follows:
\\[W:\\left|r^{\\prime},0,0\\right\\rangle\\rightarrow\\sum_{rj\\alpha}\\sqrt{c^{(j)}_{ \\alpha}}\\left[C^{(j)}_{\\alpha}\\right]_{rr^{\\prime}}\\left|r,j,\\alpha\\right\\rangle. \\tag{13}\\]
The size of the ancillary systems is bounded by \(\mu N^{2}\), since \(j\) ranges from \(1\) to \(\mu\), where \(\mu\) is the number of maps \(\mathcal{B}^{(j)}\) making up the measurement, and \(\alpha\) ranges from \(1\) to \(N^{2}\) (where \(N\) is the dimension of the system), since each map \(\mathcal{B}^{(j)}\) has at most \(N^{2}\)\(C\)-matrices.
The action of the transformation \(W\) on a complete set of basis states is not yet defined. However, for the states on which it is defined, \(W\) does preserve orthonormality between those states. The proof is straightforward using Eq. (12):
\[\left(\left\langle r^{\prime},0,0\right|W^{\dagger}\right)\left(W\left|s^{\prime},0,0\right\rangle\right) = \sum_{rj\alpha}c^{(j)}_{\alpha}\left[C^{(j)}_{\alpha}\right]^{*}_{rr^{\prime}}\left[C^{(j)}_{\alpha}\right]_{rs^{\prime}} = \delta_{r^{\prime}s^{\prime}}.\]

Since \(W\) preserves the orthonormality, it can be made into a valid unitary transformation by carefully defining its action on the remaining space that is so far not covered by Eq. (13).
Now we demonstrate that our generalized measurement, given by the set of maps \(\mathcal{B}^{(j)}\), can be equivalently performed by this unitary transformation \(W\) and a von Neumann measurement. Performing the unitary transformation \(W\) on the original system in state \(\rho\) and the ancillary systems in the initial state \(\left|0,0\right\rangle\left\langle 0,0\right|\) gives:
\[\chi = W\left(\rho\otimes\left|0,0\right\rangle\left\langle 0,0\right|\right)W^{\dagger} = \sum_{rsr^{\prime}s^{\prime}jk\alpha\beta}\sqrt{c_{\alpha}^{(j)}c_{\beta}^{(k)}}[C_{\alpha}^{(j)}]_{rr^{\prime}}\rho_{r^{\prime}s^{\prime}}[C_{\beta}^{(k)}]_{s^{\prime}s}^{*}\left|r,j,\alpha\right\rangle\left\langle s,k,\beta\right|.\]
Now perform a von Neumann measurement on the first ancillary system, given by the set of orthonormal projections \\(\\left|j\\right\\rangle\\left\\langle j\\right|\\). The probability of the \\(j^{th}\\) outcome is:
\\[\\mathrm{Tr}[\\chi\\left|j\\right\\rangle\\left\\langle j\\right|] = \\sum_{rr^{\\prime}s^{\\prime}\\alpha}c_{\\alpha}^{(j)}[C_{\\alpha}^{(j )}]_{rr^{\\prime}}\\rho_{r^{\\prime}s^{\\prime}}[C_{\\alpha}^{(j)}]_{s^{\\prime}r}^ {*}\\] \\[= \\mathrm{Tr}[\\mathcal{B}^{(j)}(\\rho)].\\]
If the \\(j^{th}\\) outcome is observed, then the original plus ancillary system collapses to the state:
\\[\\left\\langle j\\right|\\chi\\left|j\\right\\rangle = \\frac{1}{K}\\sum_{rsr^{\\prime}s^{\\prime}\\alpha\\beta}\\sqrt{c_{\\alpha }^{(j)}c_{\\beta}^{(j)}}[C_{\\alpha}^{(j)}]_{rr^{\\prime}}\\rho_{r^{\\prime}s^{ \\prime}}[C_{\\beta}^{(j)}]_{s^{\\prime}s}^{*}\\] \\[\\qquad\\qquad\\qquad\\times\\left|r,j,\\alpha\\right\\rangle\\left\\langle s,j,\\beta\\right|.\\]
where \(K=\mathrm{Tr}[\mathcal{B}^{(j)}(\rho)]\) normalizes the state.
Finally the measurement is over and the particle exits the apparatus. The state of the system, now outside the apparatus, is given by tracing over the remaining ancillary system inside the apparatus:
\[\rho_{r^{\prime}s^{\prime}}\rightarrow\rho_{rs}^{\prime} = \frac{1}{K}\sum_{r^{\prime}s^{\prime}\alpha}c_{\alpha}^{(j)}[C_{\alpha}^{(j)}]_{rr^{\prime}}\rho_{r^{\prime}s^{\prime}}[C_{\alpha}^{(j)}]_{ss^{\prime}}^{*} = \frac{[\mathcal{B}^{(j)}(\rho)]_{rs}}{\mathrm{Tr}[\mathcal{B}^{(j)}(\rho)]}.\]
Therefore this gives the same results as the generalized measurement we laid out. We only needed an ancillary system big enough (dimension \\(\\mu N^{2}\\)), one unitary transformation, and one von Neumann measurement to perform the most general quantum measurement.
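This construction is easy to check numerically. In the self-contained sketch below (the same kind of hypothetical two-outcome example as above), only the isometry part of \(W\), i.e. its action on \(\left|r^{\prime},0,0\right\rangle\), is needed to verify both the outcome probabilities and the collapsed states:

```python
import numpy as np

N = 2                                              # system dimension
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
outcomes = [([P0, P1], [0.8, 0.2]), ([P0, P1], [0.2, 0.8])]  # (C-matrices, weights)
mu, nal = len(outcomes), 2                         # number of outcomes and alphas
rho = np.array([[0.6, 0.2], [0.2, 0.4]])

W = np.zeros((N * mu * nal, N))                    # columns are W|r',0,0>
for j, (C, c) in enumerate(outcomes):
    for a in range(nal):
        W[np.arange(N) * mu * nal + j * nal + a, :] += np.sqrt(c[a]) * C[a]

assert np.allclose(W.T @ W, np.eye(N))             # W preserves orthonormality

chi = (W @ rho @ W.T).reshape(N, mu, nal, N, mu, nal)
for j, (C, c) in enumerate(outcomes):
    blk = chi[:, j, :, :, j, :]                    # ancilla 1 projected onto |j><j|
    p = np.einsum('rara->', blk)                   # probability of outcome j
    rho_out = np.einsum('rasa->rs', blk) / p       # trace out the second ancilla
    B = sum(ci * Ci @ rho @ Ci.T for Ci, ci in zip(C, c))
    assert np.isclose(p, np.trace(B)) and np.allclose(rho_out, B / np.trace(B))
```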
## Appendix B Process vs. dynamical maps
Dynamical maps [21; 22] allow the description of stochastic processes and the evolution of open systems. Dynamical maps used to describe the evolution of open systems are usually defined with a constant environment state \\(\\tau\\):
\\[\\mathscr{B}\\left(\\rho^{\\mathbb{A}}\\right)=\\mathrm{Tr}_{\\mathbb{B}}\\left[U \\rho\\otimes\\tau U^{\\dagger}\\right]\\]
Implicitly, the state of the environment \\(\\tau\\) is a parameter of the map \\(\\mathscr{B}\\). Therefore, the linear dynamical map \\(\\mathscr{B}\\) would only consistently describe an experiment if different input states \\(\\rho\\) can be prepared independently of the environment state \\(\\tau\\). The actual issue of how this can be executed is never addressed.
The stochastic preparation method provides a way to set up an experiment so that different input states can be prepared with a fixed environment state. Consider the process equation, Eq. (7) of section IV:
\\[Q^{(n)} = \\mathrm{Tr}_{\\mathbb{B}}\\left[U\\,\\left[\\Omega^{(n)}\\circ\\Theta \\right]\\otimes\\mathcal{I}\\,\\left(\\gamma_{0}\\right)U^{\\dagger}\\right]\\] \\[= \\mathrm{Tr}_{\\mathbb{B}}\\left[UP^{(n)}\\otimes\\tau(\\Theta)U^{ \\dagger}\\right].\\]
The process map is then given by:
\\[\\Lambda\\left(\\rho^{\\mathbb{A}}\\right) = \\mathrm{Tr}_{\\mathbb{B}}\\left[U\\rho^{\\mathbb{A}}\\otimes\\tau( \\Theta)U^{\\dagger}\\right]\\]
Therefore, in this context, the dynamical map is equivalent to the process map. However, for consistency, we have to remember that the environment state is a constant of the problem; therefore the pin map \(\Theta\) should also be a constant of the problem.
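A minimal sketch of such a fixed-environment map (the joint unitary \(U\) and the environment state \(\tau\) below are arbitrary illustrative choices of ours):

```python
import numpy as np

def dynamical_map(rho, tau, U):
    """B(rho) = Tr_B[ U (rho x tau) U^dagger ] with a fixed environment tau."""
    dA, dB = rho.shape[0], tau.shape[0]
    chi = U @ np.kron(rho, tau) @ U.conj().T
    return np.trace(chi.reshape(dA, dB, dA, dB), axis1=1, axis2=3)  # partial trace

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)                      # a random joint unitary
tau = np.diag([0.5, 0.5])                   # fixed environment state
rho = np.array([[0.7, 0.3], [0.3, 0.3]])
out = dynamical_map(rho, tau, U)
print(out, np.trace(out))                   # trace is preserved (= 1)
```

The same \(\tau\) is used for every input \(\rho\), which is exactly the sense in which the environment state is a parameter of the map.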
It is possible to consider a dynamical map where the environment is not fixed, such as the reduced dynamical evolution of a non-simply separable state \\(\\gamma_{0}\\)[16]:
\\[\\mathscr{B}\\left(\\mathrm{Tr}_{\\mathbb{B}}\\left[\\gamma_{0}\\right]\\right)= \\mathrm{Tr}_{\\mathbb{B}}\\left[U\\gamma_{0}U^{\\dagger}\\right] \\tag{11}\\]
The dynamical map in this problem is applicable only over a compatibility domain of states, rather than over the complete state space of the system \(\mathbb{A}\). The compatibility domain [16; 33] is the set of states that are compatible with the correlations in \(\gamma_{0}\). Formally, this problem defines an extension map[34; 35; 33] (also known as a preparation) that relates the initial state of the system \(\mathbb{A}\) to the overall initial state of \(\mathbb{AB}\). The extension map is linear but not necessarily a completely positive map. Therefore, in this context, such dynamical maps are a theoretical tool, and have a limited relation to a physical experiment. This is why we differentiate between process maps and dynamical maps.
## References
* (1) I. L. Chuang and M. A. Nielsen, J. Mod. Opt. **44**, 2455 (1997).
* (2) J. F. Poyatos, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. **78**, 390 (1997).
* (3) G. M. D'Ariano and P. Lo Presti, Phys. Rev. Lett. **86**, 4195 (2001).
* (4) F. De Martini, A. Mazzei, M. Ricci, and G. M. D'Ariano, Phys. Rev. A **67**, 062307 (2003).
* (5) J. B. Altepeter, D. Branning, E. Jeffrey, T. C. Wei, P. G. Kwiat, R. T. Thew, J. L. O'Brien, M. A. Nielsen, and A. G. White, Phys. Rev. Lett. **90**, 193601 (2003).
* (6) G. M. D'Ariano and P. Lo Presti, Phys. Rev. Lett. **91**, 047902 (2003).
* (7) M. Mohseni and D. A. Lidar, Phys. Rev. Lett. **97**, 170501 (2006).
* (8) M. Mohseni and D. A. Lidar, Phys. Rev. A **75**, 062331 (2007).
* (9) M. A. Nielsen, E. Knill, and R. Laflamme, Nature **396**, 52 (1998).
* (10) A. M. Childs, I. L. Chuang, and D. W. Leung, Phys. Rev. A **64**, 012314 (2001).
* (11) M. W. Mitchell, C. W. Ellenor, S. Schneider, and A. M. Steinberg, Phys. Rev. Lett. **91**, 120402 (2003).
* (12) Y. S. Weinstein, T. F. Havel, J. Emerson, N. Boulant, M. Saraceno, S. Lloyd, and D. G. Cory, J. Chem. Phys. **121**, 6117 (2004).
* (13) J. L. O'Brien, G. J. Pryde, A. Gilchrist, D. F. V. James, N. K. Langford, T. C. Ralph, and A. G. White, Phys. Rev. Lett. **93**, 080502 (2004).
* (14) M. Howard, J. Twamley, C. Wittmann, T. Gaebel, F. Jelezko, and J. Wrachtrup, New J. Phys. **8**, 33 (2006).
* (15) S. H. Myrskog, J. K. Fox, M. W. Mitchell, and A. M. Steinberg, Phys. Rev. A **72**, 013615 (2005).
* (16) T. F. Jordan, A. Shaji, and E. C. G. Sudarshan, Phys. Rev. A **70**, 1 (2004).
* (17) C. A. Rodriguez, K. Modi, A. Kuah, E. C. G. Sudarshan, and A. Shaji, arXiv:quant-ph/0703022v3 (2007).
* (18) P. Stelmachovic and V. Buzek, Phys. Rev. A **64**, 062106 (2001).
* (19) H. Carteret, D. R. Terno, and K. Zyczkowski, arXiv:quant-ph/0512167 (2005).
* (20) M. Ziman, arXiv:quant-ph/0603166 (2006).
* (21) E. C. G. Sudarshan, P. M. Mathews, and J. Rau, Phys. Rev. **121**, 920 (1961).
* (22) E. C. G. Sudarshan and T. F. Jordan, J. Math. Phys. **2**, 772 (1961).
* (23) M. Nielsen and I. Chuang, _Quantum Computation and Quantum Information_ (Cambridge University Press, 2000).
* (24) V. Gorini and E. C. G. Sudarshan, Comm. Math. Phys. **46**, 43 (1976).
* (25) K. M. Fonseca Romero, P. Talkner, and P. Hanggi, Phys. Rev. A **69**, 052109 (2004).
* (26) A. Kuah and E. C. G. Sudarshan, arXiv:quant-ph/0209083v1 (2002).
* (27) M. S. Byrd and N. Khaneja, Phys. Rev. A **68**, 062322 (2003).
* (28) T. F. Havel, J. Math. Phys. **44**, 534 (2003).
* (29) J. von Neumann, _Mathematische Grundlagen der Quantenmechanik_ (Springer, Berlin, 1932).
* (30) J. Jauch and C. Piron, Helv. Phys. Acta **40**, 559 (1967).
* (31) E. B. Davies and J. T. Lewis, Comm. Math. Phys. **17**, 239 (1970).
* (32) E. Sudarshan, _From SU(3) to Gravity (Festschrift in honor of Yuval Ne'eman): Quantum Measurement and Dynamical Maps_ (Cambridge University Press, 1986).
* (33) P. Stelmachovic and V. Buzek, Phys. Rev. A **64**, 062106 (2001).
* (34) H.-P. Breuer, Phys. Rev. A **75**, 022103 (2007).

We study the effects of preparation of input states in a quantum tomography experiment. We show that maps arising from a quantum process tomography experiment (called process maps) differ from the well-known dynamical maps. The difference between the two is due to the preparation procedure that is necessary for any quantum experiment. We study two preparation procedures, stochastic preparation and preparation by measurements. The stochastic preparation procedure yields process maps that are linear, while preparations using von Neumann measurements lead to non-linear processes and can only be consistently described by a bi-linear process map. A new process tomography recipe is derived for preparation by measurement for qubits. The difference between the two methods is analyzed in terms of a quantum process tomography experiment. A verification protocol is proposed to differentiate between linear processes and bi-linear processes. We also emphasize that the preparation procedure will have a non-trivial effect for any quantum experiment in which the system of interest interacts with its environment.
pacs: 03.65.Ta, 03.65.Yz, 03.67.Mn
**Altitude and Latitude Distribution of Atmospheric Aerosol and Water Vapor**
**from the Narrow-Band Lunar Eclipse Photometry**
Oleg S. Ugolnikov \\({}^{\\rm a,b,}\\)1 and Igor A. Maslov \\({}^{\\rm a,c}\\)
Footnote 1: Corresponding author. Fax: +7-495-333-5178. E-Mail: [email protected].
\\({}^{\\rm a}\\)_Space Research Institute, Profsoyuznaya st., 84/32, 117997, Moscow, Russia_
\\({}^{\\rm b}\\)_Astro-Space Center, Lebedev's Physical Institute, Profsoyuznaya st., 84/32, 117997, Moscow, Russia_
\\({}^{\\rm c}\\)_Sternberg Astronomical Institute, Universitetsky prosp., 13, 119992, Moscow, Russia_
## 1 Introduction
Lunar eclipse photometry is the only ground-based technique for atmospheric remote sensing with altitude resolution based on measurements of tangent-path absorption. During an eclipse the Moon crosses the Earth's umbra. Each point of this space is illuminated by solar emission refracted by a definite range of angles in the atmosphere of the Earth. The refraction angle is determined by the minimal altitude of the light ray path above the Earth's surface (the ray path perigee). The position angle of a point on the Moon determines the location of the ray perigee above the Earth. Thus, different points of the umbra crossed by the Moon during the eclipse correspond to different perigee locations and different altitudes in the atmosphere. In fact, the scheme of the eclipse is similar to that of space-based atmospheric absorption measurements, where the Sun is the light source and the Moon plays the role of the spacecraft. The orbit radius of the Moon is much larger than that of atmospheric measurement satellites, which brings both advantages and disadvantages.
An advantage of the lunar eclipse method is the slow dependence of the effective ray perigee altitude on the position of the lunar surface point inside the umbra. This gives the possibility of high altitude resolution in the troposphere. The astronomical measurement technique brings high sensitivity, which is important in the case of strong absorption along the tropospheric tangent path. One more advantage of the method is the large angular size of the Moon, allowing simultaneous measurements in different parts of the umbra. During each total eclipse the Moon covers a large part of the northern (or southern) half of the umbra, giving the possibility to build the altitude distribution of atmospheric absorption along the northern (or southern) part of the Earth's limb - the line where the Sun and the Moon are at the horizon during totality.
The main problem of the method is the large angular size of the Sun, which covers a definite range of refraction angles (or perigee altitudes) and position angles (or locations at the Earth's limb) as observed from the Moon. To convert the observational data to the resulting extinction values on a grid of altitudes and locations, we have to solve an ill-posed mathematical problem. This can be done using regularization algorithms, restricting the horizontal and vertical resolution, and using a priori information about the possible solution values.
A wide theoretical and observational analysis of lunar eclipse phenomena was performed by Link [1]. The theoretical part of this book contains a review of methods for calculating the lunar surface brightness during an eclipse. The observational part describes the correlation of eclipse magnitude (or brightness) with various characteristics that can influence the magnitude, including solar activity. As was shown by A. Danjon in the early 1900s, the darkest eclipses (during which the Moon almost disappears from the sky) preferably happen right after the solar activity minimum. However, the reason for such a correlation has not been found, and the author did not reject the possibility that it is occasional.
The method of inverse problem solution was suggested in [2] for the observational data of two total lunar eclipses in 2004. The observations were carried out in a two-peaked spectral band in the red and near-IR ranges, mostly avoiding the absorption bands of atmospheric gases. Comparing the observational data with gaseous atmosphere model calculations, the authors found additional extinction, which can be related to atmospheric aerosol. The inverse problem solution was one-dimensional: the solar disk reintegration was made only by the refraction angle \(\beta\) (see Figure 1), basing on the dependency of brightness on the distance from the umbra center at a definite umbra position angle \(PA\) (see Figure 2). The procedure works well if the latitude dependence of the aerosol extinction is slow. It also works in the outer umbra part with small refraction angles, where the solar disk beyond the Earth covers a small range of position angles \(\psi\) (see Figure 1). These small refraction angles correspond to altitudes of about 10 km, where the aerosol distribution was compared with cloud clusters using meteorological maps.
In this paper the two-dimensional method of inverse problem solution is developed for the observational data of the total lunar eclipse of March 3, 2007. The reintegration is done simultaneously by refraction angle \(\beta\) and position angle \(\psi\). The achievable resolution and the comparison with one-dimensional reintegration data determine the possibilities of atmosphere remote sensing by lunar eclipse observations. Measurements in two narrow spectral bands in the near-IR range outside and inside the H\({}_{2}\)O absorption lines allow investigating not only the aerosol, but also the water vapor distribution.
## 2 Observations.
Lunar surface photometry during the total eclipse of March 3, 2007 was held at the Southern Laboratory of Sternberg Astronomical Institute, Crimea, Ukraine (44.7\({}^{\rm o}\)N, 34.0\({}^{\rm o}\)E). The observations were carried out by two CCD cameras: SBIG ST-6 with a "Rubinar-500" lens (focal distance 500 mm, 1:8) and Sony DSI-Pro with a "Jupiter-36B" lens (focal distance 250 mm, 1:3.5). The entire lunar disk was placed in both frames. Both devices worked with near-IR narrow-band interference filters with central wavelengths equal to 867 and 938 nm and widths at the half-maximum level equal to 10 and 55 nm, respectively. Glass color filters blocked the secondary maxima of the interference filters. The exposure time varied by a factor of 100 from the non-eclipsed Moon to totality. The first spectral interval is free from absorption lines of atmospheric gases; the extinction in the atmosphere is formed by molecular and aerosol scattering. The latter has a slow wavelength dependency, and we can assume that the aerosol extinction coefficient is the same at the wavelength 938 nm, where strong water vapor absorption is added. The effective cross-section of H\({}_{2}\)O, integrated over the second observational band using the spectral measurements [3, 4], is equal to 5.6\(\cdot\)10\({}^{-23}\) cm\({}^{2}\) molecule\({}^{-1}\).

Figure 1: Eclipse geometry as observed from the Moon.
Observations were carried out in variable weather conditions; the sky was partially cloudy. The atmospheric transparency at the observation point changed rapidly. To control it, measurements of the star 56 VY Leonis in the same frame with the Moon were held. This star was situated close to the Moon during the whole eclipse, experiencing a 3-min grazing occultation during totality. It is bright enough in the near-IR to allow CCD photometry even near the eclipsed lunar disk and can be considered a uniform light source in our bands on a time scale of several hours.
The methods of lunar surface photometry and background subtraction were analogous to [2]. The result of the procedure is the two-dimensional distribution of the umbra darkening factor \(U(d,PA)\). This value is equal to the ratio of the brightness of an element of the eclipsed lunar surface at distance \(d\) from the center of the umbra at position angle \(PA\) (see designations in Figure 2) to the brightness of the same element outside the umbra and penumbra. This distribution inside the umbra is shown as an isophote map in Figure 2 for both wavelengths. The most noticeable feature visible in both maps is the dark spot shifted far from the umbra center in the equatorial direction. This minimum is also seen in the dependency \(U(d)\) for position angle 90\({}^{\circ}\) (east equatorial direction) shown in Figure 3, together with the one for position angle 0\({}^{\circ}\) (northern polar direction) and the theoretical curve for the dry gaseous atmosphere model analogous to [2]. This spot can appear even in the case of a monotonous dependence of the extinction coefficient on altitude. Lower umbra brightness in the equatorial regions was also noticed in [2] for other lunar eclipses and spectral passbands.
As can be seen in Figure 3, the brightness of the eclipsed lunar disk is less than the one calculated for the dry gaseous atmosphere model. This difference is small in the outer regions and increases toward the center. This picture is similar to the one observed for the two eclipses in 2004 [2], and the eclipse of March 3, 2007 was not darker than the previous ones, as was expected from the solar activity correlation [1]. We can also see that the umbra darkening factor in the polar regions (\(PA\) near zero) is principally the same for both wavelengths. It shows the vanishing level of water vapor in the northern winter troposphere. The equatorial part of the umbra, including the dark spot, shows a substantial difference of the umbra darkening factor between the two wavelengths. This difference is related to the high amount of water vapor in the equatorial troposphere. It will be proved numerically in the next chapter of this work.
## 3 Two-dimensional reintegration procedure.
Each point inside the umbra is illuminated by the emission of the Sun refracted in the atmosphere of the Earth. The Sun has a large angular size (just about 3.5 times less than that of the Earth as observed from the Moon). So the emission of different parts of the solar disk is refracted by different angles (at different altitudes) above different points of the Earth's limb. To calculate the extinction at a definite position angle and altitude, a reintegration procedure is needed. In paper [2] this procedure was one-dimensional: the solar disk was divided into arcs with the same refraction angle and perigee altitude, and the data were related to the point on the limb corresponding to the middle of the arc (see Figure 1). Here the arcs are also divided into zones corresponding to different limb locations. However, the one-dimensional reintegration procedure [2] will also be run for comparison of results.
Figure 2: Umbra darkening factor distribution for the two observational spectral bands, with position angle denotations.

Figure 3: Dependencies of the umbra darkening factor on the distance from the center of the umbra for two wavelengths and two position angles, compared with theoretical curves for the dry gaseous atmosphere.

The umbra darkening factor of the point at the angular distance \(d\) from the umbra center and position angle \(PA\) is equal to [2]:
\\[U(d,PA)=\\frac{I(d,PA)}{I_{0}}=\\frac{1}{I_{0}}\\int\\limits_{0}^{ \\rho_{0}}\\int\\limits_{0}^{2\\pi}S(\\rho)\\Bigg{(}\\frac{\\pi_{0}}{\\pi_{0}-\\beta_{1}}E( \\beta_{1},\\gamma_{1})+\\frac{\\pi_{0}}{\\beta_{2}-\\pi_{0}}E(\\beta_{2},\\gamma_{2}) \\Bigg{)}\\rho\\,d\\varphi\\,d\\rho,\\] \\[I_{0}=\\int\\limits_{0}^{\\rho_{0}}\\int\\limits_{0}^{2\\pi}S(\\rho)\\, \\rho\\,d\\varphi\\,d\\rho,\\quad\\beta_{1,2}=\\beta_{1,2}(d,\\rho,\\varphi);\\quad\\gamma _{1,2}=PA-\\psi_{1,2}(d,\\rho,\\varphi)\\] (1).
Here \(I\) and \(I_{0}\) are the brightness values of the Sun visible from this point beyond the Earth's atmosphere and outside the eclipse, \(S(\rho)\) is the surface brightness distribution of the solar disk, \(\pi_{0}\) is the horizontal parallax of the Moon, and \(\beta_{1,2}\) and \(\gamma_{1,2}\) are the refraction and position angles of the solar disk fragment emission. Other designations can be seen in Figure 1. Note that the picture visible from the Moon is mirror-transformed relative to the one seen from the Earth (Figure 2), and the angles \(PA\) and \(\gamma\) in Figure 1 are counted clockwise. The ray dilution factor \(E(\beta,\gamma)\) is expressed as follows [2]:
\\[E(\\beta,\\gamma)=0,\\quad\\beta>\\beta_{0};\\] \\[E(\\beta,\\gamma)=T_{G}\\,(h(\\beta))\\cdot T_{A}\\,(h(\\beta),\\gamma )\\cdot\\frac{1}{1-L\\frac{d\\beta}{dh}},\\quad 0\\leq\\beta\\leq\\beta_{0};\\] \\[E(\\beta,\\gamma)=1,\\quad\\beta<0 \\tag{2}\\]
Here \(h(\beta)\) is the perigee altitude of the ray path refracted by the angle \(\beta\), \(L\) is the distance between the Earth and the Moon, \(T_{G}\) is the transparency of the dry gaseous atmosphere along this path, which can be assumed independent of position angle \(\gamma\), and \(T_{A}\) is the transparency of the atmospheric aerosol along the same path, which depends on \(\gamma\). If we observe inside the water vapor absorption band, the term \(T_{W}(h(\beta),\gamma)\) also appears.
To regularize the solution, we assume (following [2]) that the dependency \\(T_{A}(h(\\beta),\\gamma)\\) (or the result of its multiplication with \\(T_{W}\\) for 938 nm) is partially linear by the angles \\(\\beta\\) and \\(\\gamma\\):
\\[T_{A}(h(\\beta),\\gamma)=T_{A2}\\,(\\beta_{k},\\gamma_{l})\\Bigg{(}1- \\frac{\\Big{|}\\beta-\\beta_{k}\\Big{|}}{\\beta_{S}}\\Bigg{)}\\Bigg{(}1-\\frac{\\Big{|} \\gamma-\\gamma_{l}\\Big{|}}{\\gamma_{S}}\\Bigg{)}+T_{A2}\\,(\\beta_{k+1},\\gamma_{l} )\\Bigg{(}1-\\frac{\\Big{|}\\beta-\\beta_{k+1}\\Big{|}}{\\beta_{S}}\\Bigg{)}\\Bigg{(}1- \\frac{\\Big{|}\\gamma-\\gamma_{l}\\Big{|}}{\\gamma_{S}}\\Bigg{)}+\\] \\[+T_{A2}\\,(\\beta_{k},\\gamma_{l+1})\\Bigg{(}1-\\frac{\\Big{|}\\beta- \\beta_{k}\\Big{|}}{\\beta_{S}}\\Bigg{)}\\Bigg{(}1-\\frac{\\Big{|}\\gamma-\\gamma_{l+1 }\\Big{|}}{\\gamma_{S}}\\Bigg{)}+T_{A2}\\,(\\beta_{k+1},\\gamma_{l+1})\\Bigg{(}1- \\frac{\\Big{|}\\beta-\\beta_{k+1}\\Big{|}}{\\beta_{S}}\\Bigg{)}\\Bigg{(}1-\\frac{ \\Big{|}\\gamma-\\gamma_{l+1}\\Big{|}}{\\gamma_{S}}\\Bigg{)} \\tag{3}\\]
Here \(\beta_{k}\) and \(\beta_{k+1}\) are the left and right neighbor grid points by the refraction angle, \(\gamma_{l}\) and \(\gamma_{l+1}\) are the same for the position angle, and \(\beta_{S}\) and \(\gamma_{S}\) are the steps of the grid. For the wavelength \(\lambda_{1}\) (867 nm) the integral in formula (1) turns into the sum
\\[U(d,PA,\\lambda_{1})=\\sum_{k}\\sum_{l}U_{kl}(d,PA,\\lambda_{1})\\cdot T_{A2}\\,( \\beta_{k},\\gamma_{l}) \\tag{4}\\]
where the coefficients
\[U_{kl}(d,PA,\lambda_{1})=\frac{1}{I_{0}}\int\limits_{0}^{\rho_{0}}\int\limits_{0}^{2\pi}S(\rho,\lambda_{1})\cdot\Bigg{\{}\frac{\pi_{0}}{\pi_{0}-\beta_{1}}T_{G}\,(h(\beta_{1}),\lambda_{1})\frac{1}{1-L\frac{d\beta_{1}}{dh}}B_{k}\,(\beta_{1})G_{l}(\gamma_{1})+\frac{\pi_{0}}{\beta_{2}-\pi_{0}}T_{G}\,(h(\beta_{2}),\lambda_{1})\frac{1}{1-L\frac{d\beta_{2}}{dh}}B_{k}\,(\beta_{2})\,G_{l}(\gamma_{2})\Bigg{\}}\cdot\rho\,d\varphi\,d\rho \tag{5}\]

can be calculated using the dry gaseous atmosphere model. Here
\\[B_{k}\\left(\\beta\\right)=1-\\frac{\\left|\\beta-\\beta_{k}\\right|}{ \\beta_{S}},\\ \\ \\ \\left|\\beta-\\beta_{k}\\right|<\\beta_{S};\\ \\ \\ B_{k}\\left(\\beta\\right)=0,\\ \\ \\ \\left|\\beta-\\beta_{k}\\right|\\geq\\beta_{S};\\] \\[G_{l}(\\gamma)=1-\\frac{\\left|\\gamma-\\gamma_{l}\\right|}{\\gamma_{S} },\\ \\ \\ \\left|\\gamma-\\gamma_{l}\\right|<\\gamma_{S};\\ \\ \\ G_{l}(\\gamma)=0,\\ \\ \\ \\left|\\gamma-\\gamma_{l}\\right|\\geq\\gamma_{S} \\tag{6}\\]
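For illustration, the interpolation of Eq. (3) with the hat functions of Eq. (6) can be sketched as follows; the grid matches the one adopted below (0.2\({}^{\rm o}\) in \(\beta\), 15\({}^{\rm o}\) in \(\gamma\)), while the grid values themselves are random stand-ins:

```python
import numpy as np

beta_grid = np.arange(0.0, 1.2, 0.2)        # 6 nodes in refraction angle, deg
gamma_grid = np.arange(-60.0, 135.0, 15.0)  # 13 nodes in position angle, deg

def hat(x, nodes, step):
    """B_k(x) (or G_l(x)) of Eq. (6), evaluated for every node at once."""
    return np.clip(1.0 - np.abs(x - nodes) / step, 0.0, None)

def T_A(beta, gamma, T):                    # T holds T_A2 on the (6, 13) grid
    return hat(beta, beta_grid, 0.2) @ T @ hat(gamma, gamma_grid, 15.0)  # Eq. (3)

T = np.random.default_rng(1).uniform(size=(6, 13))
print(T_A(0.3, 10.0, T))                    # lies between the 4 nearest grid nodes
```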
Formula (4) is a linear equation for the values \(T_{A2}\,(\beta_{k},\gamma_{l})\). Having written it for different values of \(d\) and \(PA\), we obtain a system that can be solved by the least squares method. But the solutions can still have large errors due to the ill-posedness of the mathematical problem. For further regularization we have to take into account all a priori information about the solutions. First of all, the solutions are values of atmospheric transparency, which has the following property:
\\[0\\leq T_{A2}\\left(\\beta_{k},\\gamma_{l}\\right)\\leq 1 \\tag{7}\\]
The dependency of the umbra darkening factor on \(d\) (see Figure 3) is like the ones for the 2004 eclipses, and we can follow [2] using the natural assumption
\\[T_{A2}\\left(\\beta_{k},\\gamma_{l}\\right)=1,\\ \\ \\ \\beta_{k}=0;\\] \\[T_{A2}\\left(\\beta_{k},\\gamma_{l}\\right)=0,\\ \\ \\ \\beta_{k}\\geq 1^{ \\mathrm{o}} \\tag{8}\\]
Here we mean that the atmosphere is transparent at its upper border and absorbs all tangent emission propagating through its lowest layers (less than 0.7 km). Moreover, for better accuracy of the results at the edges of the grid, we calculate them in the range of angles \(\gamma\) between \(\gamma_{MIN}=-60^{\mathrm{o}}\) and \(\gamma_{MAX}=120^{\mathrm{o}}\), corresponding to the minimal and maximal \(PA\) values (as shown in Figure 2), assuming
\\[T_{A2}\\left(\\beta_{k},\\gamma\\right)=T_{A2}\\left(\\beta_{k},\\gamma _{MIN}\\right),\\ \\ \\ \\gamma<\\gamma_{MIN};\\] \\[T_{A2}\\left(\\beta_{k},\\gamma\\right)=T_{A2}\\left(\\beta_{k},\\gamma _{MAX}\\right),\\ \\ \\ \\gamma>\\gamma_{MAX} \\tag{9}\\]
Following [2], the resolution by refraction angle \(\beta_{S}\) is taken to be \(0.2^{\mathrm{o}}\), which is slightly smaller than the angular radius of the Sun. It corresponds to an altitude resolution in the troposphere of 3-4 km. Finer resolution leads to increased errors in the solution. The corresponding resolution by position angle \(\gamma_{S}\) is equal to \(15^{\mathrm{o}}\). So we have six grid points by angle \(\beta\) (from \(0.0^{\mathrm{o}}\) to \(1.0^{\mathrm{o}}\) through \(0.2^{\mathrm{o}}\)) and 13 grid points by angle \(\gamma\) (from \(-60^{\mathrm{o}}\) to \(120^{\mathrm{o}}\) through \(15^{\mathrm{o}}\)). The number of independent parameters is less than 78 due to the a priori information.
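A minimal sketch of the constrained least squares solve is given below. The design matrix and data are random stand-ins for the \(U_{kl}\) coefficients of Eq. (5) and the measured umbra darkening factors; the physical bounds of Eq. (7) are imposed directly, while the boundary conditions of Eqs. (8) and (9) would further reduce the number of free parameters:

```python
import numpy as np
from scipy.optimize import lsq_linear

n_obs, n_par = 400, 6 * 13                  # measured (d, PA) points, grid size
rng = np.random.default_rng(2)
M = rng.uniform(size=(n_obs, n_par))        # stand-in for the U_kl coefficients
t_true = rng.uniform(size=n_par)            # stand-in transparencies
u_obs = M @ t_true + 0.01 * rng.normal(size=n_obs)

res = lsq_linear(M, u_obs, bounds=(0.0, 1.0))  # least squares with 0 <= T <= 1
print(res.x[:5], res.cost)
```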
Having run this procedure for the 867 nm observational data, we obtain the array \(T_{A2}(\beta_{k},\gamma_{l})\). Owing to the slow wavelength dependency of refraction and aerosol extinction, together with the small difference between the two observational wavelengths compared with the wavelengths themselves, we assume these parameters to be the same for \(\lambda_{2}\) equal to 938 nm. In order to maximize the a priori data for this wavelength, the values \(T_{A2}(\beta_{k},\gamma_{l})\) are included in the matrix. Thus, the equation system for the second wavelength takes the form
\\[U(d,PA,\\lambda_{2})=\\sum_{k}\\sum_{l}\\bigl{[}T_{A2}\\left(\\beta_{k},\\gamma_{l} \\right)\\cdot U_{kl}(d,PA,\\lambda_{2})\\bigr{]}\\cdot T_{W2}\\left(\\beta_{k}, \\gamma_{l}\\right) \\tag{10}\\]
The number of independent parameters here will be less than in equations (4) for 867 nm, since the pairs (\(k\), \(l\)) for which \(T_{A2}(\beta_{k},\gamma_{l})=0\) are excluded; \(T_{W}\) for these \(k\) and \(l\) cannot be found. This is basically the case at low altitudes, where the atmospheric aerosol absorbs all tangent emission. But if \(T_{A2}(\beta_{k},\gamma_{l})\) is close to 1, the corresponding \(T_{W}\) values can be found with higher accuracy.
The same procedure sequence is used to run the one-dimensional reintegration and to find the arc-averaged values of \(T_{A1}(\beta_{\rm k},\,PA)\) and \(T_{W1}(\beta_{\rm k},\,PA)\) for different \(PA\) values, in the same range as for the \(\gamma\) angles, by the method described in [2].
## 4 Results.
Figures 4 and 5 show the results of the one- and two-dimensional reintegration procedures for the \(T_{A}\) values at 867 nm at different perigee altitudes (or \(\beta\) angles), depending on the position angles \(PA\) (for the one-dimensional procedure) or \(\gamma\) (for the two-dimensional procedure). The coordinates of the Earth's limb points corresponding to the position angles are also shown. One-dimensional reintegration results are shown by connected dots, two-dimensional results by broken lines according to formula (3); least squares error bars are shown for the points where 0\(<\)\(T_{A2}\)\(<\)1. The good agreement of the one- and two-dimensional analyses is clearly seen for the refraction angles 0.2\({}^{\circ}\) and 0.4\({}^{\circ}\), corresponding to ray perigee altitudes 14.9 and 10.5 km, respectively (Figure 4). This delineates the altitude range in the troposphere where the method of lunar eclipse photometry is effective for detecting absorbing components. The upper troposphere (14.9 km) is basically free from aerosol absorption except for the northern polar regions and central Canada. Aerosol at this altitude was not detected anywhere during the 2004 lunar eclipses [2]. It also was not detected by twilight polarization measurements in Crimea from 2000 until October 2006 [5-7], but appeared there in December 2006. This aerosol is possibly part of polar stratospheric clouds moved southwards by the polar stratospheric vortex.
Atmospheric aerosol at the altitude of 10.5 km appears near the equator and in the polar regions, disappearing in the tropical zone. The same was observed during the eclipse of May 4, 2004. Aerosol absorption at 10.5 km is practically absent over Siberia and the Himalayan mountains, which is natural for continental winter conditions.
Analysis of the aerosol extinction for the larger refraction angles 0.6\({}^{\circ}\) and 0.8\({}^{\circ}\), with corresponding ray perigee altitudes 6.2 and 3.2 km, meets serious difficulties (see Figure 5). Radiation propagating along such paths in the atmosphere transfers to the deep umbra regions. Observed from there, the Sun is hidden deep behind the Earth and its disk covers a wide range of \(\psi\) angles (see Figure 1). The two-dimensional reintegration data with resolution 15\({}^{\circ}\) show rapid variations with larger errors, while the averaged one-dimensional reintegration data contain no remarkable features. The only thing that should be noted from the two-dimensional analysis is the vanishing of the \(T_{A2}\) values in the equatorial and northern tropical latitudes. It was expected owing to the huge aerosol concentration near the equator and light obscuration by the Himalayan mountains in the tropics.
The aerosol data obtained from the observations at the wavelength 867 nm and described above are then used to find the water vapor transparency \(T_{W2}(\beta_{\rm k},\,\gamma_{\rm l})\) using the 938 nm data by solving the equation system (10). For the altitudes 3.2 and 6.2 km, where the parameters \(T_{A2}(\beta_{\rm k},\gamma_{\rm l})\) are small and found with poor accuracy, the values \(T_{W2}\) will also have poor accuracy or cannot be found at all (if the corresponding \(T_{A2}(\beta_{\rm k},\,\gamma_{\rm l})\) value is equal to zero). We will focus our attention on the results for the ray perigee altitudes 10.5 and 14.9 km.
Figure 4: Atmospheric aerosol tangent transparency for different locations at the limb in the upper troposphere as the result of one-dimensional (connected dots) and two-dimensional (broken lines) reintegration procedures.

Figure 5: Atmospheric aerosol tangent transparency for different locations at the limb in the lower troposphere as the result of one-dimensional (connected dots) and two-dimensional (broken lines) reintegration procedures.

Figure 6 shows the data of the one-dimensional and two-dimensional reintegration for these perigee altitudes; the designations are analogous to those of Figure 4. The most remarkable feature is the strong water vapor light absorption at the altitude 10.5 km near the equator. The same figure contains the distribution of the total water vapor column density along the limb obtained by the AMC-DOAS technique in the SCIAMACHY space mission for the same day. The technique is described in [8-10] and updated in [11]. A good anti-correlation of the one-dimensionally reintegrated value \(T_{W1}(0.4^{\circ},\,PA)\) for perigee altitude \(h\)=10.5 km and the total column amount \(C_{0}\) is clearly seen. The optical depth of the water vapor along this tangent path is related to the column density as

\[\tau_{h}=-\ln T_{W1}(\beta(h),PA)=D\cdot C_{0} \tag{11}\]

where the coefficient \(D\) is equal to 0.12 cm\({}^{2}\)/g, being quite stable along the meridian. Assuming the altitude distribution of the water vapor to be exponential with scale \(H\), and the refraction angle to be small, we write the equation for the water vapor optical depth along the tangent path with perigee altitude \(h\):
\\[\\tau_{h}=\\int\\limits_{-\\infty}^{\\infty}\\sigma\\cdot n_{0}\\cdot e^{-\\frac{\\sqrt{ \\left(R+h\\right)^{2}+x^{2}}-R}{H}}dx\\approx\\sigma\\cdot n_{0}\\cdot e^{-\\frac{h }{H}\\int\\limits_{-\\infty}^{\\infty}}e^{-\\frac{x^{2}}{2RH}}dx=\\sigma\\cdot n_{h} \\sqrt{2\\pi RH} \\tag{12}\\]
Here \(R\) is the radius of the Earth, \(\sigma\) is the effective water vapor cross-section in the observational passband, and \(n_{0}\) and \(n_{h}\) are the vapor concentration values near the ground and at the altitude \(h\). The water vapor column density above the altitude \(h\) is equal to
\\[C_{h}=n_{h}mH=\\frac{\\tau_{h}m}{\\sigma}\\sqrt{\\frac{H}{2\\pi R}} \\tag{13}\\]
Here \\(m\\) is the water molecule mass. Substituting (11) into (13) and taking into account the exponential altitude dependence of water vapor amount, we obtain:
\\[\\frac{C_{h}}{C_{0}}=\\frac{mD}{\\sigma}\\sqrt{\\frac{H}{2\\pi R}}=e^{-\\frac{h}{H}} \\tag{14}\\]
Figure 6: Water vapor tangent transparency for different locations at the limb in the upper troposphere as the result of one-dimensional (connected dots) and two-dimensional (broken lines) reintegration procedures, compared with the AMC-DOAS water vapor total column density.

Solving this equation numerically, we obtain the values of \(H\) (1.3 km) and \(C_{h}/C_{0}\) (about 4\(\cdot\)10\({}^{-4}\)). This shows the rapid decrease of the water vapor amount with altitude. The scale weakly depends on the latitude. For the equatorial troposphere, assuming \(T_{W}\) to be equal to 0.5 (one-dimensional reintegration result), we obtain the water vapor concentration value for 10.5 km using formula (12): 5\(\cdot\)10\({}^{14}\) cm\({}^{-3}\). For the ray path with perigee altitude equal to 14.9 km, weak traces of water vapor (concentration estimate for the same \(H\) is about 10\({}^{14}\) cm\({}^{-3}\)) are seen northwards from the equator. A rapid fall of \(T_{W}\) southwards from the equator is also seen, but numerical analysis at the edge of the observed range of \(PA\) and \(\gamma\) can lead to substantial errors.
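A minimal numerical sketch of the solution of Eq. (14) is given below; \(D\) and \(\sigma\) are the values quoted above, while the water molecule mass and the Earth's radius are standard constants. The root lands near 1.3 km, and \(e^{-h/H}\approx 4\cdot 10^{-4}\), consistent with the values quoted above:

```python
import numpy as np
from scipy.optimize import brentq

m = 18.0 * 1.66e-24                  # water molecule mass, g
D, sigma = 0.12, 5.6e-23             # cm^2/g and cm^2 (values from the text)
h, R = 10.5e5, 6.371e8               # perigee altitude and Earth radius, cm

def f(H):                            # root of Eq. (14)
    return (m * D / sigma) * np.sqrt(H / (2.0 * np.pi * R)) - np.exp(-h / H)

H = brentq(f, 1e4, 1e6)              # bracket between 0.1 and 10 km
print(H / 1e5, "km", np.exp(-h / H))
```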
Summarizing the results, Figure 7 shows the distribution of the one-dimensional reintegration values of the tangent path extinction, \(T_{A1}\) and \(T_{W1}\), for the perigee altitude 10.5 km, compared with the weather map provided by the Space Science and Engineering Center, University of Wisconsin-Madison, for March 4, 2007, 6h UT (a few hours after the eclipse).
## 5 Discussion and conclusion
The paper describes the two-IR-band photometric analysis of the total lunar eclipse of March 3, 2007 and its application to the retrieval of the latitude and altitude distribution of aerosol and water vapor in the troposphere. On the one hand, a lunar eclipse gives the unique possibility to perform such remote sensing from the ground. On the other hand, the necessity of inverse problem solution, taking into account the large angular diameter of the Sun, restricts the resolution of the method in position angle (or latitude) and altitude. The analysis made in this paper shows that good results can be achieved in the upper troposphere layers, from 10 km to the tropopause.
The aerosol distribution is quite similar to the one obtained during the two lunar eclipses in 2004 [2], which is natural since the visual brightness of the 2007 eclipse was close to that of the 2004 ones. It does not follow the hypothesis of Danjon described in [1], predicting a decrease of eclipse brightness and a corresponding increase of the aerosol level in the upper troposphere after the solar activity minimum. The exception is central and northern Canada, where aerosol at the altitude of 14.9 km, absent everywhere in 2004, was detected alongside an enhanced level of aerosol at the altitude of 10.5 km. The Siberian and Himalayan regions of the Earth's limb are free from aerosol in the upper troposphere. Such aerosol appears in the near-equatorial part of South-Eastern Asia.
Analysis of the aerosol distribution below 10 km is difficult owing to strong tangent ray absorption and the loss of resolution by position angle caused by the angular size of the Sun. The only effects that are clearly seen are the increase of aerosol extinction near the equator and the obscuration by the Himalayan mountains in the tropics.
The data obtained in the second observational passband, in the range of water vapor absorption, give the possibility to investigate the distribution of water vapor at the same altitudes above the limb. The 10.5 km results show good agreement between the one- and two-dimensional models of reintegration and a correlation with the total water vapor column densities obtained by the SCIAMACHY AMC-DOAS technique [8-11]. Analysis of this correlation gives the possibility to calculate the characteristic scale of the water vapor altitude distribution (1.3 km), which is substantially smaller than that of other atmospheric components, and to determine the water vapor concentration in the upper troposphere above the limb.
## Acknowledgements
The authors are thankful to Stefan Noel (Institute of Environmental Physics/Remote Sensing, University of Bremen) for supplying the SCIAMACHY AMC-DOAS water vapor data and to the Space Science and Engineering Center, University of Wisconsin-Madison, for the weather map for the eclipse date. We would also like to thank V.I. Shenavrin (Sternberg Astronomical Institute) for help during the observations, and N.N. Shakhvorostova (Astro-Space Center, Lebedev's Physical Institute of RAS), K.B. Moiseenko (Institute of Atmospheric Physics of RAS), A.M. Feigin (Institute of Applied Physics of RAS) and E. Tsimerinov for useful remarks.
O.S. Ugolnikov is supported by Russian Science Support Foundation grant.
## References
1. Link F. Die Mondfinsternisse. Leipzig: Akademische Verlagsgesellschaft, 1956.
2. Ugolnikov O.S., Maslov I.A. Atmospheric aerosol limb scanning based on the lunar eclipses photometry. Journal of Quantitative Spectroscopy and Radiative Transfer, 2006, Vol. 102, P.499-512.
3. Aldener M., Brown S.S., Stark H., Daniel J.S., and Ravishankara A.R. Near-IR absorption of water vapor: Pressure dependence of line strengths and an upper limit for continuum absorption. Journal of Molecular Spectroscopy, 2005, Vol. 232, P. 223-230.
4. Merienne M.-F., Jenouvrier A., Hermans C., Vandaele A.C., Carleer M., Clerbaux C., Coheur P.-F., Colin R., Fally S., Bach M. Water vapor line parameters in the 1300-9250 cm\\({}^{-1}\\) region. Journal of Quantitative Spectroscopy and Radiative Transfer, 2003, Vol. 82, P. 99-117.
5. Ugolnikov O.S., Maslov I.A. Multi-Color Polarimetry of the Twilight Sky. The role of multiple scattering as the function of wavelength. Cosmic Research. 2002, Vol. 40, P. 242-251.
6. Ugolnikov O.S., Postylyakov O.V., Maslov I.A. Effects of multiple scattering and atmospheric aerosol on the polarization of the twilight sky. Journal of Quantitative Spectroscopy and Radiative Transfer, 2004, Vol. 88, P. 233-241.
7. Ugolnikov O.S., Maslov I.A. Detection of Leonids meteoric dust in the upper atmosphere by polarization measurements of the twilight sky. Planetary and Space Science. 2007 (in press).
8. Noel S., Buchwitz M., Burrows J.P. First retrieval of water vapour column amounts from SCIAMACHY measurements. Atmosphere Chemistry and Physics, 2004, Vol. 4, P.111-125.
9. Noel S., Buchwitz M., Bovensmann H., Burrows J.P. SCIAMACHY water vapour retrieval using AMC-DOAS. Proceedings of ENVISAT Symposium, Salzburg, Austria, 6-10 September, 2004 (ESA SP-572), 2005.
10. Noel S., Buchwitz M., Bovensmann H., Burrows J.P. Validation of SCIAMACHY AMC-DOAS water vapour columns. Atmosphere Chemistry and Physics, 2005, Vol. 5, P. 1835-1841.
11. Noel S., Mieruch S., Bovensmann H., Burrows J.P. A combined GOME and SCIAMACHY global water vapour data set. Proceedings of ENVISAT Symposium, Montreux, Switzerland, 23-27 April 2007 (ESA SP-636), 2007.
Figure 7: Transparency of atmospheric aerosol and water vapor along the tangent path with perigee altitude 10.5 km above different locations on the limb, compared with the weather map provided by the Space Science and Engineering Center, University of Wisconsin-Madison.

The work contains the description of two-narrow-IR-band observational data of the total lunar eclipse of March 3, 2007, and one- and two-dimensional procedures of radiative transfer equation solution. The results of the procedure are the extinction values for atmospheric aerosol and water vapor at different altitudes in the troposphere along the Earth's terminator crossing North America, the Arctic, Siberia and South-Eastern Asia. The altitude range and the possible latitude and altitude resolution of atmosphere remote sensing by lunar eclipse observations are fixed. The results of the water vapor retrieval are compared with data of a space experiment, and the scale of the vertical water vapor distribution is found.
**Keywords:** Lunar eclipse; Radiative transfer; Atmospheric aerosol; Water vapor.
# Reconstruction of \(5D\) Cosmological Models From Recent Observations
Chengwu Zhang
[email protected]
Lixin Xu
Yongli Ping and Hongya Liu
[email protected]
School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024, P.R. China
## 1 Introduction
Observations of Cosmic Microwave Background (CMB) anisotropies[1], high-redshift type Ia supernovae[2] and surveys of clusters of galaxies[3] indicate that an exotic component with negative pressure, dubbed dark energy, dominates the present universe. The most obvious candidate for this dark energy is the cosmological constant \(\Lambda\) with equation of state (\(w_{\Lambda}=-1\)), which is consistent with recent observations[1, 4] in the \(2\sigma\) region. However, it raises several theoretical difficulties[5, 6]. This has led to models for dark energy which evolve with time, such as quintessence[7], phantom[8], quintom[9], K-essence[10], tachyonic matter[11] and so on. For this kind of model, one can design many kinds of potentials[12] and then study the EOS of the dark energy. Another way is to use a parameterization of the EOS to fit the observational data, and then to reconstruct the potential and the evolution of the universe[13]. Various parameterizations of the EOS of dark energy have been presented and investigated[14, 15, 16, 17].
If the universe has more than four dimensions, general relativity should be extended from \(4D\) to higher dimensions. One such extension is the \(5D\) Space-Time-Matter (STM) theory[18, 19], in which our universe is a \(4D\) hypersurface floating in a \(5D\) Ricci-flat manifold. This theory is supported by Campbell's theorem, which states that any analytical solution of the \(ND\) Einstein equations can be embedded in an \((N+1)D\) Ricci-flat manifold[20]. A class of cosmological solutions of the STM theory was given by Liu and Mashhoon[22]; the authors restudied the solutions and pointed out that they can describe a bouncing universe. It was shown that dark energy models, similar to the 4D quintessence and phantom ones, can also be constructed in this \(5D\) cosmological solution, in which the scalar field is induced from the \(5D\) vacuum[23, 24]. The purpose of this paper is to use a model-independent method to reconstruct a \(5D\) cosmological model and then study the evolution of the universe and the EOS of the dark energy, constrained by recent observational data: the latest observations of the 182 Gold SNe Ia [25], the 3-year WMAP CMB shift parameter [4, 26] and the SDSS baryon acoustic peak[27]. The paper is organized as follows. In Section 2, we briefly introduce the \(5D\) Ricci-flat cosmological solution and derive the densities for the two major components of the universe. In Section 3, we reconstruct the evolution of the model from cosmological observations. Section 4 is a short discussion.
## 2 Dark energy in the \(5D\) Model
The \(5D\) cosmological model was described before[21, 22, 28, 30]. In this paper we consider the case where the \(4D\) induced matter \(T^{\alpha\beta}\) is composed of two components, dark matter \(\rho_{m}\) and dark energy \(\rho_{x}\), which are assumed to be noninteracting. So we have
\\[\\frac{3\\left(\\mu^{2}+k\\right)}{A^{2}} = \\rho_{m}+\\rho_{x},\\] \\[\\frac{2\\mu\\dot{\\mu}}{A\\dot{A}}+\\frac{\\mu^{2}+k}{A^{2}} = -p_{m}-p_{x}, \\tag{1}\\]
with
\\[\\rho_{m} = \\rho_{m0}A_{0}^{3}A^{-3},\\quad p_{m}=0, \\tag{2}\\] \\[p_{x} = w_{x}\\rho_{x}. \\tag{3}\\]
From Eqs. (1) - (3) and for \\(k=0\\), we obtain the EOS of the dark energy
\[w_{x}=\frac{p_{x}}{\rho_{x}}=-\frac{2\mu\dot{\mu}/\left(A\dot{A}\right)+\mu^{2}/A^{2}}{3\mu^{2}/A^{2}-\rho_{m0}A_{0}^{3}A^{-3}}, \tag{4}\]
and the dimensionless density parameters
\\[\\Omega_{m} = \\frac{\\rho_{m}}{\\rho_{m}+\\rho_{x}}=\\frac{\\rho_{m0}A_{0}^{3}}{3 \\mu^{2}A}, \\tag{5}\\] \\[\\Omega_{x} = 1-\\Omega_{m}. \\tag{6}\\]
where \(\rho_{m0}\) is the current value of the dark matter density.
Consider Eq. (4), where \(A\) is a function of \(t\) and \(y\). However, on a given \(y=constant\) hypersurface, \(A\) becomes \(A=A(t)\), which means we consider a hypersurface embedded in the \(5D\) Ricci-flat spacetime. As noticed before[29, 30], the term \(\dot{\mu}/\dot{A}\) in (4) can now be rewritten as \(d\mu/dA\). Furthermore, we use the relation
\\[A_{0}/A=1+z, \\tag{7}\\]
as an ansatz[29, 30] and define \(\mu_{0}^{2}/\mu^{2}=f(z)\) (with \(f(0)\equiv 1\)); then we find that Eqs. (4)-(6) can be expressed in terms of the redshift \(z\) as
\\[w_{x}=-\\frac{1+(1+z)dlnf(z)/dz}{3(1-\\Omega_{m})}, \\tag{8}\\]
\\[\\Omega_{m}=\\Omega_{m_{0}}(1+z)f(z), \\tag{9}\\]
\\[\\Omega_{x}=1-\\Omega_{m}, \\tag{10}\\]
\\[q=-\\frac{1+z}{2}dlnf(z)/dz. \\tag{11}\\]
where \(q\) is the deceleration parameter, and \(q<0\) means our universe is accelerating. Now we conclude that if the function \(w_{x}\) is given, the evolution of all the cosmic observable parameters in Eqs. (8) - (11) can be determined uniquely. We then adopt the parametrization of the EOS as follows[15, 31]:
\\[w_{x}(z)=w_{0}+w_{1}\\frac{z}{1+z} \\tag{12}\\]
From Eq. (8) and Eq. (12), we can obtain the function \\(f(z)\\)
\\[f(z)=\\frac{1}{(1+z)\\left[\\Omega_{m0}+(1-\\Omega_{m0})(1+z)^{3w_{0}+3w_{1}}\\exp( -\\frac{3w_{1}z}{1+z})\\right]}. \\tag{13}\\]
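As a cross-check, Eqs. (8)-(13) are straightforward to evaluate numerically; the sketch below uses placeholder parameter values close to the fit of the next section:

```python
import numpy as np

def f(z, w0, w1, om0):
    """Eq. (13)."""
    de = (1 - om0) * (1 + z) ** (3 * (w0 + w1)) * np.exp(-3 * w1 * z / (1 + z))
    return 1.0 / ((1 + z) * (om0 + de))

def omega_m(z, w0, w1, om0):                 # Eq. (9)
    return om0 * (1 + z) * f(z, w0, w1, om0)

def q(z, w0, w1, om0, dz=1e-5):              # Eq. (11), numerical d ln f / dz
    dlnf = (np.log(f(z + dz, w0, w1, om0)) -
            np.log(f(z - dz, w0, w1, om0))) / (2 * dz)
    return -0.5 * (1 + z) * dlnf

z = np.linspace(0.0, 2.0, 5)
print(omega_m(z, -1.05, 0.82, 0.29))         # grows toward 1 at high z
print(q(z, -1.05, 0.82, 0.29))               # negative at z = 0: acceleration
```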
In the next section, we will use recent observational data to find the best fit parameters \((w_{0},w_{1},\Omega_{m0})\).
## 3 The best fit parameters from cosmological observations
In a flat universe with Eq. (12), the Friedmann equation can be expressed as
\\[H^{2}(z)=H_{0}^{2}E(z)^{2}=H_{0}^{2}[\\Omega_{0m}(1+z)^{3}+(1-\\Omega_{0m})(1+z) ^{3(1+w_{0}+w_{1})}e^{\\frac{-3w_{1}z}{(1+z)}}] \\tag{14}\\]
Then the knowledge of \(\Omega_{m0}\) and \(H(z)\) is sufficient to determine \(w_{x}(z)\); we take \(H_{0}=72\ \mathrm{km\cdot s^{-1}\cdot Mpc^{-1}}\)[32]. We use the maximum likelihood method[33] to constrain the parameters.
The Gold dataset compiled by Riess et al. is a set of supernova data from various sources; it contains 182 gold points, obtained by discarding all SNe Ia with \(z<0.0233\) and all SNe Ia with quality='Silver' from previously published data, together with 21 new points with \(z>1\) discovered recently by the Hubble Space Telescope[25]. Theoretical model parameters are determined by minimizing the quantity
\[\chi^{2}_{SNe}(\Omega_{m0},w_{0},w_{1})=\sum_{i=1}^{N}\frac{(\mu_{obs}(z_{i})-\mu_{th}(z_{i}))^{2}}{\sigma^{2}_{(obs;i)}} \tag{15}\]

where \(N=182\) for the Gold SNe Ia data, and \(\sigma^{2}_{(obs;i)}\) are the errors due to flux uncertainties, intrinsic dispersion of the SNe Ia absolute magnitude and peculiar velocity dispersion, respectively. These errors are assumed to be Gaussian and uncorrelated. The theoretical distance modulus is defined as
\\[\\mu_{th}(z_{i}) \\equiv m_{th}(z_{i})-M \\tag{16}\\] \\[= 5\\log_{10}(D_{L}(z))+5\\log_{10}(\\frac{H_{0}^{-1}}{Mpc})+25\\]
where
\\[D_{L}(z)=H_{0}d_{L}(z)=(1+z)\\int_{0}^{z}\\frac{H_{0}dz^{{}^{\\prime}}}{H(z^{{}^{ \\prime}};\\Omega_{m0},w_{0},w_{1})} \\tag{17}\\]
and \\(\\mu_{obs}\\) is given by supernovae dataset.
The shift parameter is defined as[34]
\\[\\bar{R}=\\frac{l_{1}^{\\prime TT}}{l_{1}^{TT}}=\\frac{r_{s}}{r_{s}^{\\prime}} \\frac{d_{A}^{\\prime}(z_{rec}^{\\prime})}{d_{A}(z_{rec})}=\\frac{2}{\\Omega_{m0}^ {1/2}}\\frac{q(\\Omega_{r}^{\\prime},a_{rec})}{\\int_{0}^{z}\\frac{H_{0}dz^{\\prime }}{H(z^{\\prime})}} \\tag{18}\\]
where \\(z_{rec}\\) is the redshift of recombination, \\(r_{s}\\) is the sound horizon, \\(d_{A}(z_{rec})\\) is the sound horizon angular diameter distance, \\(q(\\Omega_{r}^{\\prime},a_{rec})\\) is the correction factor. For weak dependence of \\(q(\\Omega_{r}^{\\prime},a_{rec})\\), the shift parameter is usually expressed as
\\[R=\\Omega_{m0}^{1/2}\\int_{0}^{z}\\frac{H_{0}dz^{\\prime}}{H(z^{\\prime};\\Omega_{m 0},w_{0},w_{1})} \\tag{19}\\]
The value of \(R\) obtained from the 3-year WMAP data[4, 26] is
\\[R=1.70\\pm 0.03 \\tag{20}\\]
With this measurement of \(R\), we obtain the \(\chi^{2}_{CMB}\) expressed as
\\[\\chi^{2}_{CMB}(\\Omega_{m0},w_{0},w_{1})=\\frac{(R(\\Omega_{m0},w_{0},w_{1})-1.7 0)^{2}}{0.03^{2}} \\tag{21}\\]
The size of the Baryon Acoustic Oscillation (BAO) was found by Eisenstein et al.[27] using a large spectroscopic sample of luminous red galaxies from the SDSS; they obtained a parameter \(A\) which does not depend directly on dark energy models and can be expressed as
\\[A=\\Omega_{m0}^{1/2}E(z_{BAO})^{-1/3}[\\frac{1}{z_{BAO}}\\int_{0}^{z}\\frac{dz^{ \\prime}}{E(z^{\\prime};\\Omega_{m0},w_{0},w_{1})}]^{2/3} \\tag{22}\\]
where \\(z_{BAO}=0.35\\) and \\(A=0.469\\pm 0.017\\). We can minimize the \\(\\chi^{2}_{BAO}\\) defined as[35]
\\[\\chi^{2}_{BAO}(\\Omega_{m0},w_{0},w_{1})=\\frac{(A(\\Omega_{m0},w_{0},w_{1})-0.46 9)^{2}}{0.017^{2}} \\tag{23}\\]
To break the degeneracy of the observational data and find the best fit parameters, we combine these datasets to minimize the total likelihood \(\chi^{2}_{total}\)[36]
\[\chi^{2}_{total}=\chi^{2}_{SNe}+\chi^{2}_{CMB}+\chi^{2}_{BAO} \tag{24}\]
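Numerically, the minimization of Eq. (24) is straightforward. The sketch below is my own illustration rather than the authors' code: the function names, the crude marginalization over the distance-modulus offset, and the assumed recombination redshift \(z_{rec}=1089\) are not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def E(z, om0, w0, w1):
    # Dimensionless Hubble rate of Eq. (14) for the CPL-type EOS of Eq. (12).
    de = (1.0 - om0) * (1.0 + z)**(3.0 * (1.0 + w0 + w1)) * np.exp(-3.0 * w1 * z / (1.0 + z))
    return np.sqrt(om0 * (1.0 + z)**3 + de)

def chi2_total(p, z_sn, mu_obs, sigma):
    om0, w0, w1 = p
    # SNe term, Eqs. (15)-(17); the H0- and M-dependent constants in the
    # distance modulus are absorbed into a crude mean offset (an assumption).
    dl = np.array([(1.0 + z) * quad(lambda x: 1.0 / E(x, om0, w0, w1), 0.0, z)[0]
                   for z in z_sn])
    mu_th = 5.0 * np.log10(dl)
    offset = np.mean(mu_obs - mu_th)
    chi2_sn = np.sum(((mu_obs - mu_th - offset) / sigma)**2)
    # CMB shift parameter, Eqs. (19)-(21), with z_rec = 1089 assumed.
    R = np.sqrt(om0) * quad(lambda x: 1.0 / E(x, om0, w0, w1), 0.0, 1089.0)[0]
    chi2_cmb = ((R - 1.70) / 0.03)**2
    # BAO parameter, Eqs. (22)-(23), with z_BAO = 0.35.
    zb = 0.35
    A = (np.sqrt(om0) / E(zb, om0, w0, w1)**(1.0 / 3.0)
         * (quad(lambda x: 1.0 / E(x, om0, w0, w1), 0.0, zb)[0] / zb)**(2.0 / 3.0))
    chi2_bao = ((A - 0.469) / 0.017)**2
    return chi2_sn + chi2_cmb + chi2_bao

# Hypothetical usage, once the Gold dataset arrays (z_sn, mu_obs, sigma) are loaded:
# best = minimize(chi2_total, x0=[0.3, -1.0, 0.0], args=(z_sn, mu_obs, sigma),
#                 method="Nelder-Mead")
```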
We obtain the best fit values \((\Omega_{m0},w_{0},w_{1})=(0.288,-1.050,0.824)\); to identify the dependence of the best fit values on the parameters, we keep \(\Omega_{m0}\) fixed when calculating the confidence level of \((w_{0},w_{1})\). The errors of the best fit \(w_{x}(z)\) are calculated using the covariance matrix[37] and shown in Fig.1. The corresponding \(\chi^{2}\) contours in the parameter space \((w_{0},w_{1})\) are shown in Fig.2.
From Fig.1 we find that \(w_{x}(z)\) is constrained to a narrow region; the best fit \(w_{x}(z)\) crosses \(-1\) at about \(z=0.07\), and at present the best fit value is \(w_{x}(0)<-1\), but at the \(1\sigma\) confidence level we cannot rule out the possibility that \(w_{x}(0)>-1\). Fig.2 shows that a cosmological constant is ruled out at the \(1\sigma\) confidence level.
Using the function \(f(z)\) and the best fit values \((\Omega_{m0},w_{0},w_{1})\), we obtain \(\Omega_{m}\), \(\Omega_{x}\) and the deceleration parameter \(q\) from Eqs.(9)-(11), and their evolution is plotted in Fig.3. Fig.3 also shows the evolution of \(q_{\Lambda CDM}\), \(\Omega_{m-\Lambda CDM}\) and \(\Omega_{\Lambda}\) in a \(4D\) flat \(\Lambda\)CDM model with the present \(\Omega_{m0-\Lambda CDM}=0.283\) obtained from the above cosmological observations. We can see that the transition point from decelerated to accelerated expansion, where \(q=0\), is at \(z\simeq 0.5\), earlier than in the \(\Lambda\)CDM model. Our universe experiences an accelerated expansion at present, whether on a \(4D\) hypersurface embedded in a \(5D\) Ricci-flat spacetime or in the \(\Lambda\)CDM model.
## 4 Discussion
Observations indicate that our universe now is dominated by two dark components: dark energy and dark matter. The 5D cosmological solution presented by Liu, Mashhoon and Wesson in [21] and [22] contains two arbitrary functions \(\mu(t)\) and \(\nu(t)\); one of the two functions, \(\mu(t)\), plays a role similar to that of the potential \(V(\phi)\) in the quintessence and phantom dark energy models, and can easily be traded for another arbitrary function \(f(z)\). Thus, if the current value of the matter density parameter \(\Omega_{m0}\) and the EOS parameters \(w_{0}\) and \(w_{1}\) are all known, this \(f(z)\) can be determined uniquely. In this paper we mainly focus on the constraints on this model from recent observational data: the 182 Gold SNe Ia, the 3-year WMAP CMB shift parameter and the SDSS baryon acoustic peak. Our results show that the recent observations allow only a narrow variation of the dark energy EOS and that the best fit dynamical \(w_{x}(z)\) crosses \(-1\) in the recent past. Using the best fit values \((\Omega_{m0},w_{0},w_{1})\), we have studied the evolution of the dark matter density \(\Omega_{m}\), the dark energy density \(\Omega_{x}\) and the deceleration parameter \(q\) on a \(4D\) hypersurface of the \(5D\) spacetime, which is similar to the \(\Lambda\)CDM model. In the future, we hope that more and more precise cosmological observations will determine the key points of the evolution of our universe, such as the transition point from deceleration to acceleration, and thereby distinguish the \(5D\) cosmological model from others.

Figure 1: The best fits of \(w_{x}(z)\) with \(1\sigma\) errors (shaded region).

Figure 2: The contours show 2-D marginalized \(1\sigma\) and \(2\sigma\) confidence limits in the \((w_{0},w_{1})\) plane.
## 5 Acknowledgments
This work was supported by NSF (10573003), NBRP (2003CB716300) of P.R. China. The research of Lixin Xu was also supported in part by DUT 893321 and NSF (10647110).
## References
* [1] D.N. Spergel, et al., _Astrophys. J. Suppl._ **148** 175(2003), astro-ph/0302209.
* [2] A. G. Riess, et al., _Astron. J._ **116** 1009(1998), astro-ph/9805201.
* [3] A. C. Pope, et al., _Astrophys. J._ **607** 655(2004), astro-ph/0401249.
* [4] D. N. Spergel _et al._, arXiv (2006), astro-ph/0603449.
* [5] P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. **75**, 559 (2003), astro-ph/0207347.
* [6] T. Padmanabhan, Phys. Rep. **380**, 235 (2003).
* [7] I. Zlatev, L. M. Wang, and P. J. Steinhardt, Phys. Rev. Lett. **82**, 896 (1999).
* [8] R. R. Caldwell, M. Kamionkowski, and N. N. Weinberg, Phys. Rev. Lett. **91**, 071301 (2003).
* [9] B. Feng, X. L. Wang, and X. M. Zhang, Phys. Lett. B **607**, 35 (2005).
* [10] C. Armendariz-Picon, T. Damour, and V. Mukhanov, Phys. Lett. B **458**, 209 (1999).
* [11] T. Padmanabhan, Phys. Rev. D **66**, 021301 (2002).
* [12] V. Sahni, _Chaos. Soli. Frac._**16** 527(2003).
* [13] Z.K. Guo, N. Ohtab and Y.Z. Zhang, astro-ph/0505253.
* [14] P.S. Corasaniti and E.J. Copeland, _Phys. Rev._**D67**, 063521 (2003), astro-ph/0205544.
* [15] E.V. Linder, _Phys. Rev. Lett._**90**, 091301 (2003), astro-ph/0208512.
* [16] A. Upadhye, M. Ishak and P.J. Steinhardt, astro-ph/0411803.
* [17] Y. Wang and M. Tegmark, _Phys. Rev. Lett._**92**, 241302 (2004), astro-ph/0403292.
* [18] P.S. Wesson, _Space-Time-Matter_ (Singapore: World Scientific) 1999.
* [19] J.M. Overduin and P.S. Wesson, _Phys. Rept._**283**, 303(1997).
* [20] J. E. Campbell, _A Course of Differential Geometry_, (Clarendon, 1926).
* [21] H.Y. Liu and P.S. Wesson, _Astrophys. J._**562** 1(2001), gr-qc/0107093.
* [22] H.Y. Liu and B. Mashhoon, _Ann. Phys._**4** 565(1995).
* [23] B.R. Chang, H. Liu and L. Xu, _Mod. Phys. Lett._**A20**, 923(2005), astro-ph/0405084.
* [24] H.Y. Liu, et al., _Mod. Phys. Lett._ **A20**, 1973(2005), gr-qc/0504021.
* [25] A. G. Riess _et al._, arXiv (2006), astro-ph/0611572.
* [26] Y. Wang and P. Mukherjee, Astrophys. J. **650**, 1 (2006), astro-ph/0604051.
* [27] D. J. Eisenstein _et al._, Astrophys. J. **633**, 560 (2005).
* [28] H. Y. Liu, _Phys. Lett._**B560** 149(2003), hep-th/0206198.
* [29] L. Xu, H.Y. Liu and C.W. Zhang, _Int. J. Mod. Phys. D_**15**, 215(2006), astro-ph/0510673.
* [30] C. W. Zhang, H.Y. Liu and L. Xu _Mod. Phys. Lett. A_**21**, 571(2006).
* [31] M. Chevallier, D. Polarski _Int. J. Mod. Phys. D_**10**, 213 (2001).
* [32] W. L. Freedman _et al._, Astrophys. J. **553**, 47 (2001), astro-ph/0012376.
* [33] S. Nesseris and L. Perivolaropoulos, arXiv (2006), astro-ph/0610092.
* [34] J. R. Bond, G. Efstathiou, and M. Tegmark, Mon. Not. R. Astron. Soc. **291**, L33 (1997), astro-ph/9702100.
* [35] U. Alam and V. Sahni, Phys. Rev. D **73**, 084024(2006).
* [36] S. Nesseris and L. Perivolaropoulos, JCAP, 0701 (2007) 018, astro-ph/0610092.
* [37] U. Alam _et al._, astro-ph/0406672.

We use a parameterized equation of state (EOS) of dark energy in a \(5D\) Ricci-flat cosmological solution and suppose the universe contains two major components: dark matter and dark energy. Using the recent observational datasets: the latest 182 type Ia Supernovae Gold data, the 3-year WMAP CMB shift parameter and the SDSS baryon acoustic peak, we obtain the best fit values of the EOS and the two major components' evolution. We find that the best fit EOS crosses \(-1\) in the near past at \(z\simeq 0.07\), that the present best fit value is \(w_{x}(0)<-1\), and that for this model the universe begins to accelerate at about \(z\simeq 0.5\).
Keywords: Kaluza-Klein theory; cosmology; dark energy
Junwei Zhao
W. W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA94305-4085
## 1 Introduction
Being able to detect a large solar active region before it rotates into our view from the Sun's far-side, and monitor a large active region after it rotates out of our view into the far-side from the Sun's west limb, is of great importance for the space weather forecast (Schrijver & DeRosa 2003). Using helioseismic holography technique (for a review, see Lindsey & Braun 2000a), Lindsey & Braun (2000b) mapped the central region of the far-side Sun by utilizing double-skip solar acoustic signals visible in the near-side of the Sun but originated from or arriving into the targeted far-side area. Then, Braun & Lindsey (2001) developed furthermore this technique to map the near-limb and polar areas of the solar far-side by combining acoustic signals visible in the near-side after single and triple skips on either side of the targeted far-side area. These efforts have made monitoring active regions of the whole back side of the Sun possible, and daily solar far-side images have becomeavailable using _Solar and Heliospheric Observatory_/Michelson Doppler Imager (_SOHO_/MDI; Scherrer et al., 1995) and Global Oscillation Network Group (GONG; Harvey et al., 1996) observations1. That the far-side active regions are detectable by phase-sensitive holography (Braun & Lindsey, 2000) is mainly because low- and medium-\\(l\\) (\\(l\\) is spherical harmonic degree) acoustic waves manifest apparent travel time deficit in solar active regions, and it is these travel time anomalies that far-side maps image, although it is still quite uncertain whether such anomalies are due to the near-surface perturbation in solar magnetic regions (Fan et al., 1995; Lindsey & Braun, 2005), or caused by faster sound speed structures in the interior of those active regions (e.g., Kosovichev et al., 2000; Zhao & Kosovichev, 2006).
Footnote 1: See [http://soi.stanford.edu/data/farside/](http://soi.stanford.edu/data/farside/) for MDI results, and see [http://gong.nso.edu/data/farside/](http://gong.nso.edu/data/farside/) for GONG results
Time-distance helioseismology is another local helioseismological tool that is capable of mapping the solar far-side active regions. It was demonstrated that time-distance helioseismology could map the central areas of the far-side Sun in a similar way as Lindsey & Braun (2000b) had done (Duvall et al., 2000; Duvall & Kosovichev, 2001), however, no studies have been carried out to extend the mapping area into the whole far-side. In this Letter, utilizing the time-distance helioseismology technique, I explore both the previously used four-skip scheme and a new five-skip scheme and image two whole far-side maps using four-skip and five-skip acoustic signals separately. Combining both maps gives us solar far-side images with better signal-to-noise ratio and more reliable maps of active regions on the far-side. This provides another solar far-side imaging tool in addition to the existing holography technique, hence gives a possibility to cross-check the active regions seen in both techniques, and enhances the accuracy of monitoring solar far-side active regions.
## 2 Data and Technique
The medium-\\(l\\) program of _SOHO_/MDI provides nearly continuous observations of solar oscillation modes with \\(l\\) from 0 to \\(\\sim\\)300. The medium-\\(l\\) data are acquired with one minute cadence and a spatial sampling of 10 arcsec (0.6 heliographic degrees per pixel) after some averaging onboard the spacecraft and ground pipeline processing (Kosovichev et al., 1997). The oscillation modes that this program observes and the continuity of such observations make the medium-\\(l\\) data ideal to be used in solar far-side imaging.
The acoustic wave signals that originate from and then return to the solar front side after traveling to the back side and experiencing four and five bounces are to be used in the time-distance far-side analysis. It is natural and often useful to first examine whether such acoustic signals are detectable by the time-distance technique. A 1024-min MDI medium-\\(l\\) data set with a spatial size of \\(120^{\\circ}\\times 120^{\\circ}\\) is used for such an examination after the region is tracked with Carrington rotation rate and is remapped using Postel's projection with the solar disk center as its remapping center. The \\(l-\
u\\) power spectrum diagram of this data set is shown in Figure 1\\(a\\). The time-distance diagram computed using all oscillation modes (Figure 1\\(b\\)) clearly shows acoustic bounces of up to seven times in the front side, and at the locations of the expected signals traveling back to the front side after four and five bounces, there are some not-very-clear but existing signals. Since the acoustic signals that are needed for the far-side analysis should have a fairly long single-skip travel distance, thus corresponding to low-\\(l\\) modes, it is helpful to filter out all other acoustic modes and keep only those signals that are corresponding to the required travel distances. Such a filtering should help to improve the signal-to-noise ratio of time-distance signals at the desired distances. However, such a filtering is applied to the data after Postel's projection, thus it only works best where the geometry is least distorted. The white quadrangle in Figure 1\\(a\\) delimits the acoustic modes that are used in both four- and five-skip analyses (note that when computing far-side images, both four- and five-skip analyses use only part of the acoustic modes shown in the white quadrangle), covering a frequency range of \\(2.5-4.5\\) mHz and \\(l\\) of \\(3-50\\), and the time-distance diagram made using only these modes (Figure 1\\(c\\)) shows clearly the four- and five-skip time-distance signals. Considering that the time-distance group travel time is a bit off from the ray approximated time, as was pointed out by Kosovichev & Duvall (1996) and Duvall et al. (1997), it is understandable that theoretical travel time is a few minutes off from the time-distance group travel time after four and five bounces.
In order to keep good signal-to-noise ratio as well as reasonable computational burden, not all acoustic signals that come back from the far-side after four or five skips are used. The white boxes in Figure 1\\(c\\) indicate the distances that are utilized for far-side imaging. Namely, as sketched in Figure 2, for the four-skip scheme, the time-distance annulus radii are \\(133\\fdg 8-170\\fdg 0\\) from the target point for the double-double skip combination that is used to map the far-side central area, and \\(66\\fdg 9-85\\fdg 0\\) for the single-skip and \\(200\\fdg 7-25\\fdg 0\\) for the triple-skip in the single-triple combination that is used to map the areas that are close to the far-side limb and polar regions; such a combination covers a total of \\(190^{\\circ}\\) in longitude (at the equator, \\(5^{\\circ}\\) past the limb to the front side near both limbs). For the five-skip scheme, the annulus radii are \\(111\\fdg 6-174\\fdg 0\\) from the target point for the double-skip and \\(167\\fdg 4-261\\fdg 0\\) for the triple-skip; this scheme covers a total of \\(160^{\\circ}\\) in longitude, less than the whole far-side. Unlike in Braun & Lindsey (2001) where far-side polar areas were also included in the imaging, to reduce unnecessary computations only areas lower than the latitude of \\(48^{\\circ}\\), where nearly all active regions are located, are included in the time-distance far-side imaging computations reported here.
The computation procedure is as follows. A 2048-min long MDI medium-\\(l\\) data set is tracked with Carrington rotation rate, and remapped to Postel's coordinates centered on the solar disk as observed at the mid-point of the dataset, with a spatial sampling of \\(0\\fdg 6\\) pixel\\({}^{-1}\\), covering a span of 120\\({}^{\\circ}\\) along the equator as well as the central meridian. This data set is then filtered in the Fourier domain to keep only the oscillation modes that have travel distances in agreement with the distances listed above. Corresponding pixels in the annuli on both sides of the target point are selected, and the cross-covariances with both positive and negative travel time lags are computed. Such computations for all distances shown in the white boxes of Figure 1\\(c\\) are repeated. Then, both time lags are combined and all cross-covariances obtained from different distances after appropriate shifts in time that are obtained from theoretical estimates are also combined. The final cross-covariance is fitted with a Gabor wavelet function (Kosovichev & Duvall 1996), and an acoustic phase travel time is obtained. This procedure is repeated for the double-double and single-triple combinations in the four-skip scheme, and for the double-triple combination for total distances both shorter and longer than 360\\({}^{\\circ}\\) in the five-skip scheme, separately. The far-side image is a display of the measured acoustic travel times after a Gaussian smoothing with FWHM of 2\\(\\fdg\\)0.
## 3 Results
NOAA AR0484, AR0486 and AR0488 are selected to test this time-distance far-side imaging technique. Figure 3 shows the MDI synoptic magnetic field chart when these active regions were on the near-side of the Sun before they rotated into the far-side. AR0488 was still growing when the magnetic field data to make this synoptic map were taken.
Selected far-side images of these three active regions, obtained from separate four- and five-skip measurement schemes, and a combination of both measurements, are shown in the top three rows of Figure 4. In each image, the near-side magnetic field map is combined with the far-side acoustic travel time map, and the combined map is displayed based on Carrington longitudes. Standard deviations, \\(\\sigma\\), are 4.1 sec and 4.5 sec for four-skip and five-skip far-side maps, respectively, and after combination of both measurement schemes, \\(\\sigma\\) falls to 3.3 sec. In these maps, the three interested active regions are mostly visible in all four-skip, five-skip and combined results, except when the regions fall into the uncovered areas of five-skip measurement.
The bottom row of Figure 4 presents the same map as the third row but highlighting active regions by Gaussian smoothing the unsigned near-side magnetic field, and displaying the far-side travel time map with a threshold of \\(-3.5\\sigma\\) to \\(-2.0\\sigma\\), i.e., \\(-11.5\\) to \\(-6.5\\) sec. It is clear that the images combining four- and five-skip results are quite clean of spurious signals,although unidentified features still exist. It is also noteworthy that the far-side images clearly show these active regions when part of the regions were in the far-side and part in the near-side, like AR0486 and AR0488 in the fourth row and third column of Figure 4.
Both four- and five-skip far-side acoustic travel times are displayed after a mean travel time background is removed. It turns out that the background mean travel time depends on its angular distance from the antipode of the solar disk center. Figure 5 presents the variation of background mean travel times at different locations for both measurement schemes after the mean value of the background is removed. Basically, the mean acoustic travel times vary with a magnitude of \\(\\sim\\)4 sec for four-skip measurement, and a magnitude of \\(\\sim\\)9 sec for five-skip measurement. The measured mean acoustic travel times are shorter for the four-skip scheme, but longer for the five-skip scheme when the targeted areas are near the limb and near the antipode of the solar disk center. These variations are unlikely physical, yet it is unclear why there are such variations in the measurement. The rms of travel times can be used to assess the quality of far-side images. Also shown in Figure 5 is travel time rms variation with distance to the far-side center. It shows that double-double scheme has the lowest rms near the far-side center, while the single-triple scheme has the lowest rms near the far-side limb. For the five-skip measurement, it has the lowest rms when at and roughly 30\\({}^{\\circ}\\) from the far-side center, and the rms increases substantially close to the limb.
It is interesting to make a statistical study investigating how accurate the far-side active region imaging is. Since sometimes active regions change fast, grow or decay unexpectedly, it is quite impossible to make a precise assessment on the accuracy of far-side imaging. Therefore, the results given below can only be regarded as a reference. The far-side images of the whole year of 2001 were computed at 12-hour intervals, and a total of 730 images were obtained and displayed in the same way as the bottom row of Figure 4. Typically, a solar area is located on the far-side of the Sun for about 13.5 days, or 27 far-side images. Far-side images are often noisy, and even for a large active region, it is quite unlikely for it to be unambiguously visible in each of 27 images. Also, active regions on the far-side grow and decay as well. For these reasons, if an area is visible as dark in our figures 6 times (or on 3 days) on the other side, where 6 is an arbitrary number, that area is regarded as an active region. Based on this assumption, it is found that for 59 active regions that rotate into the near-side from the east limb, 53 (or 89.8%) are able to be detected far-side, and for 63 active regions that rotate into the far-side from the west limb, 56 (or 89.0%) are detectable on the far-side. Among these regions, 33 are actually detected both before and after the near-side appearance. And, from the other perspective, for 61 regions that are detected by far-side imaging rotating into the far-side from the west limb, 55 (or 90.1%) are visible near the west limb of the near-side; for 53 active regions that are detected by far-side imaging rotating out of the far-side into the near-side, 49 (or 92.4%) are visible near the east limb of the near-side.
Among these regions, 28 are visible on the near-side before and after the region is located in the far-side. These statistics do not count all active regions that appeared in that year, but only those that have a diameter roughly larger than 8\\(\\fdg\\)0 after projected into a spatial resolution of solar disk center and viewed in a figure like the fourth row of Figure 4, when these regions are near either limb of of the Sun.
## 4 Discussion
By combining four-skip and five-skip measurement schemes, I have successfully made time-distance far-side images of the Sun with a fast computation speed. The combination significantly enhances the signal-to-noise ratio of far-side acoustic travel times over either four- or five-skip measurement, thus making the composite far-side active region map much cleaner. It helps to remove most, but not all of the spurious features that signify active regions. On the other hand, the combination of both maps also remove some small active regions that can otherwise be seen in one map or the other.
Time-distance far-side imaging provides another whole far-side imaging tool in addition to the existing helioseismic holography technique. It gives results that are in reasonable agreement with the holography results, which can be seen at _SOHO_/MDI website, by visual comparison, yet detailed comparisons are not done. It is intriguing to calibrate the measured far-side acoustic travel times into magnetic field strength, and some efforts have been taken using holography results (Gonzalez Hernandez et al., 2007). However, there are some apparent difficulties to carry out the calibration, because the far-side images of active regions are often relatively noisy, and sizes and shapes of those regions change from image to image. In addition, the magnetic field strengths of the far-side active regions are also unknown, although the full sphere magnetic field made by use of flux dispersal model (Schrijver & DeRosa, 2003) may help in this. Perhaps eventually, a numerical simulation of solar oscillations in the whole Sun is needed to determine a satisfactory calibration.
It is notable that the five-skip measurement scheme has a large fraction of overlapping areas as the four-skip measurement in the annulus of one side, and has one more skip on the other side (see Figure 2), but results from both measurement schemes have quite uncorrelated noises (see Figure 4). It is unlikely that the noise differences of two schemes are caused while waves travel merely one more skip, 1/5 of the total distance, however, it may imply noises come from data or the filtering of geometrically distorted data.
The holography far-side images show strong acoustic travel time variation from the far-side center to limb, with an order of 70 sec or so in their four-skip measurement scheme (P.
Scherrer, private communication), and this is believed (C. Lindsey, private communication) to be connected with the \"ghost signature\" described by Lindsey & Braun (2004). Although such a travel time variation trend also exists in time-distance far-side images, the variation magnitude is merely 4 sec in four-skip scheme, substantially smaller than the holography trend. The variation in the five-skip scheme measurement is about twice of that in the four-skip scheme. What causes such travel time variations in our measurement, though small, is worth further investigation.
With the availability of both time-distance and helioseismic holography far-side images, it is more robust to monitor activities on the far-side of the Sun, and we can forecast the appearance of large active regions rotating into our view from the far-side with more confidence. Furthermore, the success in far-side imaging provides us some experiences and confidence in analyzing low- and medium-\\(l\\) mode oscillations by use of local helioseismology techniques, and this will greatly help us in analyzing solar deeper interiors and polar areas by use of these techniques.
I thank Tom Duvall and Sasha Kosovichev for reading through the manuscript and giving valuable comments to improve this paper, as well as their suggestions while developing the code. I am deeply indebted to Charlie Lindsey and the referee, Doug Braun, for thoroughly studying my manuscript and giving numerous constructive comments. I also thank Phil Scherrer for encouraging me to carry out this work. _SOHO_ is a project of international cooperation between ESA and NASA.
## References
* Braun, D. C., & Lindsey, C. 2000, Sol. Phys., 192, 307
* Braun, D. C., & Lindsey, C. 2001, ApJ, 560, L189
* Duvall, T. L., Jr., & Kosovichev, A. G. 2001, in IAU Symp. 203, Recent Insights into the Physics of the Sun and Heliosphere: Highlights from _SOHO_ and Other Space Missions, ed. P. Brekke, B. Fleck, & J. B. Gurman (San Francisco: ASP), 159
* Duvall, T. L., Jr., Kosovichev, A. G., & Scherrer, P. H. 2000, Bull. Am. Astr. Soc., 32, 837
* Duvall, T. L., Jr., et al. 1997, Sol. Phys., 170, 63
* Fan, Y., Braun, D. C., & Chou, D.-Y. 1995, ApJ, 451, 877
* Gonzalez Hernandez, I., Hill, F., & Lindsey, C. 2007, ApJ, submitted
* Harvey, J. W., et al. 1996, Science, 272, 1284
* Kosovichev, A. G., & Duvall, T. L., Jr. 1996, in Proceedings of SCORe'96 Workshop: Solar Convection and Oscillations and Their Relationship, ed. F. P. Pijpers, J. Christensen-Dalsgaard & C. S. Rosenthal (Dordrecht: Kluwer), 241
* Kosovichev, A. G., Duvall, T. L., Jr., & Scherrer, P. H. 2000, Sol. Phys., 192, 159
* Kosovichev, A. G., et al. 1997, Sol. Phys., 170, 43
* Lindsey, C., & Braun, D. C. 2000a, Sol. Phys., 192, 261
* Lindsey, C., & Braun, D. C. 2000b, Science, 287, 1799
* Lindsey, C., & Braun, D. C. 2004, ApJS, 155, 209
* Lindsey, C., & Braun, D. C. 2005, ApJ, 620, 1107
* Scherrer, P. H., et al. 1995, Sol. Phys., 162, 129
* Schrijver, C. J., & DeRosa, M. L. 2003, Sol. Phys., 212, 165
* Zhao, J., & Kosovichev, A. G. 2006, ApJ, 643, 1317

Figure 1: Power spectrum diagram (\(a\)) computed from a 1024-min MDI medium-\(l\) data set, time-distance diagram (\(b\)) computed using the whole power spectrum of the same data set, and time-distance diagram (\(c\)) computed using only the oscillations that have frequency and \(l\) included in the white quadrangle as indicated in (\(a\)). The white dashed lines in both (\(b\)) and (\(c\)) are theoretical time-distance relationships based on the acoustic ray approximation, with the lower '\(<\)'-like curve as the fourth skip, and the upper '\(<\)'-like curve as the fifth skip. The lower and upper branch of each '\(<\)'-like curve represents acoustic wave propagation distances shorter and longer than 360\({}^{\circ}\), respectively. The white boxes in (\(c\)) delimit the acoustic travel distances and times used for far-side imaging.
Figure 2: Sketches for four-skip measurement schemes, which includes (\\(a\\)) double-double skip combination when the target point is near the central area of the far-side and two sets of double-skip rays are located on both sides of the target point, and (\\(b\\)) single-triple skip combination when the target point is near the limb or polar area of the far-side and one set of single-skip and one set of triple-skip rays are located on either side of the target; and, five-skip measurement schemes (\\(c\\)) when one set of double-skip and one set of triple-skip rays are located on the either side of the target point.
Figure 3: MDI magnetic field synoptic chart for Carrington rotation 2009, taken from October 23 to November 19, 2003, and made by use of observations near the central meridian only. Note that AR0484 is divided at the longitude of 0\\({}^{\\circ}\\) and 360\\({}^{\\circ}\\).
Figure 4: Results of time-distance far-side active region imaging, obtained from four-skip (_first row_), five-skip (_second row_), and combination of four- and five-skip measurements (_third and bottom row_). From the left to the right column, images were obtained at 00:00UT of November 6, 12:00UT of November 13, and 12:00UT of November 18, 2003, respectively. The given observation time for far-side image is the middle time of its 2048-min observational period. Each black and white image displays a combination of the near-side MDI magnetic field map and a time-distance far-side acoustic travel time map. The boundaries between the far-side and near-side are not vertical because of the solar B-angle adjustment. Magnetic field is displayed with a range of \\(-150\\) to \\(150\\) Gs, and the acoustic travel time ranging from \\(-12\\) to \\(12\\) sec. Blank regions in five-skip images indicate areas that cannot be covered by this measurement. Color images display the near-side map with a range of \\(40\\) to \\(150\\) Gs after a Gaussian smoothing of unsigned magnetic field with a same FWHM that is applied to travel time maps, and display the far-side map with a range of \\(-3.5\\sigma\\) to \\(-2.0\\sigma\\) in order to highlight the far-side active regions that are of our interest. To better distinguish far-side and near-side images, different color contrast is used.
Figure 5: Background travel time variations (_black curves_) and travel time rms variations (_grey curves_) as functions of angular distance from the antipode of the solar disk center for different combinations of acoustic wave bounces. Scales for the grey curves are marked on the right hand side of the figure.

It is of great importance to monitor large solar active regions on the far-side of the Sun for space weather forecasting, in particular, to predict their appearance before they rotate into our view from the solar east limb. Local helioseismology techniques, including helioseismic holography and time-distance, have successfully imaged solar far-side active regions. In this Letter, we further explore the possibility of imaging, and improving the image quality of, solar far-side active regions by use of time-distance helioseismology. In addition to the previously used scheme with four acoustic signal skips, a five-skip scheme is also included in this newly developed technique. The combination of both four- and five-skip far-side images significantly enhances the signal-to-noise ratio in the far-side images, and reduces spurious signals. The accuracy of the far-side active region imaging is also assessed using one whole year's solar observations.
Keywords: Sun: helioseismology -- Sun: oscillations -- sunspots
1990-present
1: Physical and quasi-physical models
A.L. Sullivan
## Introduction
### History
The field of wildland fire behaviour modelling has been active since the 1920s. The work of Hawley (1926) and Gisborne (1927, 1929) pioneered the notion that understanding of the phenomenon of wildland fire and the prediction of the danger posed by a fire could be gained through measurement and observation and theoretical considerations of the factors that might influence such fires. Despite the fact that the field has suffered from a lack ofreadily achievable goals and consistent funding (Williams, 1982), the pioneering work by those most closely affected by wildland fire-the foresters and other land managers-has led to a broad framework of understanding of wildland fire behaviour that has enabled the construction of operational models of fire behaviour and spread that, while not perfect for every situation, at least get the job done.
In the late 1930s and early 1940s, Curry and Fons (1938, 1940), and Fons (1946) brought a rigorous physical approach to the measurement and modelling of the behaviour of wildland fires. In the early 1950s, formal research initiatives by Federal and State Government forestry agencies commenced concerted efforts to build fire danger rating systems that embodied a fire behaviour prediction component in order to better prepare for fire events. In the US this was through the Federal US Forest Service and through State agencies; in Canada this was the Canadian Forest Service; in Australia this was through the Commonwealth Forestry and Timber Bureau in conjunction with various state authorities.
In the 1950s and 60s, spurred on by incentives from defense budgets, considerable effort was expended exploring the effects of mass bombing (such as occurred in Dresden or Hamburg, Germany, during World War Two) and the collateral incentiary effects of nuclear weapons (Lawson, 1954; Rogers and Miller, 1963). This research effort was closely related to large forest or conflagration fires and had the spin-off of bringing additional research capacity into the field (Chandler et al., 1963). This resulted in an unprecedented boom in the research of wildland fires. The late 1960s saw a veritable explosion of research publications connected to wildland fire that dominated the fields of combustion and flame research for some years.
The 1970s saw a dwindling of research interest from defense organisations and by the 1980s, research into the behaviour of wildland fires returned to those that had direct interest in the understanding and control of such phenomena. By the 1980s, it was of occasional interest to journeyman mathematicians and physicists on their way to bigger, and more achievable, goals.
An increase in the capabilities of remote sensing, geographical information systems and computing power during the 1990s resulted in a revival in the interest of fire behaviour modelling, this time applied to the prediction of fire spread across the landscape.
## Background
This series of review papers endeavours to comprehensively and critically review the extensive range of modelling work that has been conducted in recent years. The range of methods that have been undertaken over the years represents a continuous spectrum of possible modelling (Karplus, 1977), ranging from the purely physical (those that are based on fundamental understanding of the physics and chemistry involved in the combustion of biomass fuel and behaviour of a wildland fire) through to the purely empirical (those that have been based on phenomenological description or statistical regression of observed fire behaviour). In between is a continuous model of approaches from one end of the spectrum or the other. Weber (1991a) in his comprehensive review of physical wildland fire modelling proposed a system by which models were described as physical, empirical or statistical, depending on whether they account for different modes of heat transfer, make no distinction between different heat transfer modes, or involve no physics at all. Pastor et al.
(2003) proposed descriptions of theoretical, empirical and semi-empirical, again depending on whether the model was based on purely physical understanding, of a statistical nature with no physical understanding, or a combination of both. Grishin (1997) divided models into two classes, deterministic or stochastic-statistical. However, these schemes are rather limited given the combination of possible approaches and, given that describing a model as semi-empirical or semi-physical is a 'glass half-full or half-empty' subjective issue, a more comprehensive and complete convention was required.
Thus, this review series is divided into three broad categories: physical and quasi-physical models; empirical and quasi-empirical models; and simulation and mathematical analogue models. In this context, a physical model is one that attempts to represent both the physics and chemistry of fire spread; a quasi-physical model attempts to represent only the physics. An empirical model is one that contains no physical basis at all (generally only statistical in nature), a quasi-empirical model is one that uses some form of physical framework upon which to base the statistical modelling chosen. Empirical and quasi-empirical models are further subdivided into field-based and laboratory-based. Simulation models are those that implement the preceding types of models in a simulation rather than modelling context. Mathematical analogue models are those that utilise a mathematical precept rather than a physical one for the modelling of the spread of wildland fire.
Since 1990, there has been rapid development in the field of spatial data analysis, e.g. geographic information systems and remote sensing. Following this, and the fact that there has not been a comprehensive critical review of fire behaviour modelling since Weber (1991a), I have limited this review to works published since 1990. However, as much of the work that will be discussed derives or continues from work carried out prior to 1990, such work will be included much less comprehensively in order to provide context.
### Previous reviews
Many of the reviews that have been published in recent years have been for audiences other than wildland fire researchers and conducted by people without an established background in the field. Indeed, many of the reviews read like purchase notes by people shopping around for the best fire spread model to implement in their part of the world for their particular purpose. Recent reviews (e.g. Perry (1998); Pastor et al. (2003); etc), while endeavouring to be comprehensive, have offered only superficial and cursory inspections of the models presented. Morvan et al. (2004) takes a different line by analysing a much broader spectrum of models in some detail and concludes that no single approach is going to be suitable for all uses.
While the recent reviews provide an overview of the models and approaches that have been undertaken around the world, mention must be made of significant reviews published much earlier that discussed the processes in wildland fire propagation themselves. Foremost is the work of Williams (1982), which comprehensively covers the phenomenology of both wildland and urban fire, the physics and chemistry of combustion, and is recommended reading for the beginner. The earlier work of Emmons (1963, 1966) and Lee (1972) provides a sound background on the advances made during the post-war boom era. Grishin (1997) provides an extensive review of the work conducted in Russia in the 1970s, 80s and 90s.
This particular paper will discuss those models based upon the fundamental principles of the physics and chemistry of wildland fire behaviour. Later papers in the series will discuss those models based upon observation of fire behaviour and upon mathematical analogies to fire spread. As the laws of physics are the same no matter the origin of the modeller, or the location of the model, physical models are essentially based on the same rules and it is only the implementation of those rules that differs in each model. A brief discussion of the fundamentals of wildland fire behaviour covering the chemistry and physics is given, followed by discussions of how these are applied in physical models themselves. This is then followed by a discussion of the quasi-physical models.
## Fundamentals of fire and combustion
Wildland fire is the complicated combination of energy released (in the form of heat) due to chemical reactions (broadly categorised as an oxidation reaction) in the process of combustion and the transport of that energy to surrounding unburnt fuel and the subsequent ignition of said fuel. The former is the domain of chemistry (more specifically, _chemical kinetics_) and occurs on the scale of molecules, and the latter is the domain of physics (more specifically, _heat transfer_ and _fluid mechanics_) and occurs on scales ranging from millimetres up to kilometres (Table 1). It is the interaction of these processes over the wide range of temporal and spatial scales that makes the modelling of wildland fire behaviour a not inconsiderable problem.
Grishin (1997, pg. 81) proposed five relative independent stages in the development of a deterministic physical model of wildland fire behaviour:
1. Physical analysis of the phenomenon of wildland fire spread; isolation of the mechanism governing the transfer of energy from the fire front into the environment; definition of the medium type, and creation of a physical model of the phenomenon.
2. Determination of the reaction and thermophysical properties of the medium, the transfer coefficients and structural parameters of the medium, and deduction of the basic system of equations with corresponding additional (boundary and initial) conditions.
3. Selection of a method of numerical solution of the problem, and derivation of differential equations approximating the basic system of equations.
4. Programming; test check of the program; evaluation of the accuracy of the difference scheme; numerical solution of the system of equations.
5. Testing to see how well the derived results comply with the real system; their physical interpretation; development of new technical suggestions for ways of fighting wildland fire.
Clearly, stages one and two represent considerable hurdles and sources of contention for the best method in which to represent the phenomenon of wildland fire. This section aims to provide a background understanding of the chemistry and physics involved in wildland fire. However, it must be noted that even though these fields have made great advances in the understanding of what is going on in these processes, research is still very active and sometimes cause for contention (di Blasi, 1998).
## Chemistry of combustion
The chemistry of combustion involved in wildland fire is necessarily a complex and complicated matter. This is in part due to the complicated nature of the fuel itself but also in the range of conditions over which combustion can occur which dictates the evolution of the combustion process.
### Fuel chemistry
Wildland fuel is composed of live and dead plant material consisting primarily of leaf litter, twigs, bark, wood, grasses, and shrubs. (Beall and Eickner, 1970), with a considerable range of physical structures, chemical components, age and level of biological decomposition. The primary chemical constituent of biomass fuel is cellulose (of chemical form (C\\({}_{6}\\)O\\({}_{5}\\)H\\({}_{10}\\))\\({}_{n}\\)), which is a polymer of a glucosan (variant of glucose) monomer, C\\({}_{6}\\)O\\({}_{6}\\)H\\({}_{12}\\)(Shafizadeh, 1982; Williams, 1982). Cellulose is a linear, unbranched polysaccharide of \\(\\simeq\\) 10,000 D-glucose units in \\(\\beta(1,4)\\) linkage3. The parallel chains are held together by hydrogen bonds, a non-covalent lineage in which surplus electron density on hydroxyl group oxygens is distributed to hydrogens with partial positive charge on hydroxyl groups of adjacent residues (Ball et al., 1999).
Footnote 3: The D- prefix refers to one of two configurations around the chiral centre of carbon-5. The \\(\\beta(1,4)\\) refers to the configuration of the covalent link between adjacent glucose units, often called a glycosidic bond. There are two possible geometries around C-1 of the pyranose (or 5-membered) ring: in the \\(\\beta\\) anemover the hydrogen on C-1 sits on the opposite side of the ring to that on C-2; in the \\(\\alpha\\) anemover it is on the same side. The glycosidic bond in cellulose is between C-1 of one \\(\\beta\\) D-glucose residue and the hydroxyl group on C-4 of the next unit (see Figure 1).
Other major chemical components of wildland fuel include hemicelluloses (copolymers of glucosan and a variety of other possibly monomers) and lignin (a phenolic compound) in varying amounts, depending upon the species, cell type and plant part (See Table 2). Minerals, water, salts and other extractives and inorganics also exist in these fuels. The cellulose is the same in all types of biomass, except for the degree of polymerisation (i.e. the number of monomer units per polymer). Solid fuel is often referred to as a _condensed phase_ fuel in the combustion literature.
Cellulose is an extraordinarily stable polysaccharide due to its structure: insoluble, relatively resistant to acid and base hydrolysis, and inaccessible to all hydrolytic enzymes except those from a few biological sources. Cellulose is the most widely studied substance in the field of wood and biomass combustion; by comparison, few studies have been carried out on the combustion of hemicelluloses or lignin (di Blasi, 1998), due perhaps to the relative thermal instability of these compounds. The degradation of biomass is generally considered as the sum of the contribution of its main components (cellulose, hemicelluloses and lignin) but the extrapolation of the thermal behaviour of the main biomass components to describe the kinetics of complex fuels is only a rough approximation (di Blasi, 1998). The presence of inorganic matter in the biomass structure can act as a catalyst or an inhibitor for the degradation of cellulose; differences in the purity and physical properties of cellulose and hemicelluloses and lignin also play an important role in the degradation process (di Blasi, 1998).
### Combustion reactions
Chemical reactions can be characterised by the amount of energy required to initiate a reaction, called the activation energy, \\(E_{a}\\). This energy controls the rate of reaction through an exponential relation, which can be derived from first principles, known as the Arrhenius law:
\\[k=A^{(\\frac{-E_{a}}{k\\textit{T}})} \\tag{1}\\]
where \\(k\\) is the reaction rate constant, \\(A\\) is a pre-exponential factor related to collision rate in Eyring theory, \\(R\\) is the gas constant and \\(T\\) is the absolute temperature of the reactants. Thus, the rate constant, \\(k\\), is a function of the temperature of the reactants; generally the higher the temperature, the faster the reaction will occur.
### Solid phase reactions-competing processes
When heat is applied to cellulose, the cellulose undergoes a reaction called thermal degradation. In the absence of oxygen, this degradation is called _pyrolysis_, even though in the literature the term pyrolysis is often used incorrectly to describe any form of thermal degradation (Babrauskas, 2003). Cellulose can undergo two forms of competing degradation reaction: volatilsation and char formation (Figure 2). While each of these reactions involves the depolymerisation of the cellulose (described as the 'unzipping' of the polymer into shorter strands (Williams, 1982, 1985)), each has a different activation energy and promoting conditions, and result in different products and heat release.
_Volatilisation_ generally occurs in conditions of low or nil moisture and involves thermolysis of glycosidic linkages, cyclisation and the release of free levoglucosan via thermolysis at the next linkage in the chain (Ball et al., 2004). This reaction is endothermic (requiring about 300 J g\\({}^{-1}\\)(Ball et al., 1999)) and has a relatively high activation energy (about 240 kJ mol\\({}^{-1}\\)(di Blasi, 1998)). The product, levoglucosan (sometimes described as 'tar' (Williams, 1982)), is highly unstable and forms the basis of a wide range of subsequent species following further thermal degradation that readily oxidise in the process of combustion, resulting in a multitude of intermediate and final, gas and solid phase, products and heat.
_Char formation_, on the other hand, occurs when thermal degradation happens in the presence of moisture or low rates of heating. In this competing reaction pathway, the nucleophile that bonds to the thermolysed carbo-cation at C-1 is a water molecule. The initial product is a reducing end which has 'lost the opportunity' to volatilise. Instead, further heating of such fragments dehydrates, polyunsaturated, decarboxylates, and cross-links the carbon skeleton of the structure, ultimately producing char. This reaction has a relatively low activation energy (about 150 kJ mol\\({}^{-1}\\)(di Blasi, 1998)) and is exothermic (releasing about 1 kJ g\\({}^{-1}\\)).
Thus, the thermal degradation of cellulose results in two competing pathways controlled by thermal and chemical feedbacks such that if heating rates are low and/or moisture is present, the charring pathway is promoted. If sufficient energy is released in thisprocess (or additional heat is added) or moisture evaporated, then cyclisation and the release of free levoglucosan from thermolysed, positively charged chain fragments, becomes statistically favoured over nucleophilic addition of water and char production. If the subsequent combustion of the levoglucosan and products releases enough energy then this process becomes self-supporting. However, if the heat released is convected away from the reactants or moisture is trapped, then the char-formation path becomes statistically favoured. These two competing pathways will oscillate until conditions become totally self-supporting or thermal degradation stops.
### Gas phase reactions
Gas phase combustion of levoglucosan and its derivative products is highly complex and chaotic. The basic chemical reaction is assumed to be:
\\[\\mathrm{C_{6}O_{5}H_{10}+6O_{2}\\to 6CO_{2}+5H_{2}O}, \\tag{2}\\]
however, this assumes that all intermediate reactions, consisting of oxidisation reactions of derivative products mostly, are complete. But the number of pathways that such reactions can take is quite large, and not all paths will result in completion to water and carbon dioxide.
As an example of a gas-phase hydrocarbon reaction, Williams (1982) gives a non-exhaustive list of 14 possible pathways for the combustion of CH\\({}_{4}\\), one of the possible intermediates of the thermal degradation of levoglucosan, to H\\({}_{2}\\)O and CO\\({}_{2}\\). Intermediate species include CH\\({}_{3}\\), H\\({}_{2}\\)CO, HCO, CO, OH and H\\({}_{2}\\).
At any stage in the reaction process, any pathway may stop (through loss of energy or reactants) and its products be advected away to take no further part in combustion. It is these partially combusted components that form smoke. The faster and more turbulent the reaction, the more likely that reaction components will be removed prior to complete combustion, hence the darker and thicker the smoke from a headfire, as opposed to the lighter, thinner smoke from a backing fire (Cheney and Sullivan, 1997).
Because the main source of heat into the combustion process comes from the exothermic reaction of the gas-phase products of levoglucosan and these products are buoyant and generally convected away from the solid fuel, the transport of the heat generated from these reactions is extremely complex and brings us to the physics of combustion.
### Physics of combustion
The physics involved in the combustion of wildland fuel and the behaviour of wildland fires is, like the chemistry, complicated and highly dependent on the conditions in which a fire is burning. The primary physical process in a wildland fire is that of heat transfer. Williams (1982) gives nine possible mechanisms for the transfer of heat from a fire:
1. Diffusion of radicals2. Heat conduction through a gas
3. Heat conduction through condensed materials
4. Convection through a gas
5. Liquid convection
6. Fuel deformation
7. Radiation from flames
8. Radiation from burning fuel surfaces
9. Firebrand transport.
1, 2 and 3 could be classed as diffusion at the molecular level. 4 and 5 are convection (although the presence of liquid phase fuel is extremely rare) but can be generalised to advection to include any transfer of heat through the motion of gases. 7 and 8 are radiation. 6 and 9 could be classed as solid fuel transport. This roughly translates to the three generally accepted forms of heat transfer (conduction, convection and radiation) plus solid fuel transport, which, as Emmons (1966) points out, is not trivial or unimportant in wildland fires.
The primary physical processes driving the transfer of heat in a wildland fire are that of advection and radiation. In low wind conditions, the dominating process is that of radiation (Weber, 1989). In conditions where wind is not insignificant, it is advection that dominates (Grishin et al., 1984). However, it is not reasonable to assume one works without the other and thus both mechanisms must be considered.
In attempting to represent the role of advection in wildland fire spread, the application of fluid dynamics is of prime importance. This assumes that the gas flow can be considered as a continuous medium or fluid.
#### Advection or Fluid transport
Fluid dynamics is a large area of active research and the basic outlines of the principles are given here. The interested reader is directed to a considerable number of texts on the subject for more in-depth discussion (e.g. Batchelor (1967); Turner (1973)).
The key aspect of fluid dynamics and its application to understanding the motion of gases is the notion of continuity. Here, the molecules or particles of a gas are considered to be _continuous_ and thus behave as a fluid rather than a collection of particles. Another key aspect of fluid mechanics (and physics in general) is the fundamental notion of the conservation of quantities which is encompassed in the fluidised _equations of motion_.
A description of the rate of change of the density of particles in relation to the velocity of the particles and distribution of particles provides a method of describing the continuity of the particles. By taking the zeroth velocity moment of the density distribution (multiplying by \\({\\bf u}^{k}\\) (where \\(k=0\\), in this case) and integrating with respect to \\({\\bf u}\\)), the equation of continuity is obtained. If the particles are considered to have mass, then the continuity equation also describes the conservation of mass:
\\[\\frac{\\partial\\rho}{\\partial t}+\
abla.(\\rho\\mathbf{u})=0, \\tag{3}\\]
where \\(\\rho\\) is density, \\(t\\) is time, and \\(\\mathbf{u}\\) is the fluid velocity (with vector components \\(u\\), \\(v\\), and \\(w\\)) and \\(\
abla.\\) is the Laplacian or gradient operator (i.e. in three dimensions \\(\\mathbf{i}\\frac{\\partial}{\\partial x}+\\mathbf{j}\\frac{\\partial}{\\partial y}+ \\mathbf{k}\\frac{\\partial}{\\partial z}\\)). This is called the _fluidised_ form of the continuity equation and is presented in the form of Euler's equations as a partial differential equation.
However, in order to solve this equation, the evolution of \\(\\mathbf{u}\\) is needed. This incompleteness is known as the closure problem and is a characteristic of all the fluid equations of motion. The next order velocity moment (\\(k=1\\)) can be taken and the evolution of the velocity field determined. This results in an equation for the force balance of the fluid or the _conservation of momentum_ equation:
\\[\\frac{\\partial\\rho\\mathbf{u}}{\\partial t}+\
abla.(\\rho\\mathbf{u})\\mathbf{u}+ \
abla p=0, \\tag{4}\\]
where \\(p\\) is pressure. However, the evolution of \\(p\\) is then needed to solve this equation. This can be determined by taking the second velocity moment (k=2) which provides an equation for the conservation of energy, but it itself needs a further, incomplete, equation to provide a solution. One can either continue determining higher order moments _ad nauseum_ in order to provide a suitably approximate solution (as the series of equations can never be truly closed) or, as is more frequently done, utilise an equation of state to provide the closure mechanism. In fluid dynamics, the equation of state is generally that of the ideal gas law (e.g. \\(pV=nRT\\)). The above equations are in the form of the Euler equations and represent a simplified (inviscid) form of the Navier-Stokes equations.
#### Buoyancy, convection and turbulence
The action of heat release from the chemical reaction within the combustion zone results in heated gases, both in the form of combustion products as well as ambient air heated by, or entrained into, the combustion products. The reduction in density caused by the heating of the gas increases the buoyancy of the gas and results in the gas rising as convection, which can then lead to turbulence in the flow. Turbulence acts over the entire range of scales in the atmosphere, from the fine scale of flame to the atmospheric boundary layer, and acts to mix heated gases with ambient air and to mix the heated gases with unburnt solid-phase fuels. It also acts to increase flame immersion of fuel. The action of turbulence also affects the transport of solid-phase combustion, such as that of firebrands, resulting in spotfires downwind of the main burning front.
Suitably formulated Navier-Stokes equations can be used to incorporate the effects of buoyancy, convection and turbulence. However, these components of the flow can be investigated individually utilising particular approximations, such as Boussinesq's concept of eddy viscosity for the modelling of turbulence, or buoyancy as a renormalised variable for modelling the effects of buoyancy. Specific methods for numerically solving turbulence within the realm of fluid dynamics, including renormalisation group theory (RNG) and large eddy simulation (LES), have been developed. Convective flows are generally solved within the broader context of the advection flow with a prescribed heat source.
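As a concrete, hedged example of the eddy-viscosity idea, the following sketch computes a Smagorinsky-type subgrid viscosity field, \(\nu_t=(C_s\Delta)^2|S|\), on a resolved two-dimensional velocity field, as is done in LES closures. The Smagorinsky constant, grid spacing and the (random) stand-in velocity field are assumptions for illustration only.

```python
import numpy as np

Cs, dx = 0.17, 0.5                  # Smagorinsky constant, grid spacing (m)
rng = np.random.default_rng(0)
u = rng.standard_normal((64, 64))   # stand-in resolved velocity (m/s)
v = rng.standard_normal((64, 64))

dudy, dudx = np.gradient(u, dx)     # axis 0 = y, axis 1 = x
dvdy, dvdx = np.gradient(v, dx)

# resolved strain-rate tensor and its magnitude |S| = sqrt(2 S_ij S_ij)
S11, S22 = dudx, dvdy
S12 = 0.5 * (dudy + dvdx)
S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))

nu_t = (Cs * dx) ** 2 * S_mag       # eddy viscosity field (m^2 s^-1)
print("mean eddy viscosity:", nu_t.mean())
```

The eddy viscosity then augments the molecular viscosity in the momentum equation, which is how the unresolved turbulent mixing of heat and momentum is represented on a grid too coarse to resolve it directly.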
#### Radiant heat transfer
Radiant heat is a form of electromagnetic radiation emitted from a hot source and is in the infra-red wavelength band. In flame, the primary source of the radiation is thermal emission from carbon particles, generally in the form of soot (Gaydon and Wolfhard, 1960), although band emission from electronic transitions in molecules also contributes to the overall radiation from a fire.
The general method of modelling radiant heat transfer is through the use of a radiant transfer equation (RTE) of which the simplest is that of the Stefan-Boltzmann equation:
\\[q=\\sigma T^{4}, \\tag{5}\\]
where \\(\\sigma\\) is the Stefan-Boltzmann constant (\\(5.67\\times 10^{-8}\\)J K\\({}^{-4}\\) m\\({}^{-2}\\) s\\({}^{-1}\\) and \\(T\\) is the radiating temperature of the surface (K). While it is possible to approximate the radiant heat flux from a fire as a surface emission from the flame face, this does not fully capture the volumetric emission nature of the flame (Sullivan et al., 2003) and can lead to inaccuracies in flux estimations if precise flame geometry (i.e. view factor), temperature and emmissivity equivalents are not known.
More complex solutions of the RTE, such as treating the flame as a volume of radiation emitting, scattering and absorbing media, can improve the prediction of radiant heat but are necessarily more computationally intensive; varying levels of approximation (both physical and numerical) are frequently employed to improve the computational efficiency. The Discrete Transfer Radiation Model (DTRM) solves the radiative transfer equation throughout a domain by a method of ray tracing from surface elements on its boundaries and thus does not require information about the radiating volume itself. The Discrete Ordinates Method (DOM) divides the volume into discrete volumes for which the full RTE is solved at each instance and the sum of radiation along all paths from an observer calculated. The Differential Approximation (or P1 method) solves the RTE as a diffusion equation, which includes the effect of scattering but assumes the medium is optically thick. Knowledge of the medium's absorption, scattering, refractive index, path length and local temperature is required for many of these solutions. Descriptions of methods for solving these forms of the RTE are given in texts on radiant heat transfer (e.g. Drysdale (1985)). Sacadura (2005) and Goldstein et al. (2006) review the use of radiative heat transfer models in a wide range of applications.
Transmission of thermal radiation can be affected by smoke or by band absorption by certain components of the atmosphere (e.g. CO\({}_{2}\), H\({}_{2}\)O).
#### Firebrands (solid fuel transport)
Determination of the transport of solid fuel (i.e. firebrands), which leads to the initiation of spotfires downwind of the main fire front, is highly probabilistic (Ellis, 2000) and not readily amenable to a purely deterministic description. This is due in part to the wide variation in firebrand sources and ignitions and the particular flight paths any firebrand might take. The maximum distance a firebrand may be carried is determined by the intensity of the fire and the updraught velocity of the convection, the height at which the firebrand was sourced, and the wind profile aloft (Albini, 1979; Ellis, 2000). Whether or not the firebrand lands alight and starts a spotfire depends upon the nature of the firebrand, how it was ignited, its combustion properties (including flaming lifetime) and the ignition properties of the fuel in which it lands (e.g. moisture content, bulk density, etc.) (Plucinski, 2003).
#### Atmospheric interactions
The transport of the gas phase of the combustion products interacts with the atmosphere around it, transferring heat and energy, through convection and turbulence. The condition of the atmosphere, particularly the lapse rate, or the ease with which heated parcels of air rise within the atmosphere, controls the impact that buoyancy of the heated air from the combustion zone has on the atmosphere and the fire.
Changes in the ambient meteorological conditions, such as changes in wind speed and direction, moisture, temperature, lapse rates, etc, both at the surface and higher in the atmosphere, can have a significant impact on the state of the fuel (moisture content), the behaviour of a fire, its growth, and, in turn, the impact that fire can have on the atmosphere itself.
#### Topographic interactions
The topography in which a fire is burning also plays a part in the way in which energy is transferred to unburnt fuel and the ambient atmosphere. It has long been recognised that fires burn faster upslope than they do down, even with a downslope wind. This is thought to be due to increased transfer of radiant heat caused by the change in the geometry between the fuel on the slope and the flame; however, recent work (Wu et al., 2000) suggests that there is also increased advection in these cases.
## Physical models
This section briefly describes each of the physical models that were developed since 1990 (Table 3). Many are based on the same basic principles and differ only in the methodology of implementation or the purpose of use. They are presented in chronological order of first publication. Some have continued development, some have been implemented and tested against observations, others have not. Many are implemented in only one or two dimensions in order to improve computational or analytical feasibility. Where information about the performance of the model on available computing hardware is available, this is given.
### Weber (1991)
Weber's (1991b) model was an attempt to provide the framework necessary to build a physical model of fire spread through wildland fuel, rather than an attempt to actually build one. To that end, Weber highlights several possible approaches but does not give any definitive answer.
Weber begins with a reaction-transport formulation of the conservation of energy equation, which states that the rate of change of enthalpy per unit time is equal to the spatial variation of the flux of energy plus heat generation. He then formulates several components that contribute to the overall flux of energy, including radiation from flames, radiation transfer to fuel through the fuel, advection and diffusion of turbulent eddies. Heat is generated through a chemical reaction that is modelled by an Arrhenius law which includes heat of combustion.
This results in a first-cut model that is one-dimensional in \(x\) plus time. Advection, radiation and reaction components allow the evolution of the fluid velocity to be followed. Solid-phase and gas-phase fuel are treated separately due to different energy absorption characteristics.
In a more realistic version of this model, Weber treats the phase differences more explicitly, producing two coupled equations for the conservation of energy. The coupling comes from the fact that when the solid volatilises it releases flammable gas that then combusts, returning a portion of the released energy back to the solid for further volatilisation.
Weber determines that, in two dimensions, the solution for the simple model is a travelling wave that produces two parametric equations for spatial \(x\) and \(y\) that yield an ellipse whose centre has been shifted. Weber favourably compares this result with that of Anderson et al. (1982), who first formalised the spread of a wildland fire perimeter as that of an expanding ellipse. No performance data are given.
### Aiolos-F (Cinar S.A., Greece)
AIOLOS-F was developed by CINAR S.A., Greece, as a decision support tool for wildland fire behaviour prediction. It is a computational fluid dynamics model that utilises the 3-dimensional form of the conservation laws to couple the combustion of a fuel layer with the atmosphere to model forest fire spread (Croba et al., 1994). It consists of two components: AIOLOS-T, which predicts the local wind field and wind-fire interaction, and AIOLOS-F, which models the fuel combustion.
The gas-phase conservation of mass equation is used to calculate the local wind perturbation potential, the gas-phase conservation of momentum is used to determine the vertical component of viscous flow, and a state equation to predict the air density and pressure change with air temperature (Lymberopoulos et al., 1998).
The combustion model is a 3D model of the evolution of enthalpy from which change in solid-phase temperature is determined. A thermal radiation heat transfer equation provides the radiant heat source term. Fuel combustion is modelled through a 3-dimensional fuel mixture-fraction evolution that is tied to a single Arrhenius Law for the consumption of solid phase fuel. The quantity of fuel consumed by the fire within a time interval is an exponential function of the mixture fraction.
The equations are solved iteratively and in precise order such that the wind field is solved first, the enthalpy, mixture-fraction, and temperature second. These are then used to determine the change in air density which is then fed back into the wind field equations taking into account the change in buoyancy due to the fire. The enthalpy, mixture-fraction and temperature are then updated with the new wind field. This is repeated until a solution converges, then the amount of fuel consumed for that time step is determined and the process continues for the next time step.
Fuel is assumed to be a single layer beneath the lowest atmosphere grid. Fuel is specified from satellite imagery on grids with a resolution in the order of 80 m. No data on calculation time are given, although it is described (Croba et al., 1994; Lymberopoulos et al., 1998) as being faster than real time.
### FIRETEC (Los Alamos National Laboratory, USA)
FIRETEC (Linn, 1997), developed at the Los Alamos National Laboratory, USA, is a coupled multiphase transport/wildland fire model based on the principles of conservation of mass, momentum and energy. It is fully 3-dimensional and in combination with a hydrodynamics model called HIGRAD (Reisner et al., 1998, 2000a,b), which is used to solve equations of high gradient flow such as the motions of the local atmosphere, it employs a fully compressible gas transport formulation to represent the coupled interactions of the combustion, heat transfer and fluid mechanics involved in wildland fire (Linn et al., 2002b).
FIRETEC is described by the author as self-determining, by which it is meant that the model does not use prescribed or empirical relations in order to predict the spread and behaviour of wildland fires, relying solely on the formulations of the physics and chemistry to model the fire behaviour. The model utilises the finite volume method and the notion of a resolved volume to solve numerically its system of equations. It attempts to represent the average behaviour of the gases and solid fuels in the presence of a wildland fire. Many small-scale processes such as convective heat transfer between solids and gases are represented without each process actually being resolved in detail (Linn, 1997; Linn and Harlow, 1998a; Linn et al., 2002a). Fine scale wind patterns around structures smaller than the resolved scale of the model, including individual flames, are not represented explicitly.
The complex combustion reactions of a wildland fire are represented in FIRETEC using a few simplified models, including models for pyrolysis, char burning, hydrocarbon combustion and soot combustion in the presence of oxygen (Linn, 1997). Three idealised limiting cases were used as a basis for the original FIRETEC formulation:
1. gas-gas, with two reactants forming a single final product and no intermediate species.
2. gas-solid, being the burning of char in oxygen.
3. single reactant, being pyrolysis of wood.
However, Linn et al. (2002a) further refined this to a much simplified chemistry model that reduced the combustion to a single solid-gas phase reaction:
\\[N_{f}+N_{O_{2}}\\rightarrow\\,products+heat \\tag{6}\\]
where \\(N_{f,O_{2}}\\) are the stoichiometric coefficients for fuel and oxygen. The equations for the evolution of the solid phase express the conservation of fuel, moisture and energy:
\\[\\frac{\\partial\\rho_{f}}{\\partial t}=-N_{f}F_{f} \\tag{7}\\]
\\[\\frac{\\partial\\rho_{w}}{\\partial t}=-F_{w} \\tag{8}\\]
\\[(c_{p_{f}}\\rho_{f}+c_{p_{w}}\\rho_{w})\\frac{\\partial T_{s}}{ \\partial t} = Q_{rad,s}+ha_{v}(T_{g}-T_{s})-F_{w}(H_{w}+c_{p_{w}}T_{cap})+ \\tag{9}\\] \\[F(\\Theta H_{f}+c_{p_{f}}T_{pyr}N_{f})\\]
where \\(F_{f,W}\\) are the reaction rates for solid fuel and liquid water depletion (i.e. the evaporation rate), \\(\\rho_{f,w}\\) are the solid phase (i.e. fuel and liquid water) density, \\(\\Theta\\) is the fraction of heat released from the solid-gas reaction and deposited back to the solid, \\(c_{p_{f},w}\\) are the specific heats at constant pressure of the fuel and water, \\(T_{s,g}\\) is the temperature of the solid or gas phase, \\(T_{pyr}\\) is the temperature at which the solid fuel begins to pyrolyse, \\(Q_{rad,g}\\) is the net thermal radiation flux to the gas, \\(h\\) is the convective heat transfer coefficient, \\(a_{v}\\) is the ratio of solid fuel surface area to resolved volume, \\(H_{w,f}\\) is the heat energy per unit mass associated with liquid water evaporation or solid-gas reaction (Eq. 6). It is assumed that the rates of exothermic reaction in areas of active burning are limited by the rate at which reactants can be brought together in their correct proportions (i.e. mixing limited). In a later work (Colman and Linn, 2003) a procedure to improve the combustion chemistry used in FIRETEC by utilising a non-local chemistry model in which the formation of char and tar are competing processes (as in for example, Fig. 2) is outlined. No results have been presented yet.
The gas phase equations utilise the forms of the conservation of mass, momentum, energy and species equations (Linn and Cunningham, 2005), similar to those of eqs (3 & 4), except that the conservation of mass is tied to the creation and consumption of solid and gas phase fuel, a turbulent Reynolds stress tensor and coefficient of drag for the solid fuel is included in the momentum equation, and a turbulent diffusion coefficient is included in the energy equation.
A unique aspect of the FIRETEC model is that the variables that occur in the relevant solid and gas phase conservation equations are divided into mean and fluctuating components and ensemble averages of the equations taken. This approach is similar to that used for the modelling of turbulence in flows.
The concept of a critical temperature within the resolved volume is used to initiate combustion, and a probability distribution function based on the mean and fluctuating components of quantities in the resolved volume is used to determine the mean temperature of the volume. Once the mean temperature exceeds the critical temperature, combustion commences and the evolution equations are used to track the solid and gas phase species. The critical temperature is chosen to be 500 K (Linn, 1997).
Turbulence in the flow around the combusting fuel is taken into account as the sum of three separate turbulence spectra corresponding to three cascading spatial scales, _viz._: scale A, the scale of the largest fuel structure (i.e. a tree); scale B, the scale of the distance between fuel elements (i.e. branches); and scale C, the scale of the smallest fuel element (i.e. leaves, needles, etc) (Linn, 1997). In the original work modelling fire spread through a forest type, the characteristic scale lengths, \\(s\\), for each scale were \\(s_{A}=4.0\\) m, \\(s_{B}=2.0\\) m and \\(s_{C}=0.05\\) m. By representing turbulence explicitly like this, the effect of diffusivity in the transfer of heat can be included.
The original version of FIRETEC did not explicitly include the effects of radiation, from either flame or fuel bed, or the absorption of radiation into unburnt fuel, primarily because flames and flame effects were at an unresolved scale within the model. As a result fires failed to propagate in zero wind situations or down slopes. Later revised versions (Bossert et al., 2000; Linn et al., 2001, 2002a) include some form of radiant transfer; however, this has not been formally presented anywhere and Linn et al. (2003) admit to the radiant heat transfer model being 'very crude'.
Because FIRETEC models the conservation of mass, momentum and energy for both the gas and solid phases, it does have the potential, via the probability density function of temperature within a resolved volume, to track the probability fraction of mass in a debris-laden plume above the critical temperature (Linn and Harlow, 1998b) and thus provide a method of determining the occurrence of 'spotting' downwind of the main fire.
Running on a 128-node SGI computer with R10000 processors, a simplified FIRETEC simulation is described as running at 'one to two orders of magnitude slower than realtime' for a reasonable domain size (Hanson et al., 2000).
### Forbes (1997)
Forbes (1997) developed a two-dimensional model of fire spread utilising radiative heat transfer, species consumption and flammable gas production to explain why most fires do not become major problems and why, when they do, they behave erratically. The basis for his model is observations of eucalypt forest fires, which appeared either to burn quiescently or as raging infernos.
The main conceit behind the model is a two-path combustion model in which the solid fuel of eucalypt trees either thermally degrades directly and rapidly in an endothermic reaction, creating flammable fuel that then combusts exothermically, or produces flammable 'eucalypt vapours' endothermically which then combust exothermically.
Forbes developed a set of differential equations to describe this process and, because the reaction rates are temperature dependent, a temperature evolution for both the solid and gas phases, which are the sum of radiation, conduction (only included in the solid phase), convective heat loss, and the endothermic reaction losses in the production of the two competing flammable gases. Wind is included in the reaction equations.
Forbes concludes from his analysis of the one-dimensional form of the equations that a travelling wave solution is only sustainable if one of the two reaction schemes is endothermic overall and, since this won't be the case in a large, intense bushfire, that bushfires are unlikely to propagate as simple travelling waves. He determines a solution of a one-dimensional line fire but found that for most parameter values, the fire does not sustain itself. He found that the activation energies for each reaction, rate constants and heat release coefficients govern the propagation of the fire. Low activation energies and temperatures and high heat release rates are most likely to lead to growth of large fires.
Forbes then develops a two-dimensional solution for his equations, making the assumption that the height of the processes involved in the vertical direction (i.e. the flames) is small when compared to the area of the fire (i.e. by some orders of magnitude). This solution produces an elliptical fire shape stretched in the direction of the wind. He suggests improving the model by including fuel moisture. No performance data are given.
### Grishin (Tomsk State University, Russia)
The work of AM Grishin has long been recognised for its comprehensive and innovative approach to the problem of developing physical models of forest fire behaviour (Weber, 1991b). While most of this work was conducted and published in Russia in the late 1970s and early 1980s, Grishin published a major monograph in 1992 that collected the considerable research he had conducted in one place, albeit in Russian. In 1997, this monograph was translated into English (Grishin, 1997) (edited by Frank Albini) and, for the first time, all of Grishin's work was available for English readers and is the main reason for the inclusion of his work in this review.
Grishin's model, as described in a number of papers (Grishin et al., 1983; Grishin, 1984; Grishin et al., 1984; Grishin and Shipulina, 2002), was based on analysis of experimental data and developed using the concepts and methods of reactive media mechanics. In this formulation, the wildland fuel (in this case, primarily forest canopy) and combustion products represent a non-deformable porous-dispersed medium (Grishin, 1997). Turbulent heat and mass transfer in the forest, as well as heat and mass exchange between the near-ground layer of the atmosphere and the forest canopy, are incorporated. The forest is considered as a multi-phase, multi-storied, spatially heterogeneous medium outside the fire zone. Inside the fire zone, the forest is considered to be a porous-dispersed, seven-phase, two-temperature, single-velocity, reactive medium. The phases within the combustion zone are: dry organic matter, water in the liquid state, solid products of fuel pyrolysis (char), ash, gas (composed of air, flying pyrolytic products and water vapour) and particles in the dispersed phase.
The model takes into account the basic physicochemical processes (heating, drying, pyrolysis of combustible forest material) and utilises the conservation of mass, momentum and energy in both the solid and gas phases. Other equations, in conjunction with initial and boundary conditions, are used to determine the concentrations of gas phase components, radiation flux, convective heat transfer, and mass loss rates through Arrhenius rate laws using experimentally-determined activation energy and reaction rates. Grishin uses an effective reaction whose mass rate is close to that of CO to describe the combustion of 'flying' pyrolytic materials, because he determined that CO is the most common pyrolytic product (Grishin et al., 1983). Numerical analysis then enables the structure of the fire front and its development from initiation to be predicted. Versions of the full formulation of the multi-phase model are given in each of the works of Grishin (e.g. Grishin et al. (1983); Grishin (1997); Grishin and Shipulina (2002)).
While the model is formulated for three spatial dimensions plus time, the system of equations is generally reduced to a simpler form in which the vertical dimension is averaged over the height of the forest and the fire is assumed to be infinite in the y-direction, resulting in a one-dimensional plus time system of equations in which x is the direction of spread. The original formulation was intended only for the acceleration phase from ignition until steady state spread is achieved (Grishin et al., 1983). This was extended using a moving frame of reference and a steady-state rate of spread (ROS) to produce an analytical solution for the ROS which was found to vary linearly with wind speed (Grishin, 1984).
The speed of the fire front is taken to be the speed of the 700 K isotherm. The domain used for numerical analysis is in the order of 100-200 m long. Rate of spread is found to be dependent on initial moisture content of the fuel. No performance data are given.
### IUSTI (Institut Universitaire des Systemes Thermiques Industriels, France)
IUSTI (Larini et al., 1998; Porterie et al., 1998a,b, 2000) is based on macroscopic conservation equations obtained from local instantaneous forms (Larini et al., 1998) using an averaging method first introduced by Anderson and Jackson (1967). It aims to extend the modelling approach of Grishin et al. (1983) to thermal non-equilibrium flows. IUSTI considers wildland fire to be a multi-phase reactive and radiative flow through a heterogeneous combustible medium, utilising coupling through exchange terms of mass, momentum and energy between a single gas phase and any number of solid phases. The physico-chemical processes of fuel drying and pyrolysis due to thermal decomposition are modelled explicitly. Whereas FIRETEC was intended to be used to model wildland fire spread across large spatial scales, IUSTI concentrates on resolving the chemical and conservation equations at a much smaller spatial scale at the expense of 3-dimensional solutions. Thus, in its current formulation, IUSTI is 2-dimensional in the x and z directions.
Having derived the set of equations describing the general analysis of the reactive, radiative and multi-phase medium (Larini et al., 1998; Porterie et al., 1998a), the authors of IUSTI then reduced the system of equations to that of a much simplified version (called a zeroth order model) in which the effects of phenomena were dissociated from those of transfers. This was done by undertaking a series of simplifying assumptions. The first assumption was that solid particles are fixed in space, implying that solid phase momentum is nil; there is no surface regression and no char contribution in the conservation equations; and that the only combustion process is that of pyrolysis in the gaseous phase.
Mass loss rates are deduced from Arrhenius-type laws following on from the values used by Grishin et al. (1983) and Grishin (1997) and thermogravitic analysis (Porterie et al., 2000). Mass rate equations for the conversion of solid fuel (gaseous production and solid fuel mass reduction) assume an independent reaction path between char formation and pyrolysis, such that the rate of particle mass reduction relative to thermal decomposition of the solid phase and gas production rate is the sum of all solid fuel mass loss rates due to water vaporisation, pyrolysis, char combustion (as a consequence of pyrolysis), and ash formation (as a consequence of char oxidation from the idealised reaction C + O\({}_{2}\rightarrow\) CO\({}_{2}\)). The model also includes a set of equations governed by the transition from the solid phase to a gas phase called the 'jump' condition because IUSTI considers such relatively small volumes. The pyrolysis products are assumed to be removed out of the solid instantaneously upon release. Mass diffusion of any chemical species is neglected and no chemical reactions occur in the solid phase. A single one-step reaction model in which fuel reacts with oxidant to produce product is implemented.
A later version of IUSTI (Porterie et al., 2000) utilises the density-weighted or Favre average form of the conservation equations due to the density variations caused by the heat release. The time-averaged, density-weighted (Favre) fluctuation of turbulent flux is approximated from Boussinesq's eddy viscosity concept and the turbulent kinetic energy, \\(\\kappa\\), and dissipation rate, \\(\\epsilon\\), are obtained from the renormalisation group theory (RNG).
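For reference, the Favre (density-weighted) average of a quantity \(\phi\) is the standard construction

\[\tilde{\phi}=\frac{\overline{\rho\phi}}{\overline{\rho}},\qquad\phi=\tilde{\phi}+\phi'',\]

where the overbar denotes the conventional (Reynolds) time average and \(\phi''\) is the fluctuation about the Favre mean. Averaging \(\rho\phi\) rather than \(\phi\) keeps the averaged conservation equations free of explicit density-fluctuation correlation terms, which is why it is preferred for flows with the strong density variations caused by combustion heat release.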
The formation of soot is modelled as the soot volume fraction which forms mostly as a result of the pyrolysis process and so is assumed to be a percentage of the mass loss rate due to pyrolysis. The radiative transfer equation is based upon the mole fraction of the combustion products and the average soot volume fraction, treating the gas as gray.
Drag is included through the drag coefficient which is a function of the Reynolds number of the solid phase. Solid phase particles are treated as spheres. The conductive/convective heat transfer coefficient is expressed using the Reynolds number for flow around cylinders.
The governing equations of conservation in both gas and solid phases are discretised on a non-uniform grid using a finite-volume scheme. The domain over which the equations are solved is in the order of 1-2 m long by 0.1 m with an average resolution of \\(\\simeq\\) 0.01 m.
A one-dimensional version of this model was constructed in an attempt to simplify the model (Morvan and Larini, 2001). A numerical experiment replicating fire spread through a tube containing pine needles (in order to replicate one-dimensional spread experimentally) was conducted. Results showed a linear increase in ROS with increasing wind speed up to a value of 16 cm s\\({}^{-1}\\). Beyond this value, ROS dropped off dramatically and pyrolysis flow rate reduced. Analysis of the species composition mass fractions showed that below 16 cm s\\({}^{-1}\\), the combustion is oxygen limited and is akin to smouldering combustion. Above 16 cm s\\({}^{-1}\\), the combustion became fuel-limited as the increased air flow increased convective cooling and slowed pyrolysis and hence ROS.
No performance data are given.
### Pif97
The detailed work of Larini et al. (1998), Porterie et al. (1998a) and Porterie et al. (2000) provided the framework for the development of a related model, named PIF97 by its authors (Dupuy and Larini, 1999; Morvan and Dupuy, 2001). The aim of this work was to simplify the multi-phase IUSTI model of Larini et al. (1998) and Porterie et al. (1998a) in order to develop a more operationally-feasible model of wildland fire spread. The full 2D IUSTI was reduced to a quasi-two-dimensional version in which the fuel bed is considered to be one-dimensional and the gas interactions (including radiation and convective mixing above the bed) are two dimensional (x and z). In a manner similar to IUSTI, two phases of media are considered: gas and solid. However, PIF97 assumes that the solid is homogeneous, unlike IUSTI, which considers multiple solid phases.
PIF97 comes in two parts. The first is a combustion zone model that considers the radiative and convective heat transferred to the fuel bed in front of the flaming zone. The radiative component considers radiation flux from adjacent fuels, the ignition interface, flame and the ambient media surrounding the fuel. Radiation from solids is assumed to be blackbody at a temperature of 1123 K. This value was selected so that the model could predict the spread of a single experimental fire in _Pinus pinaster_ needles. Convective heat exchange depends on the Nusselt number, which is approximated through a relation with the Reynolds number for the type of flow the authors envisage. This in turn relies on the assumption of flow around a cylinder of infinite length. Mass transfer and drag forces are similarly derived using approximations to published models and empirical correlations (i.e. assuming cylindrical particles). An ignition temperature for solid fuel of 600 K is used.
The second part of the model is the fire-induced flow in the flaming combustion zone behind the ignition interface. This depends on the ROS of the interface derived from the combustion part of the model. The temperature of this gas is assumed to be fixed at 900 K. Other values between 750 K and 1050 K were investigated but no significant difference in results was found.
The numerical solution of PIF97 is based on a domain that is 25 cm long and uses a spatial resolution of 1 mm. Results of the model are compared to experimental results presented by Dupuy (2000) in which two radiation-only models, that of de Mestre et al. (1989) and a one-dimensional version of Albini (1985, 1986), were compared to laboratory experiments conducted with _Pinus pinaster_ and _P. halepensis_ needles. PIF97 was found to be comparable to the Albini model, except in _P. halepensis_ needles, where it performed markedly better. However, no model was found to 'perform well' in conditions of wind and slope.
A later version of PIF97 (Morvan and Dupuy, 2004) was extended to multiple solid phases in order to simulate Mediterranean fuel complexes comprising live and dead components of shrub and grass species, including twigs and foliage. An empirical correlation is used for the drag coefficient based on regular shapes (i.e. cylinder, sphere, etc.). An RNG \(\kappa-\epsilon\) turbulence model using turbulent diffusion coefficients is incorporated and a pressure correction algorithm used to couple the pressure with the velocity.
The revised model was implemented as a 2D vertical slice through the fire front as a compromise between the computational time and the need to study the main physical mechanisms of the fire propagation. 80 \(\times\) 45 control volumes, each 10 cm \(\times\) 3 cm, were used, defining a domain 8 m by 1.35 m. ROS was defined as the movement of the 500 K isotherm inside the pyrolysis front. ROS was compared to other models and observations of shrub fires (Fernandes, 2001) and did not perform well. The authors summarise their model as producing a ROS that, for wind \(<\) 3 m s\({}^{-1}\), is 'an increasing function of wind speed', and which reaches a limiting value at a wind speed of about 5 m s\({}^{-1}\). The other models and observations showed either linear or power-law (exponent \(<\) 1.0) relationships.
Dupuy and Morvan (2005) added a crown layer to this model, resulting in six families of solid-phase fuel: three for shrubs (leaves and two size classes of twigs (0-2, 2-6 mm)), one for grass, and two for the overstorey _P. halepensis_ canopy (needles and twigs 2-6 mm). This version implemented a combustion model based on Arrhenius-type laws after Grishin (1997). Soot production (for the radiation transfer) was assumed at 5% of the rate of solid fuel pyrolysis.
The domain was 200 m long \(\times\) 50 m high, with cells 0.25 m \(\times\) 0.025 m at the finest scale, 0.25 m \(\times\) 0.25 m on average, and 1.0 m \(\times\) 0.25 m at the largest. 200 s of simulation took 48 hours on an Intel Pentium P4 2 GHz machine.
### LEMTA (Laboratoire d'Energetique et de Mecanique Theorique et Appliquee, France)
This comprehensive model, developed by Sero-Guillaume and Margerit (2002) of the Laboratoire d'Energetique et de Mecanique Theorique et Appliquee in France, considers a two-phase model, gas and solid, in three regions of a forest (above the forest, in the forest and below the ground) at three scales: microscopic (plant cell solid/gas level), mesoscopic (branch and leaf level) and macroscopic (forest canopy/atmosphere level). They identify but do not investigate a fourth scale, that of the 'gigascopic' or landscape level.
The combustion chemistry is simplified in that only gas-phase combustion is allowed. Solid phase chemistry only considers pyrolysis to gas-phase volatile fuel, char and tar. Soot production is not considered, nor is char combustion. Gas phases include O\\({}_{2}\\), water, N\\({}_{2}\\), fuel and inert residue. Solid to gas phase transitions are handled by interface jump relations.
Conservation of species mass, momentum and energy are derived for mesoscopic gas and solid phase interactions. These are then averaged over the larger macroscopic scale by using distribution theory and convoluting the equations to macroscopic quantities. Extended irreversible thermodynamics is then used to close the system of equations. Arguments about thermal equilibrium are used to further reduce the non-equilibrium equations for temperature and pressure.
The system of equations is then further simplified using assumptions about the nature of the fuel (at rest) and the size and interaction of the fuel particles with the gas phase (i.e. no advection, pressure or porosity variations in the solid phase). Drag is not included. Gas phase equations in the region above the forest do not include solid phase particles and, since soot is not modelled, cannot suitably describe radiant heat from flames.
Margerit and Sero-Guillaume (2002) and Chetehouna et al. (2004) reduced the model of Sero-Guillaume and Margerit (2002) to two dimensions in order to produce a more operationally-feasible fire spread model. Margerit and Sero-Guillaume (2002) achieved this through assumptions that the scale of the fuel to the fire was such that the fuel could be considered a boundary layer and the fire a one-dimensional interface between burning and unburnt fuel on the surface (i.e. the fuel is thin relative to the width of the fire). A few assumptions are then made: there is no vertical component in the wind, the solid and gas phases are in thermal equilibrium, and the non-local external radiative heat flux is blackbody. The resulting two-dimensional model is a reaction-diffusion model similar in form to Weber (1991b). Assumptions about the speed of chemical reactions are made so that pyrolysis occurs at an ignition temperature.
Chetehouna et al. (2004) further reduced the two-dimensional reaction-diffusion equations of Margerit and Sero-Guillaume (2002) by making some simplifying assumptions about the evaporation and ignition of the solid phase fuel. Here, fixed temperatures are used, 100 and 300\\({}^{\\circ}\\)C respectively. Five distinct heating stages are used, each separated by the temperature of the fuel: 1) fuel heating to 100\\({}^{\\circ}\\)C; 2) moisture evaporation at 100\\({}^{\\circ}\\)C; 3) fuel heating to ignition temperature; 4) combustion at 300\\({}^{\\circ}\\)C; and 5) mass loss due to chemical reactions and heat loss at flame extinction.
Separate equations with different boundary conditions are used for each stage but only stages 1-3 are important for fire spread. The equations for these stages are then non-dimensionalised and a limiting parameter, the thermal conductivity in the solid phase, is used as a parameter for variation. The equations are then solved as an eigenvalue problem in order to determine the ROS for each stage. Two flame radiation models are used to incorporate long distance radiant heat flux from flames: de Mestre et al. (1989) and the version given by Margerit and Sero-Guillaume (2002). Rates of spread are similar for both flame models and reduce with increasing thermal conductivity. However, despite the fact that the authors say the models compare well with experimental results, no results or comparisons are given.
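As a rough illustration of heating stages 1-3, the following back-of-envelope sketch computes the time to ignition of a fuel element under a constant absorbed heat flux: sensible heating of wet fuel to 100\({}^{\circ}\)C, moisture evaporation at a fixed 100\({}^{\circ}\)C, then dry-fuel heating to the 300\({}^{\circ}\)C ignition temperature. The absorbed power, masses and material properties are assumed placeholder values, not parameters of the model itself.

```python
Q = 5.0e3                      # absorbed power (W), assumed
m_f, m_w = 1.0, 0.1            # dry fuel and moisture mass (kg), assumed
c_f, c_w = 1800.0, 4186.0      # specific heats (J kg^-1 K^-1)
L_v = 2.25e6                   # latent heat of vaporisation (J kg^-1)
T0, T_evap, T_ign = 20.0, 100.0, 300.0   # temperatures (deg C)

t1 = (m_f * c_f + m_w * c_w) * (T_evap - T0) / Q   # stage 1: heat to 100 C
t2 = m_w * L_v / Q                                  # stage 2: boil off moisture
t3 = m_f * c_f * (T_ign - T_evap) / Q               # stage 3: heat to ignition
print(f"stages: {t1:.1f} s, {t2:.1f} s, {t3:.1f} s -> ignition at {t1+t2+t3:.1f} s")
```

For these assumed values the evaporation plateau (stage 2) accounts for a substantial share of the pre-ignition time, which is why fuel moisture figures so prominently in fire spread models.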
The model is then simulated on a computer. It provides a circular shape in no wind/no slope, and an elongated shape under wind. An example burning in real terrain is shown but no discussion of its performance against real fires is given. Mention is made of it operating in real-time on a PC.
### UoS (University of Salamanca, Spain)
Asensio and Ferragut (2002) constructed a 2D model of fire spread that used radiation as the primary mode of heat transfer but also incorporated advection of hot gas and convective cooling of fuels. The model, described here as UoS, employed a simplified combustion chemistry model (only two phases: gas and solid, and two species: fuel and oxygen) and utilised only conservation of energy and species mass. It is assumed combustion is fuel limited and thus only one species is conserved. Arrhenius laws for fuel consumption are used. Turbulence is not accounted for directly or explicitly, but a term for advection with a wind velocity vector is included.
The model is of a form that explicitly includes convective heating, radiation, chemical energy release and natural convection. Non-dimensionalised forms of the system of equations are then discretised into a finite element form for numerical computation. The model is considered to be a first step and the authors aim to link it to the Navier-Stokes equations for better incorporation of turbulence.
Asensio et al. (2005) attempted to provide a link from the 2D surface fire spread of UoS to a model of convection above the fire. The model starts with the conservation of momentum equation and then makes hydrostatic assumptions about the atmosphere. It then decomposes this 3D model into a 2D model with height that is averaged over a layer of fixed thickness. An asymptotic model is then formed and solved producing 2D streamfunctions and an equation for the velocity on the surface (which can then be inserted directly into the original spread model for the advection of heat around the fire).
No performance data are given.
### WFDS (National Institute of Standards and Technology, USA)
The Wildland Fire Dynamics Simulator (WFDS), developed by the US National Institute of Standards and Technology (Mell et al., 2006), is an extension of the model developed to predict the spread of fire within structures, the Fire Dynamics Simulator (FDS). This model is fully 3D, is based upon a unique formulation of the equations of motion for buoyant flow (Rehm and Baum, 1978) and is intended for use in predicting the behaviour of fires burning through peri-urban/wildlands (what the authors call 'Community-scale fire spread' (Rehm et al., 2003; Evans et al., 2003)). The main objective of this model is to predict the progress of fire through predominantly wildland fuel augmented by the presence of combustible structures.
WFDS utilises a varying computational grid to resolve volumes as low as 1.6 m (x) \\(\\times\\) 1.6 m (y) \\(\\times\\) 1.4 m (z) within a simulation domain in the order of 1.5 km\\({}^{2}\\) in area and 200 m high. Outside regions of interest, the grid resolution is decreased to improve computation efficiency.
Mell et al. (2006) give a detailed description of the WFDS formulated for the specific initial case of grassland fuels, in which vegetation is not resolved in the gas-phase (atmosphere) grid but in a separate solid fuel (surface) grid (which the authors admit is not suitable for fuels in which there is significant vertical flame spread and air flow through the fuel). In the case presented, the model includes features such as momentum drag caused by the presence of the grass fuel (modelled as cylinders) which changes over time as the fuel is consumed. Mechanical turbulence, through the dynamic viscosity of the flow through the fuel, is modelled as a subgrid parameter via a variant of the Large Eddy Simulation (LES) method.
The WFDS assumes a two-stage endothermic thermal decomposition (water evaporation and then solid fuel 'pyrolysis'). It uses the temperature dependent mass loss rate expression of Morvan and Dupuy (2004) to model the solid fuel degradation and assumes that pyrolysis occurs at 127\\({}^{\\circ}\\)C. Solid fuel is represented as a series of layers which are consumed from the top down until the solid mass reaches a predetermined char fraction at which point the fuel is considered consumed.
WFDS assumes combustion occurs solely as the result of fuel gas and oxygen mixing in stoichiometric proportion (and thus is independent of temperature). Char oxidation is not accounted for. The gas phase combustion is modelled using the following stoichiometric relation:
\[\mathrm{C_{3.4}H_{6.2}O_{2.5}+3.7(O_{2}+3.76N_{2})\to 3.4CO_{2}+3.1H_{2}O+13.91N_{2}} \tag{10}\]

Due to the relatively coarse scale of the resolved computation grids within WFDS, detailed chemical kinetics are not modelled. Instead, the concept of a mixture fraction within a resolved volume is used to represent the mass ratio of gas-phase fuel to oxygen using a fast chemistry or flame sheet model which then provides the mass loss flux for each species. The energy release associated with chemical reactions is not explicitly presented but is accounted for by an enthalpy variable as a function of species. The model assumes that the time scale of the chemical reactions is much shorter than that of mixing.
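The stoichiometric ratios implied by Eq. 10 follow directly from the atomic masses, as the following sketch shows; the fuel surrogate C\({}_{3.4}\)H\({}_{6.2}\)O\({}_{2.5}\) is as given above, while the interpretation of \(Z_{st}\) as the stoichiometric mixture fraction for a pure-fuel stream burning in air is a standard construction, not a detail reported by Mell et al. (2006).

```python
# Atomic masses (g mol^-1); the fuel surrogate C3.4H6.2O2.5 is from Eq. 10.
M_C, M_H, M_O, M_N = 12.011, 1.008, 15.999, 14.007

M_fuel = 3.4 * M_C + 6.2 * M_H + 2.5 * M_O   # ~87.1 g mol^-1
M_O2, M_N2 = 2 * M_O, 2 * M_N

r_O2 = 3.7 * M_O2 / M_fuel                    # ~1.36 kg O2 per kg fuel
r_air = 3.7 * (M_O2 + 3.76 * M_N2) / M_fuel   # ~5.8 kg air per kg fuel

# stoichiometric mixture fraction for a pure-fuel stream burning in air
Z_st = 1.0 / (1.0 + r_air)                    # ~0.15
print(r_O2, r_air, Z_st)
```

In the flame-sheet picture, combustion in a resolved volume is then located wherever the local mixture fraction crosses this stoichiometric value, without any temperature-dependent kinetics.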
Thermal radiation transport assumes a gray gas absorber-emitter using the P1 radiation model, for which the absorption coefficient is a function of the mixture fraction and temperature for a given mixture of species. A soot production model is not used; instead, soot is assumed to be a fixed fraction of the mass of fuel gas consumed.
Mell et al. (2006) provide simulation information for two experimental grassfires. In the first case, a high-intensity fire in a plot 200 m \(\times\) 200 m within a domain of 1.5 km \(\times\) 1.5 km and vertical height of 200 m, for a total of 16 million grid cells, the model, running on 11 processors, took 44 cpu hours for 100 s of simulated time. Another lower-intensity experiment over a similar domain took 25 cpu hours for 100 s of simulated time.
## Quasi-physical models
This section briefly describes quasi-physical models that have appeared in the literature since 1990 (de Mestre et al. (1989) is included because it was missed by previous reviews and provides the basis for a subsequent model).
The main feature of this form of model is the lack of combustion chemistry and reliance upon the transfer of a prescribed heat release (i.e. flame geometry and temperature are generally assumed). They are presented in chronological order of publication (Table 4).
### Australian Defence Force Academy (ADFA) I, Australia
de Mestre et al. (1989) of the Australian Defence Force Academy, University of New South Wales, developed a physical model of fire spread based initially only on radiative effects, in much the same manner as that of Albini (1985, 1986) (see below) but in a much simplified manner.
The authors utilise a conservation of heat approach to model the spread of a planar fire of infinite width across the surface of a semi-transparent fuel bed in a no wind, no slope situation. However, unlike Albini, who modelled the fuel bed in two dimensions (i.e., x and z), de Mestre et al. (1989) chose to model only the top of the fuel bed, arguing that it is this component of the fuel bed that burns first before burning down into the bed; thus this model is one dimensional plus time.
Assumptions include vertical flames that radiate as an opaque surface of fixed temperature and emissivity, a fixed fuel ignition temperature, and radiation from the combustion zone as an opaque surface of fixed temperature and emissivity. Here they also assume that the ignition interface in the fuel bed is a linear surface, as opposed to Albini's curved one, in order to simplify the computations.
Two approaches to the vaporisation of the fuel moisture are modelled: one in which it all boils off at 373 K (i.e. a 3-phase model: \(<\)373 K, 373 K, \(>\)373 K) and one in which it boils off gradually below 373 K (a 2-phase model: \(\leq\)373 K, \(>\)373 K).
The final model includes terms for radiation from flame, radiation from combustion zone, radiative cooling from solid fuel, and convective cooling from solid fuel. Without the cooling terms, the model was found to over-predict ROS by a factor of 13. A radiative cooling factor brought the over-prediction down to a factor of 9. Including a convective cooling term to the ambient air apparently brought the prediction down to the observed ROS but this was not detailed.
No performance data are given.
### TRW, USA
Carrier et al. (1991) introduced an analytical model of fire spread through an array of sticks in a wind tunnel (called here TRW). Unlike many preceding fire modelling attempts, they did not assume that the dominant preheating mechanism is radiation, but a mixture of convective/diffusive (what they called 'confusive') heating.
Predominantly concerned with deriving a formula for the forward spread of the fire interface in the wind tunnel (based on a series of experiments conducted and reported by Wolff et al. (1991)), Carrier et al. (1991) assumed that the fire achieves a 'quasi-steady' ROS in conditions of constant wind speed and fuel conditions. They make the point that, at the scale they are looking at, the spread can be viewed as continuous and can thus involve a catch-all heat transfer mechanism (gas-phase diffusion flame) in which radiation plays no part and it is the advection of hot gas from the burned area that preheats the fuel (assuming all of it is burnt).
The model is two-dimensional in the plane XY in which it is assumed there is no lateral difference in the spread of the fire (which is different to assuming an infinite width fire). Indeed, their formulation actually needs the width of the fuel bed _and_ the width of the wind tunnel. The fluctuating scale of the turbulence within the tunnel is incorporated in a scale length parameter. Air flow within the fuel bed is ignored.
Using a first-principles competing argument, they say that if radiation were the source of preheating, the estimate of radiant energy (2.9 J g\({}^{-1}\)) ahead of the fire falls well short of the 250 J g\({}^{-1}\) required for pyrolysis. A square-root relation between wind speed normalised by fuel load consumed and the rate of forward spread was determined and supported by experimental observation (Wolff et al., 1991). Carrier et al. (1991) suggest that only when fuel loading is very high (on the order of 2 kg m\({}^{-2}\)) will radiative preheating play a role comparable to that of convective/diffusive preheating.
No performance data are given.
### Albini, USA
Albini (1985, 1986) developed a two-dimensional (XZ) quasi-physical radiative model of fire spread through a single homogeneous fuel layer. The latter paper improved upon the former by including a fuel convective cooling term. Both models required that flame geometry and radiative properties (temperature and emissive power) be prescribed _a priori_ in order for the model to then determine, in an iterative process, the steady-state speed of the fire front. The fire front is considered to be the isothermal flame ignition interface between unburnt and burnt fuel expressed as an eigenvalue problem, utilising a 3-stage fuel heating model (\(<\)373 K, 373 K, and 373 K to \(T_{i}\)), where \(T_{i}\) is the ignition temperature of 700 K.
A modified version of this spread model, in which a thermally-inert zone that allowed the passage of a planar flame front but did not influence the spread process was placed beneath the homogeneous fuel layer to simulate propagation of a fire through the tree crowns, was tested against a series of field-based experimental crown fires conducted in immature Jack Pine (Albini and Stocks, 1986). The results from one experimental fire were used to parameterise the model (flame radiometric temperature and free flame radiation) and obtain flame geometry and radiative properties for the remaining fires. The model was found to perform reasonably well, with a maximum absolute percent error of 14%.
Albini (1996) extended the representation of the fuel to multiple levels, where surface fuel, sub-canopy fuel and the canopy fuel are incorporated into the spread model. Albini also introduced a closure mechanism for removing the requirement for flame geometry and radiative properties to be given _a priori_. The former transcribed the fuel complex into a vertical series of equivalent single-component (homogeneous) surrogate layers based on weighted contributions from different fuel components (e.g. surface-volume ratio, packing ratio, etc.) within a layer.
The closure method involves the positing of a 'trial' rate of spread, along with free flame geometry and ignition interface shape, that are then used to predict a temperature distribution within the fuel. This distribution is then subsequently used in a sub-model to refine the fire intensity, rate of spread, flame geometry, etc. This continues iteratively until a convergence of results is achieved. A quasi-steady ROS is assumed and the temperature distribution is assumed stationary in time. A conservation of energy argument, that the ROS will yield a fire intensity that results in a flame structure that will cause that ROS, is then used to check the validity of the final solution.
Butler et al. (2004) used the heat transfer model of Albini (1996) in conjunction with models for fuel consumption, wind velocity profile and flame structure, to develop a numerical model for the prediction of spread rate and fireline intensity of high intensity crown fires. The model was found to accurately predict the relative response of fire spread rate to fuel and environment variables but significantly over-predicted the magnitude of the speed, in one case by a factor of 3.5. No performance data are given.
### University of Corsica (UC), France
The University of Corsica undertook a concerted effort to develop a physical model of fire spread (called here UC) that would be suitable for faster than real-time operational use. The initial approach (Santoni and Balbi, 1998a,b; Balbi et al., 1999) was a quasi-physical model in which the main heat transfer mechanisms were combined into a so-called 'reactive diffusion' model, the parameters of which were determined experimentally.
The main components of UC are a thermal balance model that incorporates the combined diffusion of heat from the three mechanisms and a diffusion flame model for determination of radiant heat from flames. The heat balance considers: heat exchanged with the air around a fuel cell, heat exchanged with the cell's neighbours, and heat released by the cell during combustion. It is assumed that the rate of energy release is proportional to the fuel consumed and that the fuel is consumed exponentially. The model is two-dimensional in the fuel layer (the plane XY). No convection, apart from convective cooling to neighbouring cells, is taken into account, nor is turbulence. The model assumes that all combustion follows one path. Model parameters were determined from laboratory experiments.
Initially, radiation from the flame was assumed to occur as surface emission from a flame of height, angle and length computed from the model and an isotherm of 500 K. Flame emissivity and fuel absorptivity were determined from laboratory experiments in a combined parameter. The early version of the model was one-dimensional for the fuel bed and two-dimensional (x and z) for the flame. Forms of the conservation of mass and momentum equations are used to control variables such as gas velocity, enthalpy, pressure and mass fractions.
Santoni et al. (1999) presented a 2D version of the model in which the radiative heat transfer component was reformulated such that the view factor, emissivity and absorptivity were parameterised with a single value for each fuel and slope combination that was derived from laboratory experiments. This version was compared to experimental observations (Dupuy, 1995) and the radiation-only models of Albini (1985, 1986) and de Mestre et al. (1989) (Morandini et al., 2000). It was found to predict the experimental increase in ROS with increasing fuel load much better than the other models. The UC model also outperformed the other models on slopes but this is not surprising as it had to be parameterised for each particular slope case.
Simeoni et al. (2001a) acknowledged the inadequacies of the initial 'reaction-diffusion' model, and Simeoni et al. (2001a,b, 2002, 2003) undertook to improve the advection component of the UC model by reducing the physical advection modelling of Larini et al. (1998) and Porterie et al. (1998a,b) to two dimensions and linking it to the UC model. This took the form of two parts: a conservation of momentum component included in the thermal balance equations (temperature evolution), and a velocity profile through the flaming zone. They assumed that buoyancy, drag and vertical variation are equivalent to a force proportional to the quantity of gas in the multi-phase volume and that all the forces are constant whatever the gas velocity. The net effect is that the horizontal velocity decreases through the flame to zero at the ignition interface and does not change with time. Again, the quasi-physical model was parameterised using a temperature-time curve from a laboratory experiment with no wind or slope. The modified model improves the performance only marginally, particularly in the no-slope case, but still underpredicts ROS.
The original UC model included only the thermal balance with a diffusion term encompassing convection, conduction and radiation. The inclusion of only short-distance radiation interaction failed to properly model the pre-heating of fuel by radiation prior to the arrival of the fire front. Morandini et al. (2001a,b, 2002) attempted to address this issue by improving the radiant heat transfer mechanism of the model. Surface emission from a vertical flame under no-wind conditions is assumed, and a flame tilt factor is included when under the influence of wind. The radiation transfer is based on the Stefan-Boltzmann equation, with the view factor from the flame simplified to a sum over vertical panels of given width. The length of each panel is assumed to be equal to the flame depth, mainly because flame height is not modelled.
In cases of combined slope and wind, it was assumed that the effects of wind on flame angle are independent of slope. Morandini et al. (2002) approximate the effect of wind speed as an increase in flame angle, in a manner similar to terrain slope, by taking the inverse tangent of the flame angle observed in a series of experiments divided by the mid-flame wind speed. This is then treated as a constant over a range of wind speeds and slopes. Again, the model is parameterised using a laboratory experiment in no-wind, no-slope conditions.
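A minimal sketch of this panel-based radiation scheme follows. The 2D view factor for a differential element facing an infinitely wide vertical panel, F = h/(2 sqrt(h^2 + d^2)), is a standard configuration factor; the flame temperature, emissivity and the tilt constant `gamma` are assumptions made here for illustration, not the UC values.

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def panel_flux(d, h, T_flame=1100.0, emissivity=0.3):
    """Radiant flux (W m^-2) on a vertical fuel element at horizontal
    distance d from one emitting panel of height h (taken equal to the
    flame depth, since flame height is not modelled)."""
    view_factor = 0.5 * h / math.hypot(h, d)
    return emissivity * SIGMA * T_flame**4 * view_factor

def tilted_flux(d, h, wind_speed, gamma=0.3):
    """Crude wind correction: the flame tilt angle grows as atan(gamma*U),
    shortening the effective flame-to-fuel distance."""
    tilt = math.atan(gamma * wind_speed)
    return panel_flux(max(d - h * math.sin(tilt), 0.05), h)

print(panel_flux(1.0, 0.5), tilted_flux(1.0, 0.5, 3.0))
```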
Results are given for a range of slopes (-15 to +15\({}^{\circ}\)) and wind speeds (1, 2, 3 m s\({}^{-1}\)). The prediction in no-wind, no-slope conditions is good, as are the predictions for wind and no slope. The model breaks down when slope is added to high wind (i.e. 3 m s\({}^{-1}\)). Here, however, the authors determine that their model only works for equivalent flame tilt angles (i.e. combined slope and wind angles) up to 40 degrees.
The model is computed on a fine-scale non-uniform grid using the same methods as Larini et al. (1998). In this case the smallest resolution is 1 cm\\({}^{3}\\) and 0.1 seconds. On a Sun Ultra II, the model took 114 s to compute 144 s of simulation. When the domain is reduced to just the fire itself, the time reduced to 18 s.
## ADFA II, Aust/USA
Catchpole et al. (2002) introduced a much refined and developed version of ADFA (de Mestre et al., 1989), here called ADFA II. Like ADFA I, it is a heat balance model of a fuel element located at the top of the fuel bed. The overall structure of the model is the same, with radiative heating and cooling of the fuel (from both the flames and the combustion zone), and convective heating and cooling. The flame is assumed to emit as a surface, and laboratory experiments are used to determine the emissive energy flux based on a Gaussian vertical flame profile and a maximum flame radiant intensity. Infinite width is assumed for the radiative emissions.
Combustion zone radiation is treated similarly. Byram's convection number (Byram, 1959b) and fireline intensity (Byram, 1959a) are used to determine flame characteristics (angle, height, length, etc.). Empirical models are used to determine the gas temperature profile above and within the fuel, as well as the maximum gas temperature. ADFA II utilises an iterative method to determine ROS, similar to that of Albini (1996), assuming that the fire is at a steady-state rate of spread.
No computational performance data are given.
## Coimbra (2004)
The aim of Vaz et al. (2004) was not to develop a new model of fire spread as such, but to combine the wide range of existing models in such a way as to create a seamless modular quasi-physical model of fire spread that can be tailor-made for particular situations by picking and choosing appropriate sub-models. The 'library' of models from which the authors pick and choose their sub-models is classified as: heat sink models (including Rothermel (1972); Albini (1985, 1986); de Mestre et al. (1989)), which consider the conservation of energy aspects of fuel heating, moisture loss and ignition criteria; heat flux models (including Albini (1985, 1986); Van Wagner (1967)), which consider the net exchange of radiative energy between fuel particles; and heat source models (including Thomas (1967, 1971)), which consider the generation of energy within the combustion zone and provide closure for the other two types of models.
The authors compared a fixed selection of sub-models against data gathered from a laboratory experiment conducted on a fuel bed 2 m wide by 0.8 m long under conditions of no wind and no slope. The set of models was found to underpredict ROS by 46%. This improved to 6% when the observed flame height was used in the prediction. Predicted flame height, upon which several of the sub-models depend directly, did not correspond with observations, regardless of the combination of sub-models selected. Rather than producing a fire behaviour prediction system that utilises the best aspects of its component models, the result appears to compound the inadequacies of each of the sub-models. None of the three classes of sub-models considers advection of hot gases in the heat transfer.
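The modular structure described above lends itself to a plug-in architecture. The sketch below is our own schematic of that idea, not Vaz et al.'s code: the three interfaces and the closure loop are hypothetical, and the unit conventions (the sink returns J m\({}^{-3}\) of fuel bed) are chosen so that the quotient has dimensions of velocity.

```python
from typing import Protocol

class HeatSink(Protocol):      # e.g. Rothermel (1972), de Mestre et al. (1989)
    def energy_to_ignition(self, moisture: float) -> float: ...  # J m^-3

class HeatFlux(Protocol):      # e.g. Albini (1985, 1986), Van Wagner (1967)
    def net_flux(self, intensity: float) -> float: ...           # W m^-2

class HeatSource(Protocol):    # e.g. Thomas (1967, 1971)
    def intensity(self, ros: float) -> float: ...                # W m^-1

def spread_rate(sink: HeatSink, flux: HeatFlux, source: HeatSource,
                moisture: float, ros: float = 0.05) -> float:
    """Closure loop: iterate ROS until the energy arriving at unburnt
    fuel (source -> flux) balances the energy required to ignite it (sink)."""
    for _ in range(200):
        q_in = flux.net_flux(source.intensity(ros))
        ros_new = q_in / sink.energy_to_ignition(moisture)
        if abs(ros_new - ros) < 1e-6:
            break
        ros = 0.5 * (ros + ros_new)
    return ros
```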
## Discussion and summary
The most distinguishing feature of a fully physical model of fire spread in comparison with one that is described as being quasi-physical is the presence of some form of combustion chemistry. These models determine the energy released from the fuel, and thus the amount of energy to be subsequently transferred to surrounding unburnt fuel and the atmosphere, etc., from a model of the fundamental chemistry of the fuel and its combustion. Quasi-physical models, on the other hand, rely upon a higher level model to determine the magnitude of energy to be transferred and generally require flame characteristics to be known _a priori_.
Physical models themselves can be divided primarily into two streams: those that are intended for operational or experimental use (or at least field validation) and those that are purely academic exercises. The latter are characterised by a lack of follow-up work (e.g. Weber, Forbes, UoS), although it is possible that components of such models may later find their way into models intended for operational or experimental use. Sometimes the nature of the model itself dictates its uses, rather than the authors' intention. Grishin, IUSTI and LEMTA were all formulated with the intention of being useful models of fire spread but, whether due to the complex nature of the models, the reduced physical dimensionality of the model or the restricted domain over which the model can operate feasibly, these models have not and most probably will not be used operationally.
The remaining physical models, AIOLOS-F, FIRETEC, PIF97 and WFDS, have all had extended and ongoing development and each is capable of modelling the behaviour of a wildland fire of landscape scale (i.e. computational domains in excess of \(\simeq 100\) m). However, in the effort to make this computationally feasible, each model significantly reduces both the resolution of the computational domain and the precision of the physical models implemented.
These remaining physical models are also distinguished from the others in that efforts have been made to validate their performance against large-scale wildland fires. Difficulties abound in this endeavour. As is the case with any field experiment, it is very difficult to measure all required quantities to the degree of precision and accuracy required by the models. In the case of wildland fires, this difficulty is increased by two or three orders of magnitude. Boundary conditions are rarely known and other quantities are almost never measured at the site of the fire itself. Mapping of the spread of wildland fires is haphazard and highly subjective.
IUSTI and PIF97 undertook validation utilising laboratory experiments of suitable spatial scales in which the number and type of variables were strictly controlled. In many laboratory experiments, the standard condition is one of no wind and no slope. While wildland fires in flat terrain do occur, it is very rare (if not impossible) for these fires to occur in no wind. The ability to correctly model the behaviour of a fire in such conditions is only one step in the testing of the model. Both IUSTI and PIF97 (as well as a number of the quasi-physical models discussed here) were found wanting in conditions of wind and/or slope.
Morvan et al. (2004) argue that purely theoretical modelling with no regard for field observations is of less use than a field-based model for one particular set of circumstances. Validation against fire behaviour observed in artificial fuel beds under artificial conditions is only half the test of the worth of a model. The importance of comparison against field observation cannot be overstated, for regardless of the conditions under which a field experiment (i.e. an experimental fire carried out in naturally occurring, albeit modified, conditions) is conducted, it represents real wildland fire behaviour and thus provides the complete set of interactions between fire, fuel, atmosphere and topography. Both Linn and Cunningham (2005) and Mell et al. (2006) identified significant deficiencies within their models (FIRETEC and WFDS, respectively) that only comparison against field observations could have revealed.
Both FIRETEC and WFDS attempted validation against large scale experimental grassland fires (Cheney et al., 1993) and thus avoided many of the issues of validation against wildfire observations. However, the issue of identifying the source of discrepancy in such complex models is just as difficult as obtaining suitable data against which to test the model.
Computationally feasible models can be either constructed from simple models or reduced from complete models (Sero-Guillaume and Margerit, 2002), and each of the preceding physical models is very much in the latter category. Quasi-physical models are very much of the former category but suffer from the same difficulties in validation against large-scale fires. Of the quasi-physical models discussed here, only those of Albini have been tested against wildland fires, the others only against laboratory experimental fires.
However, being constructed from simple models may make the quasi-physical models less complex but does not necessarily make them any more computationally feasible. Table 5 shows a summary of the scope, resolution and computation time available in the literature for each of the models. Not all publications provide such information, concentrating primarily on the underlying basis of the models rather than their computational feasibility. But for those models whose _raison d'être_ is to be used actively for fire management purposes, computational feasibility is of prime concern. Here models such as AIOLOS-F, FIRETEC, WFDS and UC stand out from the others because of their stated aim to be a useful tool in fire management.
PIF97, WFDS and UC all give nominal computation times for a given period of simulation. Only UC, being a quasi-physical model reduced from a more fundamental model, is better than realtime. PIF97 and WFDS, using the current level of hardware, are both much greater than realtime (in the order of 450 times realtime for WFDS on 11 processors (Mell et al., 2006)). FIRETEC is described as being 'several orders of magnitude slower than realtime'. FIRETEC, WFDS and UoS are significantly different from the other physical models (and most of the quasi-physical models for that matter) in that their resolutions are significantly larger (in some cases by two orders of magnitude). However, the time step used by FIRETEC (0.002 s) in the example given means that the gains to be made by averaging the computations over a larger volume are lost in using a very short time interval.
The authors of FIRETEC are resigned to not being able to predict the behaviour of landscape wildland fires, and suggest that the primary use of purely physical models of fire behaviour is the study of fires under a variety of conditions in a range of fuels and topographies in scenarios that are not amenable to field experimentation. This is a laudable aim, and in an increasingly litigious social and political environment may be the only way to study large-scale fire behaviour in the future, but it assumes that the physical model is complete, correct, validated and verified. Hanson et al. (2000) suggest that the operational fire behaviour models of the future will be reduced versions of the purely physical models being developed today.
It is obvious from the performance data volunteered in the literature that the current approaches to modelling fire behaviour _on the hardware available today_ are not going to provide fire managers with the tools to enable them to conduct fire suppression planning based on the resultant predicted fire behaviour. The level of detail of data (type and resolution of parameters and variables) required for input into these models will not be generally available for some time and will necessarily have a high degree of imprecision.
The basis for fire behaviour models of operational use is unlikely to be one of purely physical origin, simply because of the computational requirements to solve the equations of motion at the resolutions necessary to ensure model stability. Approximations do and will abound in order to improve computational feasibility and it is these approximations that lessen the confidence users will have in the final results. Such approximations span the gamut of the chemical and physical processes involved in the spread of fire across the landscape; from the physical structure of the fuel itself, the combustion chemistry of the fuel, the fractions of species within a given volume, turbulence over the range of scales being considered, to the chemical and thermal feedbacks within the atmosphere.
It is most likely that for the foreseeable future operational models will continue to be of empirical origin. However, there may be a trend towards hybrid models of a more physical nature as the physical and quasi-physical models are further developed and refined.
## Acknowledgements
I would like to acknowledge Ensis Bushfire Research and the CSIRO Centre for Complex Systems Science for supporting this project; Jim Gould and Rowena Ball for comments on the draft manuscript; and the members of Ensis Bushfire Research who ably assisted in the refereeing process, namely Miguel Cruz, Stuart Matthews and Grant Pearce.
## References
* Albini (1979) Albini, F. (1979). Spot fire distance from burning trees-a predictive model. General Technical Report INT-56, USDA Forest Service, Intermountain Forest and Range Experimental Station, Ogden, UT.
* Albini (1985) Albini, F. (1985). A model for fire spread in wildland fuels by radiation. _Combustion Science and Technology_, 42:229-258.
* Albini (1986) Albini, F. (1986). Wildland fire spread by radiation-a model including fuel cooling by natural convection. _Combustion Science and Technology_, 45:101-112.
* Albini (1996) Albini, F. (1996). Iterative solution of the radiation transport equations governing spread of fire in wildland fuel. _Combustion, Explosion and Shock Waves_, 32(5):534-543.
* Albini and Stocks (1986) Albini, F. and Stocks, B. (1986). Predicted and observed rates of spread of crown fires in immature jack pine. _Combustion Science and Technology_, 48:65-76.
* Anderson et al. (1982) Anderson, D., Catchpole, E., de Mestre, N., and Parkes, T. (1982). Modelling the spread of grass fires. _Journal of Australian Mathematics Society, Series B_, 23:451-466.
* Anderson and Jackson (1967) Anderson, T. B. and Jackson, R. (1967). Fluid mechanical description of fluidized beds: Equations of motion. _Industrial & Engineering Chemistry. Fundamentals_, 6(4):527-539.
* Asensio and Ferragut (2002) Asensio, M. and Ferragut, L. (2002). On a wildland fire model with radiation. _International Journal for Numerical Methods in Engineering_, 54(1):137-157.
* Asensio et al. (2005) Asensio, M., Ferragut, L., and Simon, J. (2005). A convection model for fire spread simulation. _Applied Mathematics Letters_, 18:673-677.
* Babrauskas (2003) Babrauskas, V. (2003). _Ignition Handbook_. Fire Science Publishers, Issaquah, WA.
* Balbi et al. (1999) Balbi, J., Santoni, P., and Dupuy, J. (1999). Dynamic modelling of fire spread across a fuel bed. _International Journal of Wildland Fire_, 9(4):275-284.
* Ball et al. (1999) Ball, R., McIntosh, A., and Brindley, J. (1999). The role of char-forming processes in the thermal decomposition of cellulose. _Physical Chemistry Chemical Physics_, 1:5035-5043.
* Ball et al. (2004) Ball, R., McIntosh, A. C., and Brindley, J. (2004). Feedback processes in cellulose thermal decomposition: implications for fire-retarding strategies and treatments. _Combustion Theory and Modelling_, 8(2):281-291.
* Batchelor (1967) Batchelor, G. (1967). _An Introduction to Fluid Mechanics_. Cambridge University Press, London, 1970 edition.
* Beall and Eickner (1970) Beall, F. and Eickner, H. (1970). Thermal degradation of wood components: A review of the literature. Research Paper FPL 130, USDA Forest Service, Madison, Wisconsin.
* Bossert et al. (2000) Bossert, J. E., Linn, R., Reisner, J., Winterkamp, J., Dennison, P., and Roberts, D. (2000). Coupled atmosphere-fire behaviour model sensitivity to spatial fuels characterisation. In _Third Symposium on Fire and Forest Meteorology, 9-14 January 2000, Long Beach, California._, pages 21-26. American Meteorological Society.
* Butler et al. (2004) Butler, B., Finney, M., Andrews, P., and Albini, F. (2004). A radiation-driven model for crown fire spread. _Canadian Journal of Forest Research_, 34(8):1588-1599.
* Byram (1959a) Byram, G. (1959a). Combustion of forest fuels. In Davis, K., editor, _Forest Fire Control and Use_, chapter 3, pages 61-89. McGraw-Hill, New York.
* Byram (1959b) Byram, G. (1959b). Forest fire behaviour. In Davis, K., editor, _Forest Fire Control and Use_, chapter 4, pages 90-123. McGraw-Hill, New York.
* Carrier et al. (1991) Carrier, G., Fendell, F., and Wolff, M. (1991). Wind-aided firespread across arrays of discrete fuel elements. I. Theory. _Combustion Science and Technology_, 75:31-51.
* Catchpole et al. (2002) Catchpole, W., Catchpole, E., Tate, A., Butler, B., and Rothermel, R. (2002). A model for the steady spread of fire through a homogeneous fuel bed. In Viegas (2002), page 106. Proceedings of the IV International Conference on Forest Fire Research, Luso, Coimbra, Portugal 18-23 November 2002.
* Chandler et al. (1963) Chandler, C. C., Storey, T. G., and Tangren, C. D. (1963). Prediction of fire spread following nuclear explosions. Technical Report Research Paper PSW-5, USDA Forest Service, Berkeley, Calif.
* Cheney et al. (1993) Cheney, N., Gould, J., and Catchpole, W. (1993). The influence of fuel, weather and fire shape variables on fire-spread in grasslands. _International Journal of Wildland Fire_, 3(1):31-44.
* Cheney and Sullivan (1997) Cheney, P. and Sullivan, A. (1997). _Grassfires: Fuel, Weather and Fire Behaviour_. CSIRO Publishing, Collingwood, Australia.
* Chetehouna et al. (2004) Chetehouna, K., Er-Riani, M., and Sero-Guillaume, O. (2004). On the rate of spread for some reaction-diffusion models of forest fire propagation. _Numerical Heat Transfer Part A-Applications_, 46(8):765-784.
* Colman and Linn (2003) Colman, J. J. and Linn, R. R. (2003). Non-local chemistry implementation in HI-GRAD/FIRETEC. In _Fifth Symposium on Fire and Forest Meteorology, 16-20 November 2003, Orlando, Florida_. American Meteorological Society.
* Croba et al. (1994) Croba, D., Lalas, D., Papadopoulos, C., and Tryfonopoulos, D. (1994). Numerical simulation of forest fire propagation in complex terrain. In _Proceedings of the 2nd International Conference on Forest Fire Research, Coimbra, Portugal, Nov. 1994. Vol 1._, pages 491-500.
* Curry and Fons (1940) Curry, J. and Fons, W. (1940). Forest-fire behaviour studies. _Mechanical Engineering_, 62:219-225.
* Curry and Fons (1938) Curry, J. R. and Fons, W. L. (1938). Rate of spread of surface fires in the ponderosa pine type of California. _Journal of Agricultural Research_, 57(4):239-267.
* de Mestre et al. (1989) de Mestre, N., Catchpole, E., Anderson, D., and Rothermel, R. (1989). Uniform propagation of a planar fire front without wind. _Combustion Science and Technology_, 65:231-244.
* di Blasi (1998) di Blasi, C. (1998). Comparison of semi-global mechanisms for primary pyrolysis of lignocellulosic fuels. _Journal of Analytical and Applied Pyrolysis_, 47(1):43-64.
* Drysdale (1985) Drysdale, D. (1985). _An Introduction to Fire Dynamics_. John Wiley and Sons, Chichester.
* Dupuy (1995) Dupuy, J. (1995). Slope and fuel load effects on fire behaviour: Laboratory experiments in pine needle fuel beds. _International Journal of Wildland Fire_, 5(3):153-164.
* Dupuy (2000) Dupuy, J. (2000). Testing two radiative physical models for fire spread through porous forest fuel beds. _Combustion Science and Technology_, 155:149-180.
* Dupuy and Larini (1999) Dupuy, J. and Larini, M. (1999). Fire spread through a porous forest fuel bed: a radiative and convective model including fire-induced flow effects. _International Journal of Wildland Fire_, 9(3):155-172.
* Dupuy and Morvan (2005) Dupuy, J. L. and Morvan, D. (2005). Numerical study of a crown fire spreading toward a fuel break using a multiphase physical model. _International Journal of Wildland Fire_, 14(2):141-151.
* Ellis (2000) Ellis, P. (2000). _The aerodynamic and combustion characteristics of eucalypt bark: A firebrand study_. PhD thesis, The Australian National University School of Forestry, Canberra.
* Emmons (1963) Emmons, H. (1963). Fire in the forest. _Fire Research Abstracts and Reviews_, 5(3):163-178.
* Emmons (1966) Emmons, H. (1966). Fundamental problems of the free burning fire. _Fire Research Abstracts and Reviews_, 8(1):1-17.
* Evans et al. (2003) Evans, D., Rehm, R., and McPherson, E. (2003). Physics-based modelling of wildland-urban intermix fires. In _Proceedings of the 3rd International Wildland Fire Conference, 3-6 October 2003, Sydney_.
* Fernandes (2001) Fernandes, P. (2001). Fire spread prediction in shrub fuels in Portugal. _Forest Ecology and Management_, 144(1-3):67-74.
* Fons (1946) Fons, W. L. (1946). Analysis of fire spread in light forest fuels. _Journal of Agricultural Research_, 72(3):93-121.
* Forbes (1997) Forbes, L. K. (1997). A two-dimensional model for large-scale bushfire spread. _Journal of the Australian Mathematical Society, Series B (Applied Mathematics)_, 39(2):171-194.
* Gaydon and Wolfhard (1960) Gaydon, A. and Wolfhard, H. (1960). _Flames: Their Structure, Radiation and Temperature_. Chapman and Hall Ltd, London, 2nd edition.
* Gisborne (1927) Gisborne, H. (1927). The objectives of forest fire-weather research. _Journal of Forestry_, 25(4):452-456.
* Gisborne (1929) Gisborne, H. (1929). The complicated controls of fire behaviour. _Journal of Forestry_, 27(3):311-312.
* Goldstein et al. (2006) Goldstein, R. J., Ibele, W. E., Patankar, S. V., Simon, T. W., Kuehn, T. H., Strykowski, P. J., Tamma, K. K., Heberlein, J. V. R., Davidson, J. H., Bischof, J., Kulacki, F. A., Kortshagen, U., Garrick, S., and Srinivasan, V. (2006). Heat transfer: A review of 2003 literature. _International Journal of Heat and Mass Transfer_, 49(3-4):451-534.
* Grishin (1984) Grishin, A. (1984). Steady-state propagation of the front of a high-level forest fire. _Soviet Physics Doklady_, 29(11):917-919.
* Grishin (1997) Grishin, A. (1997). _Mathematical modeling of forest fires and new methods of fighting them_. Publishing House of Tomsk State University, Tomsk, Russia, english translation edition. Translated from Russian by Marek Czuma, L Chikina and L Smokotina.
* Grishin et al. (1984) Grishin, A., Gruzin, A., and Gruzina, E. (1984). Aerodynamics and heat exchange between the front of a forest fire and the surface layer of the atmosphere. _Journal of Applied Mechanics and Technical Physics_, 25(6):889-894.
* Grishin et al. (1983) Grishin, A., Gruzin, A., and Zverev, V. (1983). Mathematical modeling of the spreading of high-level forest fires. _Soviet Physics Doklady_, 28(4):328-330.
* Grishin and Shipulina (2002) Grishin, A. and Shipulina, O. (2002). Mathematical model for spread of crown fires in homogeneous forests and along openings. _Combustion, Explosion, and Shock Waves_, 38(6):622-632.
* Hanson et al. (2000) Hanson, H., Bradley, M., Bossert, J., Linn, R., and Younker, L. (2000). The potential and promise of physics-based wildfire simulation. _Environmental Science & Policy_, 3(4):161-172.
* Hawley (1926) Hawley, L. (1926). Theoretical considerations regarding factors which influence forest fires. _Journal of Forestry_, 24(7):7.
* Karplus (1977) Karplus, W. J. (1977). The spectrum of mathematical modeling and systems simulation. _Mathematics and Computers in Simulation_, 19(1):3-10.
* Larini et al. (1998) Larini, M., Giroud, F., Porterie, B., and Loraud, J. (1998). A multiphase formulation for fire propagation in heterogeneous combustible media. _International Journal of Heat and Mass Transfer_, 41(6-7):881-897.
* Lawson (1954) Lawson, D. (1954). Fire and the atomic bomb. Fire Research Bulletin No. 1, Department of Scientific and Industrial Research and Fire Offices' Committee, London.
* Lee (1972) Lee, S. (1972). Fire research. _Applied Mechanical Reviews_, 25(3):503-509.
* Linn and Cunningham (2005) Linn, R. and Cunningham, P. (2005). Numerical simulations of grass fires using a coupled atmosphere-fire model: Basic fire behavior and dependence on wind speed. _Journal of Geophysical Research_, 110(D13107):19 pp.
* Linn and Harlow (1998a) Linn, R. and Harlow, F. (1998a). Use of transport models for wildfire behaviour simulations. In _III International Conference on Forest Fire Research, 14th Conference on Fire and Forest Meteorology, Luso, Portugal, 16-20 November 1998. Vol 1_, pages 363-372.
* Linn et al. (2001) Linn, R., Reisner, J., Colman, J., and Smith, S. (2001). Studying wildfire behaviour using FIRETEC. In _Fourth Symposium on Fire and Forest Meteorology, 13-15 November 2001, Reno, Nevada._, pages 51-55. American Meteorological Society.
* Linn et al. (2002a) Linn, R., Reisner, J., Colman, J. J., and Winterkamp, J. (2002a). Studying wildfire behavior using FIRETEC. _International Journal of Wildland Fire_, 11(3-4):233-246.
* Linn et al. (2003) Linn, R., Winterkamp, J., Edminster, C., Colman, J., and Steinzig, M. (2003). Modeling interactions between fire and atmosphere in discrete element fuel beds. In _Fifth Symposium on Fire and Forest Meteorology, 16-20 November 2003, Orlando, Florida._ American Meteorological Society.
* Linn (1997) Linn, R. R. (1997). A transport model for prediction of wildfire behaviour. PhD Thesis LA-13334-T, Los Alamos National Laboratory. Reissue of PhD Thesis accepted by Department of Mechanical Engineering, New Mexico State University.
* Linn and Harlow (1998b) Linn, R. R. and Harlow, F. H. (1998b). FIRETEC: A transport description of wildfire behaviour. In _Second Symposium on Fire and Forest Meteorology, 11-16 January 1998, Phoenix, Arizona_, pages 14-19. American Meteorological Society.
* Linn et al. (2002b) Linn, R. R., Reisner, J. M., Winterkamp, J. L., and Edminster, C. (2002b). Utility of a physics-based wildfire model such as FIRETEC. In Viegas (2002), page 101. Proceedings of the IV International Conference on Forest Fire Research, Luso, Coimbra, Portugal 18-23 November 2002.
* Lymberopoulos et al. (1998) Lymberopoulos, N., Tryfonopoulos, T., and Lockwood, F. (1998). The study of small and meso-scale wind field-forest fire interaction and buoyancy effects using the AIOLOS-F simulator. In _III International Conference on Forest Fire Research, 14th Conference on Fire and Forest Meteorology, Luso, Portugal, 16-20 November 1998. Vol 1._, pages 405-418.
* Margerit and Sero-Guillaume (2002) Margerit, J. and Sero-Guillaume, O. (2002). Modelling forest fires. Part II: Reduction to two-dimensional models and simulation of propagation. _International Journal of Heat and Mass Transfer_, 45(8):1723-1737.
* Mell et al. (2006) Mell, W., Jenkins, M., Gould, J., and Cheney, P. (2006). A physics based approach to modeling grassland fires. _International Journal of Wildland Fire_, 15(4):in press.
* Morandini et al. (2001a) Morandini, F., Santoni, P., and Balbi, J. (2001a). The contribution of radiant heat transfer to laboratory-scale fire spread under the influences of wind and slope. _Fire Safety Journal_, 36(6):519-543.
* Morandini et al. (2001b) Morandini, F., Santoni, P., and Balbi, J. (2001b). Fire front width effects on fire spread across a laboratory scale sloping fuel bed. _Combustion Science and Technology_, 166:67-90.
* Morandini et al. (2002) Morandini, F., Santoni, P., Balbi, J., Ventura, J., and Mendes-Lopes, J. (2002). A two-dimensional model of fire spread across a fuel bed including wind combined with slope conditions. _International Journal of Wildland Fire_, 11(1):53-63.
* Morvan and Dupuy (2001) Morvan, D. and Dupuy, J. (2001). Modeling of fire spread through a forest fuel bed using a multiphase formulation. _Combustion and Flame_, 127(1-2):1981-1994.
* Morvan and Dupuy (2004) Morvan, D. and Dupuy, J. (2004). Modeling the propagation of a wildfire through a mediterranean shrub using a multiphase formulation. _Combustion and Flame_, 138(3):199-210.
* Morvan and Larini (2001) Morvan, D. and Larini, M. (2001). Modeling of one dimensional fire spread in pine needles with opposing air flow. _Combustion Science and Technology_, 164(1):37-64.
* Morvan et al. (2004) Morvan, D., Larini, M., Dupuy, J., Fernandes, P., Miranda, A., Andre, J., Sero-Guillaume, O., Calogine, D., and Cuinas, P. (2004). Eufirelab: Behaviour modelling of wildland fires: a state of the art. Deliverable D-03-01, EUFIRELAB. 33 p.
* Pastor et al. (2003) Pastor, E., Zarate, L., Planas, E., and Arnaldos, J. (2003). Mathematical models and calculation systems for the study of wildland fire behaviour. _Progress in Energy and Combustion Science_, 29(2):139-153.
* Perry (1998) Perry, G. (1998). Current approaches to modelling the spread of wildland fire: a review. _Progress in Physical Geography_, 22(2):222-245.
* Plucinski (2003) Plucinski, M. P. (2003). _The investigation of factors governing ignition and development of fires in heathland vegetation_. PhD thesis, School of Mathematics and Statistics University of New South Wales, Australian Defence Force Academy, Canberra, ACT, Australia.
* Porterie et al. (1998a) Porterie, B., Morvan, D., Larini, M., and Loraud, J. (1998a). Wildfire propagation: A two-dimensional multiphase approach. _Combustion Explosion and Shock Waves_, 34(2):139-150.
* Porterie et al. (1998b) Porterie, B., Morvan, D., Loraud, J., and Larini, M. (1998b). A multiphase model for predicting line fire propagation. In _III International Conference on Forest Fire Research, 14th Conference on Fire and Forest Meteorology, Luso, Portugal, 16-20 November 1998. Vol 1._, pages 343-360.
* Porterie et al. (2000) Porterie, B., Morvan, D., Loraud, J., and Larini, M. (2000). Firespread through fuel beds: Modeling of wind-aided fires and induced hydrodynamics. _Physics of Fluids_, 12(7):1762-1782.
* Rehm et al. (2003) Rehm, R., Evans, D., Mell, W., Hostikka, S., McGrattan, K., Forney, G., Boulding, C., and Baker, E. (2003). Neighborhood-scale fire spread. In _Fifth Symposium on Fire and Forest Meteorology, 16-20 November 2003, Orlando, Florida._ Paper J6E.7, 8 pp, unpaginated.
* Rehm and Baum (1978) Rehm, R. G. and Baum, H. R. (1978). The equations of motion for thermally driven, buoyant flows. _Journal of Research of the National Bureau of Standards_, 83(3):297-308.
* Reisner et al. (2000a) Reisner, J., Knoll, D., Mousseau, V., and Linn, R. (2000a). New numerical approaches for coupled atmosphere-fire models. In _Third Symposium on Fire and Forest Meteorology, 9-14 January 2000, Long Beach, California._, pages 11-14. American Meteorological Society.
* Reisner et al. (2000b) Reisner, J., Wynne, S., Margolin, L., and Linn, R. (2000b). Coupled atmospheric-fire modeling employing the method of averages. _Monthly Weather Review_, 128(10):3683-3691.
* Reisner et al. (1998) Reisner, J. M., Bossert, J. E., and Winterkamp, J. L. (1998). Numerical simulations of two wildfire events using a combined modeling system (HIGRAD/BEHAVE). In _Second Symposium on Fire and Forest Meteorology, 11-16 January 1998, Pheonix, Arizona_, pages 6-13. American Meteorological Society.
* Rogers and Miller (1963) Rogers, J. C. and Miller, T. (1963). Survey of the thermal threat of nuclear weapons. Technical Report SRI Project No. IMU-4201, Stanford Research Institute, California. Unclassified version, Contract No. OCD-OS-62-135(111).
* Rothermel (1972) Rothermel, R. (1972). A mathematical model for predicting fire spread in wildland fuels. Research Paper INT-115, USDA Forest Service.
* Sacadura (2005) Sacadura, J. (2005). Radiative heat transfer in fire safety science. _Journal of Quantitative Spectroscopy & Radiative Transfer_, 93:5-24.
* Santoni and Balbi (1998a) Santoni, P. and Balbi, J. (1998a). Modelling of two-dimensional flame spread across a sloping fuel bed. _Fire Safety Journal_, 31(3):201-225.
* Santoni and Balbi (1998b) Santoni, P. and Balbi, J. (1998b). Numerical simulation of a fire spread model. In _III International Conference on Forest Fire Research, 14th Conference on Fire and Forest Meteorology, Luso, Portugal, 16-20 November 1998. Vol 1._, pages 295-310.
* Santoni et al. (1999) Santoni, P., Balbi, J., and Dupuy, J. (1999). Dynamic modelling of upslope fire growth. _International Journal of Wildland Fire_, 9(4):285-292.
* Sero-Guillaume and Margerit (2002) Sero-Guillaume, O. and Margerit, J. (2002). Modelling forest fires. Part I: A complete set of equations derived by extended irreversible thermodynamics. _International Journal of Heat and Mass Transfer_, 45(8):1705-1722.
* Shafizadeh (1982) Shafizadeh, F. (1982). Introduction to pyrolysis of biomass. _Journal of Analytical and Applied Pyrolysis_, 3(4):283-305.
* Simeoni et al. (2002) Simeoni, A., Larini, M., Santoni, P., and Balbi, J. (2002). Coupling of a simplified flow with a phenomenological fire spread model. _Comptes Rendus Mecanique_, 330(11):783-790.
* Simeoni et al. (2001a) Simeoni, A., Santoni, P., Larini, M., and Balbi, J. (2001a). On the wind advection influence on the fire spread across a fuel bed: modelling by a semi-physical approach and testing with experiments. _Fire Safety Journal_, 36(5):491-513.
* Simeoni et al. (2001b) Simeoni, A., Santoni, P., Larini, M., and Balbi, J. (2001b). Proposal for theoretical improvement of semi-physical forest fire spread models thanks to a multiphase approach: Application to a fire spread model across a fuel bed. _Combustion Science and Technology_, 162(1):59-83.
* Simeoni et al. (2003) Simeoni, A., Santoni, P., Larini, M., and Balbi, J. (2003). Reduction of a multiphase formulation to include a simplified flow in a semi-physical model of fire spread across a fuel bed. _International Journal of Thermal Sciences_, 42(1):95-105.
* Sullivan et al. (2003) Sullivan, A., Ellis, P., and Knight, I. (2003). A review of the use of radiant heat flux models in bushfire applications. _International Journal of Wildland Fire_, 12(1):101-110.
* Thomas (1967) Thomas, P. (1967). Some aspects of the growth and spread of fire in the open. _Journal of Forestry_, 40:139-164.
* Thomas (1971) Thomas, P. (1971). Rates of spread of some wind driven fires. _Journal of Forestry_, 44:155-175.
* Turner (1973) Turner, J. (1973). _Buoyancy effects in fluids_. Cambridge University Press, Cambridge. Paperback edition 1979.
* Van Wagner (1967) Van Wagner, C. (1967). Calculation on forest fire spread by flame radiation. Technical Report Departmental Publication No. 1185, Canadian Department of Forestry and Rural Development Forestry Branch.
* Vaz et al. (2004) Vaz, G., Andre, J., and Viegas, D. (2004). Fire spread model for a linear front in a horizontal solid porous fuel bed in still air. _Combustion Science and Technology_, 176(2):135-182.
* Viegas (2002) Viegas, D., editor (2002). _Forest Fire Research & Wildland Fire Safety_, Millpress, Rotterdam, Netherlands. Proceedings of the IV International Conference on Forest Fire Research, Luso, Coimbra, Portugal 18-23 November 2002.
* Weber (1989) Weber, R. (1989). Analytical models of fire spread due to radiation. _Combustion and Flame_, 78:398-408.
* Weber (1991a) Weber, R. (1991a). Modelling fire spread through fuel beds. _Progress in Energy Combustion Science_, 17(1):67-82.
* Weber (1991b) Weber, R. (1991b). Toward a comprehensive wildfire spread model. _International Journal of Wildland Fire_, 1(4):245-248.
* Williams (1982) Williams, F. (1982). Urban and wildland fire phenomenology. _Progress in Energy Combustion Science_, 8:317-354.
* Williams (1985) Williams, F. A. (1985). _Combustion Theory: The Fundamental Theory of Chemically Reacting Flow Systems_. Addison-Wesley Publishing Company, Massachusetts, 2nd edition, 1994 edition.
* Wolff et al. (1991) Wolff, M., Carrier, G., and Fendell, F. (1991). Wind-aided firespread across arrays of discrete fuel elements. II. Experiment. _Combustion Science and Technology_, 77:261-289.
* Wu et al. (2000) Wu, Y., Xing, H. J., and Atkinson, G. (2000). Interaction of fire plume with inclined surface. _Fire Safety Journal_, 35(4):391-403.
\\begin{table}
\\begin{tabular}{c c c c} \\hline Type & Time scale (s) & Vertical scale (m) & Horizontal scale (m) \\\\ \\hline Combustion reactions & 0.0001 - 0.01 & 0.0001 - 0.01 & 0.0001 - 0.01 \\\\ Fuel particles & - & 0.001 - 0.01 & 0.001 - 0.01 \\\\ Fuel complex & - & 1 - 20 & 1 - 100 \\\\ Flames & 0.1 - 30 & 0.1 - 10 & 0.1 - 2 \\\\ Radiation & 0.1 - 30 & 0.1 - 10 & 0.1 - 50 \\\\ Conduction & 0.01 - 10 & 0.01 - 0.1 & 0.01 - 0.1 \\\\ Convection & 1 - 100 & 0.1 - 100 & 0.1 - 10 \\\\ Turbulence & 0.1 - 1000 & 1 - 1000 & 1 - 1000 \\\\ Spotting & 1 - 100 & 1 - 3000 & 1 - 10000 \\\\ Plume & 1 - 10000 & 1 - 10000 & 1 - 100 \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Outline of the major biological, physical and chemical components and processes occurring in a wildland fire and the temporal and spatial (vertical and horizontal) scales over which they operate.
\\begin{table}
\\begin{tabular}{c c c c c} \\hline Species & Cellulose (\\%) & Hemicellulose (\\%) & Lignin (\\%) & Other (\\%) \\\\ \\hline Softwood & 41.0 & 24.0 & 27.8 & 7.2 \\\\ Hardwood & 39.0 & 35.0 & 19.5 & 6.5 \\\\ Wheat straw & 39.9 & 28.2 & 16.7 & 15.2 \\\\ Rice straw & 30.2 & 24.5 & 11.9 & 33.4 \\\\ Bagasse & 38.1 & 38.5 & 20.2 & 3.2 \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: Approximate analysis of some biomass species (Shafizadeh, 1982).
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline Model & Author & Year & Country & Dimensions & Plane \\\\ \\hline Weber & Weber & 1991 & Australia & 2 & XY \\\\ AIOLOS-F & Croba _et al._ & 1994 & Greece & 3 & - \\\\ FIRETEC & Linn & 1997 & USA & 3 & - \\\\ Forbes & Forbes & 1997 & Australia & 1 & X \\\\ Grishin & Grishin _et al._ & 1997 & Russia & 2 & XZ \\\\ IUSTI & Larini _et al._ & 1998 & France & 2 & XZ \\\\ PIF97 & Dupuy _et al._ & 1999 & France & 2 & XZ \\\\ LEMTA & Sero-Guillaume _et al._ & 2002 & France & 2(3) & XY \\\\ UoS & Asensio _et al._ & 2002 & Spain & 2 & XY \\\\ WFDS & Mell _et al._ & 2006 & USA & 3 & - \\\\ \\hline \\end{tabular}
\\end{table}
Table 3: Summary of physical models (1990-present) discussed here.
\\begin{table}
\\begin{tabular}{l c c c c c c c c c c} \\hline \\hline \\multirow{2}{*}{Model} & \\multirow{2}{*}{No.} & \\multirow{2}{*}{Domain Size} & \\multicolumn{3}{c}{Resolution (m)} & \\multirow{2}{*}{\\(\\Delta\\)t} & \\multicolumn{1}{c}{CPU} & \\multicolumn{1}{c}{Simulation} & \\multicolumn{1}{c}{Computation} & \\multicolumn{1}{c}{Comment} \\\\ & Dimensions & (x \\(\\times\\) y \\(\\times\\) z) & \\(\\Delta\\)x & \\(\\Delta\\)y & \\(\\Delta\\)z & & No. \\& Type & Time (s) & Time (s) & \\\\ \\hline _Physical_ & & & & & & & & & & \\\\ Weber & 2 & ? & ? & - & - & ? & ? & ? & ? & \\\\ AIOLOS-F & 3 & 10 \\(\\times\\) 10 \\(\\times\\) 7 km & ? & ? & ? & ? & ? & ? & ? & \\(<\\) real time \\\\ FIRETEC & 3 & 320 \\(\\times\\) 160 \\(\\times\\) 615 m & 2 & 2 & 1.5 & 0.002 s & 128 nodes & ? & ? & \\(>>\\) real time \\\\ Forbes & 2 & ? & ? & ? & - & ? & ? & ? & ? & ? \\\\ Grishin & 2 & 50 \\(\\times\\) ? \\(\\times\\) 12 m & ? & - & ? & ? & ? & ? & ? & 700 K isotherm \\\\ IUSTI & 2 & 2.2 \\(\\times\\) - \\(\\times\\) 0.9 m & 0.02 & - & 0.09 & ? & ? & ? & ? & 500 K isotherm \\\\ PIF97 & 2 & 200 \\(\\times\\) - \\(\\times\\) 50 m & 0.25 & - & 0.25 & 1 s & P4 2GHz & 200 s & 48 h & 500 K isotherm \\\\ LEMTA & 2 & ? & ? & ? & - & ? & PC? & ? & ? & \\(\\simeq\\) real time \\\\ UoS & 2 & ? & 1.875 & 1.875 & - & 0.25 \\(\\mu\\)s & ? & ? & ? & ? \\\\ WFDS & 3 & 1.5 \\(\\times\\) 1.5 \\(\\times\\) 0.2 km & 1.5 & 1.5 & 1.4 & - & 11 nodes & 100 s & 25 h & \\\\ _Quasi-physical_ & & & & & & & & & & \\\\ ADFA I & 1 & ? & ? & - & - & ? & ? & ? & ? & \\\\ TRW & 2 & ? & ? & ? & - & ? & ? & ? & ? & \\\\ Albini & 2 & ? & ? & - & ? & ? & ? & ? & ? & ? \\\\ UC & 1 & 1 \\(\\times\\) 1 m & 0.01 & 0.01 & - & 0.01 s & Sun Ultra II & 144 s & 114 s & 500 K isotherm \\\\ ADFA II & 2 & ? & ? & - & ? & ? & ? & ? & ? & \\\\ Coimbra & 2 & ? & ? & ? & ? & ? & ? & ? & ? & ? \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 5: Summary of all models.
Figure 1: Schematic of chemical structure of a portion of neighbouring cellulose chains, indicating some of the hydrogen bonds (dashed lines) that may stabilise the crystalline form of cellulose. (Source: Ball et al. (1999))
Figure 2: Schematic of the competing paths possible in the thermal degradation of cellulose substrate (S). Volatilisation into levoglucosan (V) in the absence of moisture is endothermic. Subsequent oxidisation of levoglucosan into products is exothermic. Char formation (C) occurs at a lower activation energy in the presence of moisture. This path is exothermic and forms water. Chemical and thermal feedback paths (dashed lines) can encourage either volatilisation or charring. (After di Blasi (1998); Ball et al. (1999))
Abstract: In recent years, advances in computational power and spatial data analysis (GIS, remote sensing, etc.) have led to an increase in attempts to model the spread and behaviour of wildland fires across the landscape. This series of review papers endeavours to critically and comprehensively review all types of surface fire spread models developed since 1990. This paper reviews models of a physical or quasi-physical nature. These models are based on the fundamental chemistry and/or physics of combustion and fire spread. Other papers in the series review models of an empirical or quasi-empirical nature, and mathematical analogues and simulation models. Many models are extensions or refinements of models developed before 1990. Where this is the case, these models are also discussed but much less comprehensively.
Yuri V. Lvov\\({}^{1}\\), Kurt L. Polzin\\({}^{2}\\) and Naoto Yokoyama\\({}^{3}\\)
\\({}^{1}\\) Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy NY 12180
\\({}^{2}\\) Woods Hole Oceanographic Institution, MS#21, Woods Hole, MA 02543
\\({}^{3}\\) Department of Aeronautics and Astronautics, Kyoto University, Kyoto, Kyoto 606-8501 JAPAN
## 1 Introduction
Wave-wave interactions in stratified oceanic flows have been a subject of intensive research in the last four decades. Of particular importance is the existence of a \"universal\" internal-wave spectrum, the Garrett and Munk spectrum. It is generally perceived that the existence of a universal spectrum is, at least in part and perhaps even primarily, the result of nonlinear interactions of waves with different wavenumbers. Due to the quadratic nonlinearity of the underlying primitive equations and the fact that the linear internal-wave dispersion relation can satisfy a three-wave resonance condition, waves interact in triads. Therefore the question arises: how strongly do waves within a given triad interact? What are the oceanographic consequences of this interaction?
Wave-wave interactions can be rigorously characterized by deriving a closed equation representing the slow time evolution of the wavefield's wave action spectrum. Such an equation is called a _kinetic equation_ (Zakharov et al. 1992) and significant efforts in this regard are listed in Table 8.
A kinetic equation describes, under the assumption of weak nonlinearity, the resonant spectral energy transfer on the _resonant manifold_. The resonant manifold is a set of wavevectors \\({\\bf p}\\), \\({\\bf p_{1}}\\) and \\({\\bf p_{2}}\\) that satisfy
\\[{\\bf p}={\\bf p}_{1}+{\\bf p}_{2},\\ \\ \\ \\omega_{\\bf p}=\\omega_{{\\bf p}_{1}}+ \\omega_{{\\bf p}_{2}}, \\tag{1}\\]
where the frequency \\(\\omega\\) is given by a linear dispersion relation relating wave frequency \\(\\omega\\) with wavenumber \\({\\bf p}\\).
The reduction of all possible interactions between three wavevectors to a resonant manifold is a significant simplification. _Even further_ simplification can be achieved by taking into account that, of all interactions _on_ the resonant manifold, the most important are those which involve extreme scale separations between the interacting wavevectors (McComas and Bretherton 1977). It is shown in McComas (1977) that the Garrett and Munk spectrum of internal waves is stationary with respect to one class of such interactions, called induced diffusion. Furthermore, a comprehensive inertial-range theory with constant downscale transfer of energy was obtained by patching these mechanisms together in a solution that closely mimics the empirical universal spectrum (GM) (McComas and Muller 1981a). It was therefore concluded that the Garrett and Munk spectrum constitutes an approximate stationary state of the kinetic equation.
In this paper we revisit the question of the relation between the Garrett and Munk spectrum and the resonant kinetic equation. At the heart of this paper (Section a) are numerical evaluations of the Lvov and Tabak (2004) internal wave kinetic equation demonstrating changes in spectral amplitude on time scales shorter than a wave period at high vertical wavenumber for the Garrett and Munk spectrum. This rapid temporal evolution implies that the GM spectrum is _not_ a stationary state and is contrary to the characterization of the GM spectrum as an inertial subrange. This result gave us cause to review published work concerning wave-wave interactions and compare results. The product of this work is presented in Sections 3 and 4. In particular, we concentrate on four different versions of the internal wave kinetic equation:
* a noncanonical description using Lagrangian coordinates (Olbers 1974, 1976; Muller and Olbers 1975),
* a canonical Hamiltonian description in Eulerian coordinates (Voronovich 1979),
* a dynamical derivation of a kinetic equation without use of Hamiltonian formalisms in Eulerian coordinates (Caillol and Zeitlin 2000),
* a canonical Hamiltonian description in isopycnal coordinates (Lvov and Tabak 2001, 2004).
We show in Section 3 that, without background rotation, all the listed approaches are _equivalent_ on the resonant manifold. In Section 4 we demonstrate that the two versions of the kinetic equation that consider non-zero rotation rates are again _equivalent_ on the resonant manifold. This presents us with our first paradox: if all these kinetic equations are the same on the resonant manifold and exhibit a rapid temporal evolution, then why is GM considered to be a stationary state? The resolution of this paradox, presented in Section 7, is that: (i) numerical evaluations of the McComas (1977) kinetic equation demonstrating the induced diffusion stationary states require damping in order to balance the fast temporal evolution at high vertical wavenumber, and (ii) the high wavenumber temporal evolution of the Lvov and Tabak (2004) kinetic equation is tentatively identified as being associated with the elastic scattering mechanism rather than induced diffusion.
Having clarified this, we proceed to the following observation: not only do our numerical evaluations imply that the GM spectrum is _not_ a stationary state, but the rapid evolution rates also correspond to a strongly nonlinear system. Consequently the self-consistency of the kinetic equation, which is built on an assumption of weak nonlinearity, is at risk. Moreover, the reduction of all _resonant_ wave-wave interactions exclusively to extreme scale separations is also not self-consistent.
Yet, we are not willing to give up on the kinetic equation. Our second paradox is that, in a companion paper (Lvov et al., submitted), we show how a comprehensive theory built on a scale-invariant _resonant_ kinetic equation helps to interpret the _observed variability_ of the background oceanic internal wavefield. The observed variability, in turn, is largely consistent with the induced diffusion mechanism being a stationary state!
Thus the resonant kinetic equation demonstrates promising predictive ability, and it is therefore tempting to move towards a self-consistent internal wave turbulence theory. One possible route towards such a theory is to include in the kinetic equation near-resonant interactions, defined as
\\[\\mathbf{p}=\\mathbf{p}_{1}+\\mathbf{p}_{2},\\ |\\ \\omega_{\\mathbf{p}}-\\omega_{ \\mathbf{p}_{1}}-\\omega_{\\mathbf{p}_{2}}\\ |<\\Gamma,\\]
where \\(\\Gamma\\) is the resonance width. We show in Section b that such resonant broadening leads to slower evolution rates, potentially leading to a more self consistent description of internal waves.
We conclude and list open questions in Section 8. Our numerical scheme for evaluating near-resonant interactions is discussed in Section 5. An appendix contains the interaction matrices used in this study.
## 2 Background
A kinetic equation is a closed equation for the time evolution of the wave action spectrum in a system of weakly interacting waves. It is usually derived as a central result of wave turbulence theory. The concepts of wave turbulence theory provide a fairly general framework for studying the statistical steady states in a large class of weakly interacting and weakly nonlinear many-body or many-wave systems. In its essence, classical wave turbulence theory (Zakharov et al., 1992) is a perturbation expansion in the amplitude of the nonlinearity, yielding, at the leading order, linear waves, with amplitudes slowly modulated at higher orders by resonant nonlinear interactions. This modulation leads to a redistribution of the spectral energy density among space- and time-scales.
While the route to deriving the spectral evolution equation from wave amplitude is fairly standardized (Section b), there are substantive differences in obtaining expressions for the evolution equations of wave amplitude \\(a\\). Section a describes various attempts to do so.
### Hamiltonian Structures and Field Variables
#### A canonical Hamiltonian formulation in isopycnal coordinates
Lvov and Tabak (2001, 2004) start from the primitive equations of motion written in isopycnal coordinates:
\\[\\frac{\\partial}{\\partial t}\\frac{\\partial z}{\\partial\\rho}+\
abla \\cdot\\left(\\frac{\\partial z}{\\partial\\rho}\\mathbf{u}\\right) = 0,\\] \\[\\frac{\\partial\\mathbf{u}}{\\partial t}+f\\mathbf{u}^{+}+\\mathbf{u} \\cdot\
abla\\mathbf{u}+\\frac{\
abla M}{\\rho_{0}} = 0,\\] \\[\\frac{\\partial M}{\\partial\\rho}-gz = 0. \\tag{2}\\]
representing mass conservation, horizontal momentum conservation under the Boussinesq approximation and hydrostatic balance. The velocity \(\mathbf{u}\) is then represented as:
\[\mathbf{u}=\nabla\phi+\nabla^{\perp}\psi,\]
with \(\nabla^{\perp}=(-\partial/\partial y,\partial/\partial x)\), and a normalized differential layer thickness is introduced:
\\[\\Pi=\\rho/g\\partial^{2}M/\\partial\\rho^{2}=\\rho\\partial z/\\partial\\rho \\tag{3}\\]
Since both potential vorticity and density are conserved along particle trajectories, an initial profile of potential vorticity that is uniform on each isopycnal remains uniform under the flow. This assumption of uniform potential vorticity on isopycnals allows the rotational part of the velocity, \(\psi\), to be expressed in terms of \(\Pi\), so that the dynamics is described by the pair of variables \(\Pi\) and \(\phi\) alone.
Switching to Fourier space, and introducing a complex field variable \\(a_{\\bf p}\\) through the transformation
\\[\\phi_{\\bf p} = \\frac{iN\\sqrt{\\omega_{\\bf p}}}{\\sqrt{2g}|{\\bf k}|}\\left(a_{\\bf p}-a_ {-{\\bf p}}^{*}\\right)\\,,\\] \\[\\Pi_{\\bf p} = \\Pi_{0}-\\frac{N\\;\\Pi_{0}\\,|{\\bf k}|}{\\sqrt{2\\,g\\omega_{\\bf p}}} \\left(a_{\\bf p}+a_{-{\\bf p}}^{*}\\right)\\,, \\tag{8}\\]
where the frequency \\(\\omega\\) satisfies the linear dispersion relation
\\[\\omega_{\\bf p}=\\sqrt{f^{2}+\\frac{g^{2}}{\\rho_{0}^{2}N^{2}}\\frac{|{\\bf k}|^{2}} {m^{2}}}\\;\\;, \\tag{9}\\]
the equations of motion (2) adopt the canonical form
\\[i\\frac{\\partial}{\\partial t}a_{\\bf p}=\\frac{\\delta{\\cal H}}{\\delta a_{\\bf p}^{ *}}\\;, \\tag{10}\\]
with the Hamiltonian
\\[{\\cal H} =\\int d{\\bf p}\\,\\omega_{\\bf p}|a_{\\bf p}|^{2}\\] \\[+\\int d{\\bf p}_{012}\\left(\\delta_{{\\bf p}+{\\bf p}_{1}+{\\bf p}_{2} }(U_{{\\bf p},{\\bf p}_{1},{\\bf p}_{2}}a_{\\bf p}^{*}a_{{\\bf p}_{1}}^{*}a_{{\\bf p }_{2}}^{*}+{\\rm c.c.})+\\delta_{-{\\bf p}+{\\bf p}_{1}+{\\bf p}_{2}}(V_{{\\bf p}_{1},{\\bf p}_{2}}^{\\bf p}a_{\\bf p}^{*}a_{{\\bf p}_{1}}a_{{\\bf p}_{2}}+{\\rm c.c.})\\;.\\right)\\]
Eq. (10) is Hamilton's equation and (11) is the standard form of the Hamiltonian of a system dominated by three-wave interactions (Zakharov et al. 1992). Calculation of the interaction coefficients \(U\) and \(V\) is a tedious but straightforward task, completed in Lvov and Tabak (2001, 2004).
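As a check on conventions, note that if the cubic terms in (11) are dropped, (10) reduces to the linear problem:
\[i\frac{\partial}{\partial t}a_{\bf p}=\frac{\delta}{\delta a_{\bf p}^{*}}\int d{\bf p}^{\prime}\,\omega_{{\bf p}^{\prime}}|a_{{\bf p}^{\prime}}|^{2}=\omega_{\bf p}a_{\bf p},\qquad a_{\bf p}(t)=a_{\bf p}(0)\,e^{-i\omega_{\bf p}t},\]
i.e. free waves obeying the dispersion relation (9). The cubic terms proportional to \(U\) and \(V\) then generate the slow modulation of these amplitudes whose statistical description is the kinetic equation.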
We emphasize that (10) is, with simply a Fourier decomposition and the assumption of uniform potential vorticity on an isopycnal, _precisely equivalent_ to the fully nonlinear equations of motion (2) in isopycnal coordinates. All other formulations of an internal wave kinetic equation depend upon a linearization prior to the derivation of the kinetic equation via an assumption of weak nonlinearity.
The difficulty is that, in order to utilize Hamilton's equation (10), the Hamiltonian (7) must _first_ be constructed as a function of the generalized coordinates and momenta (\(\Pi\) and \(\phi\) here). It is not always possible to do so _directly_, in which case one must set up the associated Lagrangian (\(\mathcal{L}\) below) and then calculate the generalized coordinates and momenta.
#### Hamiltonian formalism in Clebsch variables (Voronovich 1979)
Voronovich starts from the non-rotating equations in Eulerian coordinates:
\\[\\frac{\\partial\\mathbf{u}}{\\partial t}+\\mathbf{u}\\cdot\
abla \\mathbf{u} = \\frac{-1}{\\rho}\
abla p-g\\hat{z}\\] \\[\
abla\\cdot\\mathbf{u}=0\\] \\[\\frac{\\partial\\rho}{\\partial t}+\\mathbf{u}\\cdot\
abla\\rho = 0\\;. \\tag{12}\\]
The Hamiltonian of the system is
\\[\\mathcal{H}=\\int\\left((\\rho_{0}+\\rho)\\frac{\\mathbf{v}^{2}}{2}+\\Pi(\\rho_{o}+ \\rho)-\\Pi(\\rho_{o})+\\rho gz\\right)d\\mathbf{r}, \\tag{13}\\]
where \\(\\rho_{0}(z)\\) is the equilibrium density profile, \\(\\rho\\) is the wave perturbation and \\(\\Pi\\) is a potential energy density function:
\\[\\Pi(\\rho_{o}+\\rho)-\\Pi(\\rho_{o})+\\rho gz=g\\int_{\\eta(\\rho_{o}+\\rho)}^{\\eta(\\rho_ {o})}[\\rho_{0}+\\rho-\\rho_{0}(\\xi)]d\\xi \\tag{14}\\]
with \\(\\eta(\\xi)\\) being the inverse of \\(\\rho_{o}(z)\\). The intent is to use \\(\\rho\\) and Lagrange multiplier \\(\\lambda\\) as the canonically conjugated Hamiltonian pair:
\\[\\dot{\\lambda}=\\frac{\\delta\\mathcal{H}}{\\delta\\rho} = -(\\mathbf{v}\\cdot\\nabla)\\lambda+g(z-\\eta(\\rho_{o}+\\rho)), \\tag{15}\\] \\[\\dot{\\rho}=-\\frac{\\delta\\mathcal{H}}{\\delta\\lambda} = -(\\mathbf{v}\\cdot\\nabla)(\\rho_{o}+\\rho),\\]
with \\(z-\\eta(\\rho_{o}+\\rho)\\) being the vertical displacement of a fluid parcel and the second equation representing continuity. The issue is to express the velocity \\(\\mathbf{v}\\) as a function of \\(\\lambda\\) and \\(\\rho\\); to this end one introduces yet another function \\(\\Phi\\) with the property
\\[\\frac{\\delta\\mathcal{H}}{\\delta\\Phi}=0 \\tag{17}\\]
and a constraint. That constraint is provided by:
\\[\\nabla\\cdot\\mathbf{v}=-\\frac{\\delta\\mathcal{H}}{\\delta\\Phi}=0\\;. \\tag{18}\\]
Voronovich (1979) then identifies the functional relationship:
\\[\\mathbf{v}=\\frac{1}{\\rho_{0}+\\rho}\\left(\\nabla\\Phi+\\lambda\\nabla(\\rho_{0}+\\rho)\\right)\\cong\\frac{1}{\\overline{\\rho}}\\left(\\nabla\\Phi+\\lambda\\nabla(\\rho_{0}+\\rho)\\right), \\tag{19}\\]
with the right-hand side representing the Boussinesq approximation. The only thing stopping progress at this point is the explicit appearance of \\(\\xi\\) in (14); to eliminate this explicit dependence, a Taylor series in the density perturbation \\(\\rho\\) relative to \\(\\rho_{0}\\) is used to express the potential energy in terms of \\(\\rho\\) and \\(\\lambda\\). The resulting Hamiltonian \\({\\cal H}\\) is
\\[{\\cal H}=\\int\\left[\\frac{v^{2}}{2}+\\Pi(\\rho_{o}+\\rho)-\\Pi(\\rho_{o})+\\rho gz\\right]d{\\bf r}\\cong\\frac{1}{2}\\int\\left[\\lambda\\nabla(\\rho_{o}+\\rho)\\cdot\\left(\\nabla\\Phi+\\lambda\\nabla(\\rho_{o}+\\rho)\\right)-\\frac{g}{\\rho_{o}^{\\prime}}\\rho^{2}+\\frac{g\\rho_{o}^{\\prime\\prime}}{\\rho_{o}^{\\prime 3}}\\frac{\\rho^{3}}{3}\\right]d{\\bf r} \\tag{20}\\]
with primes indicating \\(\\partial/\\partial z\\).
The only approximations that have been made to obtain (20) are the Boussinesq approximation in the nonrotating limit, the specification that the velocity be represented as (19), and a Taylor series expansion. The Taylor series expansion is used to express the Hamiltonian in terms of the canonically conjugated variables \\(\\rho\\) and \\(\\lambda\\). Truncation of this Taylor series is the essence of the slowly varying (WKB) approximation, namely that the vertical scale of the internal wave is smaller than the vertical scale of the background stratification, which requires, for consistency's sake, the hydrostatic approximation.
The procedure of introducing additional functionals (\\(\\Phi\\)) and constraints (18) originates in Clebsch (1859). See Seliger and Whitham (1968) for a discussion of Clebsch variables, and also Section 7.1 of the textbook Miropolsky (1981). Finally, the evolution equation for the wave amplitude \\(a_{k}\\) is produced by expressing the cubic terms in the Hamiltonian with solutions to the linear problem represented by the quadratic components of the Hamiltonian. This is an explicit linearization of the problem prior to the formulation of the kinetic equation.
3) Olbers, McComas and Meiss
Derivations presented in Olbers (1974), McComas (1975), and Meiss et al. (1979) are based upon the Lagrangian equations of motion:
\\[\\ddot{x}-f\\dot{y} = \\frac{-1}{\\rho}p_{x}\\] \\[\\ddot{y}+f\\dot{x} = \\frac{-1}{\\rho}p_{y}\\] \\[\\ddot{z}+g = \\frac{-1}{\\rho}p_{z}\\] \\[\\partial(x_{1},x_{2},x_{3})/\\partial(r_{1},r_{2},r_{3})=1 \\tag{21}\\]
expressing momentum conservation and incompressibility. Here \\({\\bf r}\\) is the initial position of a fluid parcel at \\({\\bf x}\\): these are Lagrangian coordinates. In the context of Hamiltonian mechanics, the associated Lagrangian density is:
\\[{\\cal L}=\\frac{1}{2}\\rho\\left(\\dot{x}_{i}\\dot{x}_{i}+\\epsilon_{jkl}f_{j}\\dot{x}_{k}x_{l}\\right)-\\rho g\\delta_{j3}x_{j}.\\]
Expressed in terms of the parcel displacement \\(\\xi_{i}\\) from its rest position, the Boussinesq Lagrangian density \\({\\cal L}\\) for slow variations in background density \\(\\rho\\) is:
\\[{\\cal L}=\\frac{1}{2}\\left[\\dot{\\xi}_{i}^{2}+\\epsilon_{jkl}f_{j}\\dot{\\xi}_{k}\\xi_{l}-N^{2}\\xi_{3}^{2}+\\pi\\left(\\frac{\\partial\\xi_{i}}{\\partial x_{i}}+\\Delta_{ii}+\\Delta\\right)\\right] \\tag{22}\\]
with \\(\\frac{\\partial\\xi_{i}}{\\partial x_{i}}+\\Delta_{ii}+\\Delta\\) representing the continuity equation, where \\(\\Delta=\\det(\\partial\\xi_{i}/\\partial x_{j})\\) and \\(\\pi\\) is the pressure acting as a Lagrange multiplier.
This Lagrangian is then projected onto a single wave amplitude variable \\(a\\) using the linear internal wave consistency relations\\({}^{1}\\) based upon plane wave solutions [e.g. Muller (1976), (2.26)], and a perturbation expansion in wave amplitude is proposed. This process has two consequences: the use of the internal wave consistency relations places a condition of zero perturbation potential vorticity upon the result, and the expansion places a small amplitude approximation upon the result with an ill-defined domain of validity relative to the (later) assertion of weak interactions.
Footnote 1: Wave amplitude \\(a\\) is defined so that \\(a^{*}a\\) is proportional to wave energy.
The evolution equation for wave amplitude is Lagrange's equation:
\\[\\frac{d}{dt}\\frac{\\partial{\\cal L}}{\\partial\\dot{a}_{0}}-\\frac{\\partial{\\cal L }}{\\partial a_{0}}=0 \\tag{23}\\]
in which \\(a_{0}\\) is the zeroth order wave amplitude. After a series of approximations, this equation is cast into a field variable equation similar to (10). We emphasize that to get there a small-displacement approximation for fluid parcels was used, together with a built-in assumption of resonant interactions between internal wave modes. The Lvov and Tabak (2001, 2004) approach is free from such limitations.
4) Caillol and Zeitlin
A non-Hamiltonian kinetic equation for internal waves was derived in Caillol and Zeitlin (2000) [their Eq. (61)] directly from the dynamical equations of motion, without the use of the Hamiltonian structure. Caillol and Zeitlin (2000) invoke the Craya-Herring decomposition for non-rotating flows, which enforces a condition of zero perturbation vorticity on the result.
5) Kenyon and Hasselmann
The first kinetic equations for wave-wave interactions in a continuously stratified ocean appear in Kenyon (1966), Hasselmann (1966) and Kenyon (1968). Kenyon (1968) states (without detail) that Kenyon (1966) and Hasselmann (1966) give numerically similar results. We have found that Kenyon (1966) differs from the four approaches examined below on one of the resonant manifolds, but have not pursued the question further. It is possible this difference results from a typographical error in Kenyon (1966). We have not rederived this non-Hamiltonian representation and thus exclude it from this study.
6) Pelinovsky and Raevsky
An important paper on internal waves is Pelinovsky and Raevsky (1977). Clebsch variables are used to obtain the interaction matrix elements for both constant stratification rates, \\(N={\\rm const.}\\), and arbitrary buoyancy profiles, \\(N=N(z)\\), in a Lagrangian coordinate representation. Not many details are given, but there are some similarities in appearance with the Eulerian coordinate representation of Voronovich (1979). The most significant result is the identification of a scale invariant (non-rotating, hydrostatic) stationary state, which we refer to as the Pelinovsky-Raevsky spectrum in the companion paper (Lvov et al. 2010). It is stated in Pelinovsky and Raevsky (1977) that their matrix elements are equivalent to those derived in their citation [11], which is Brehovski (1975). Because Brehovski (1975) and Pelinovsky and Raevsky (1977) are in Russian and not generally available, we refrain from including them in this comparison.
7) Milder
An alternative Hamiltonian description was developed in Milder (1982), in isopycnal coordinates without assuming a hydrostatic balance. The resulting Hamiltonian is an iterative expansion in powers of a small parameter, similar to the case of surface gravity waves. In principle, that approach may also be used to calculate wave-wave interaction amplitudes. Since those calculations were not done in Milder (1982), we do not pursue the comparison further.
### Weak Turbulence
Here we derive the kinetic equation following Zakharov et al. (1992). We introduce wave action as
\\[n_{\\mathbf{p}}=\\langle a_{\\mathbf{p}}^{*}a_{\\mathbf{p}}\\rangle, \\tag{24}\\]
where \\(\\langle\\dots\\rangle\\) means averaging over a statistical ensemble of many realizations of the internal waves. To derive the time evolution of \\(n_{\\mathbf{p}}\\) we multiply the amplitude equation (10) with Hamiltonian (11) by \\(a_{\\mathbf{p}}^{*}\\), multiply the amplitude evolution equation of \\(a_{\\mathbf{p}}^{*}\\) by \\(a_{\\mathbf{p}}\\), subtract the two equations, and average \\(\\langle\\dots\\rangle\\) the result. We get
\\[\\frac{\\partial n_{\\bf p}}{\\partial t} = \\Im\\int\\left(V^{\\bf p}_{{\\bf p}_{1}{\\bf p}_{2}}J^{\\bf p}_{{\\bf p}_{1}{\\bf p}_{2}}\\delta({\\bf p}-{\\bf p}_{1}-{\\bf p}_{2})-V^{{\\bf p}_{2}}_{{\\bf p}{\\bf p}_{1}}J^{{\\bf p}_{2}}_{{\\bf p}{\\bf p}_{1}}\\delta({\\bf p}_{2}-{\\bf p}-{\\bf p}_{1})-V^{{\\bf p}_{1}}_{{\\bf p}{\\bf p}_{2}}J^{{\\bf p}_{1}}_{{\\bf p}{\\bf p}_{2}}\\delta({\\bf p}_{1}-{\\bf p}-{\\bf p}_{2})\\right)d{\\bf p}_{1}d{\\bf p}_{2}, \\tag{25}\\]
where we introduced a triple correlation function
\\[J^{\\bf p}_{{\\bf p}_{1}{\\bf p}_{2}}\\,\\delta({\\bf p}-{\\bf p}_{1}-{\\bf p}_{2})\\equiv\\langle a^{*}_{{\\bf p}}a_{{\\bf p}_{1}}a_{{\\bf p}_{2}}\\rangle. \\tag{26}\\]
If we were to have non-interacting fields, i.e. fields with \\(V^{\\bf p}_{{\\bf p}_{1}{\\bf p}_{2}}\\) equal to zero, this triple correlation function would be zero. We then use a perturbation expansion in the smallness of the interactions to calculate the triple correlation at first order. The first order expression for \\(\\partial n_{\\bf p}/\\partial t\\) therefore requires computing \\(\\partial J^{\\bf p}_{{\\bf p}_{1}{\\bf p}_{2}}/\\partial t\\) to first order. To do so we take the definition (26), use (10) with Hamiltonian (11) and apply the \\(\\langle\\dots\\rangle\\) averaging. We get
\\[\\left(i\\frac{\\partial}{\\partial t}+(\\omega_{{\\bf p}_{1}}-\\omega_{{\\bf p}_{2}}-\\omega_{{\\bf p}_{3}})\\right)J^{{\\bf p}_{1}}_{{\\bf p}_{2}{\\bf p}_{3}}=\\int\\left[-\\frac{1}{2}(V^{{\\bf p}_{1}}_{{\\bf p}_{4}{\\bf p}_{5}})^{*}J^{{\\bf p}_{4}{\\bf p}_{5}}_{{\\bf p}_{2}{\\bf p}_{3}}\\delta({\\bf p}_{1}-{\\bf p}_{4}-{\\bf p}_{5})+(V^{{\\bf p}_{4}}_{{\\bf p}_{2}{\\bf p}_{5}})^{*}J^{{\\bf p}_{1}{\\bf p}_{5}}_{{\\bf p}_{3}{\\bf p}_{4}}\\delta({\\bf p}_{4}-{\\bf p}_{2}-{\\bf p}_{5})+V^{{\\bf p}_{4}}_{{\\bf p}_{3}{\\bf p}_{5}}J^{{\\bf p}_{1}{\\bf p}_{5}}_{{\\bf p}_{2}{\\bf p}_{4}}\\delta({\\bf p}_{4}-{\\bf p}_{3}-{\\bf p}_{5})\\right]d{\\bf p}_{4}d{\\bf p}_{5}. \\tag{27}\\]
Here we introduced the quadruple correlation function
\\[J^{{\\bf p}_{1}{\\bf p}_{2}}_{{\\bf p}_{3}{\\bf p}_{4}}\\delta({\\bf p}_{1}+{\\bf p}_{2}-{\\bf p}_{3}-{\\bf p}_{4})\\equiv\\langle a^{*}_{{\\bf p}_{1}}a^{*}_{{\\bf p}_{2}}a_{{\\bf p}_{3}}a_{{\\bf p}_{4}}\\rangle. \\tag{28}\\]
The next step is to assume Gaussian statistics, and to express \\(J^{\\mathbf{p}_{1}\\mathbf{p}_{2}}_{\\mathbf{p}_{3}\\mathbf{p}_{4}}\\) as a product of two two-point correlators:
\\[J^{\\mathbf{p}_{1}\\mathbf{p}_{2}}_{\\mathbf{p}_{3}\\mathbf{p}_{4}}=n_{\\mathbf{p}_{ 1}}n_{\\mathbf{p}_{2}}\\Big{[}\\delta(\\mathbf{p}_{1}-\\mathbf{p}_{3})\\delta( \\mathbf{p}_{2}-\\mathbf{p}_{4})+\\delta(\\mathbf{p}_{1}-\\mathbf{p}_{4})\\delta( \\mathbf{p}_{2}-\\mathbf{p}_{3})\\Big{]}.\\]
Then
\\[\\left[i\\frac{\\partial}{\\partial t}+\\left(\\omega_{\\mathbf{p}_{1}}-\\omega_{ \\mathbf{p}_{2}}-\\omega_{\\mathbf{p}_{3}}\\right)\\right]J^{\\mathbf{p}_{1}}_{ \\mathbf{p}_{2}\\mathbf{p}_{3}}=\\left(V^{\\mathbf{p}_{1}}_{\\mathbf{p}_{2}\\mathbf{p }_{3}}\\right)^{*}\\left(n_{1}n_{3}+n_{1}n_{2}-n_{2}n_{3}\\right). \\tag{29}\\]
Time integration of the equation for \\(J^{\\mathbf{p}_{1}}_{\\mathbf{p}_{2}\\mathbf{p}_{3}}\\) will contain fast oscillations due to the initial value of \\(J^{\\mathbf{p}_{1}}_{\\mathbf{p}_{2}\\mathbf{p}_{3}}\\) and slow evolution due to the nonlinear wave interactions. The contribution from the fast oscillations rapidly decreases with time, so neglecting it we get
\\[J^{\\mathbf{p}_{1}}_{\\mathbf{p}_{2}\\mathbf{p}_{3}}=\\frac{\\left(V^{\\mathbf{p}_{1 }}_{\\mathbf{p}_{2}\\mathbf{p}_{3}}\\right)^{*}\\left(n_{1}n_{3}+n_{1}n_{2}-n_{2}n _{3}\\right)}{\\omega_{\\mathbf{p}_{1}}-\\omega_{\\mathbf{p}_{2}}-\\omega_{\\mathbf{p }_{3}}+i\\Gamma_{\\mathbf{p}_{1}\\mathbf{p}_{2}\\mathbf{p}_{3}}}. \\tag{30}\\]
Here we introduced the nonlinear damping of the waves, \\(\\Gamma_{\\mathbf{p}_{1}\\mathbf{p}_{2}\\mathbf{p}_{3}}\\), on which we elaborate in Section 5a. We now substitute (30) into (25), assume for now that the damping of the waves is small, and use
\\[\\lim_{\\Gamma\\to 0}\\Im\\left[\\frac{1}{\\Delta+i\\Gamma}\\right]=-\\pi\\delta(\\Delta). \\tag{31}\\]
We then obtain the three-wave kinetic equation (Zakharov et al., 1992; Lvov and Nazarenko, 2004; Lvov et al., 1997):
\\[\\frac{dn_{\\mathbf{p}}}{dt}=4\\pi\\int|V_{\\mathbf{p}_{1},\\mathbf{p}_{2 }}^{\\mathbf{p}}|^{2}\\,f_{p12}\\,\\delta_{\\mathbf{p}-\\mathbf{p}_{1}-\\mathbf{p}_{ 2}}\\,\\delta(\\omega_{\\mathbf{p}}-\\omega_{\\mathbf{p}_{1}}-\\omega_{\\mathbf{p}_{2}} )d\\mathbf{p}_{12}\\] \\[-4\\pi\\int\\,|V_{\\mathbf{p}_{2},\\mathbf{p}}^{\\mathbf{p}_{1}}|^{2} \\,f_{12p}\\,\\delta_{\\mathbf{p}_{1}-\\mathbf{p}_{2}-\\mathbf{p}}\\,\\delta(\\omega_{ \\mathbf{p}_{1}}-\\omega_{\\mathbf{p}_{2}}-\\omega_{\\mathbf{p}})\\,d\\mathbf{p}_{12}\\] \\[-4\\pi\\int\\,|V_{\\mathbf{p},\\mathbf{p}_{1}}^{\\mathbf{p}_{2}}|^{2} \\,f_{2p1}\\,\\delta_{\\mathbf{p}_{2}-\\mathbf{p}-\\mathbf{p}_{1}}\\,\\delta(\\omega_{ \\mathbf{p}_{2}}-\\omega_{\\mathbf{p}}-\\omega_{\\mathbf{p}_{1}})\\,d\\mathbf{p}_{12}\\,,\\] \\[\\mathrm{with}\\,\\,\\,f_{p12}=n_{\\mathbf{p}_{1}}n_{\\mathbf{p}_{2}}- n_{\\mathbf{p}}(n_{\\mathbf{p}_{1}}+n_{\\mathbf{p}_{2}})\\,. \\tag{32}\\]
Here \\(n_{\\mathbf{p}}=n(\\mathbf{p})\\) is a three-dimensional wave action spectrum (spectral energy density divided by frequency) and the interacting wavevectors \\(\\mathbf{p}\\), \\(\\mathbf{p}_{1}\\) and \\(\\mathbf{p}_{2}\\) are given by
\\[\\mathbf{p}=(\\mathbf{k},m),\\]
i.e. \\(\\mathbf{k}\\) is the horizontal part of \\(\\mathbf{p}\\) and \\(m\\) is its vertical component. We assume the wavevectors are signed variables and the wave frequencies \\(\\omega_{\\mathbf{p}}\\) are restricted to be positive. The magnitude of the wave-wave interactions, \\(V_{\\mathbf{p},\\mathbf{p}_{1}}^{\\mathbf{p}_{2}}\\), is a matrix representation of the coupling between triad members. It serves as the multiplier of the nonlinear convolution term in what is now commonly called the Zakharov equation, an equation in Fourier space for the wave field variable; it is likewise the multiplier of the cubic convolution term in the three-wave Hamiltonian.
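As a simple consistency check of (32), the sketch below verifies that the thermodynamic Rayleigh-Jeans spectrum \\(n_{\\mathbf{p}}=T/\\omega_{\\mathbf{p}}\\) annihilates the collision integrand on the resonant manifold; \\(T\\) is an arbitrary constant, and the frequencies are arbitrary values satisfying the resonance condition.

```python
import numpy as np

# f_{p12} of (32) vanishes for n = T/omega on omega = omega1 + omega2.
T = 2.7                      # arbitrary "temperature"
w1, w2 = 0.4, 1.3
w = w1 + w2                  # frequency resonance
n = lambda om: T / om        # Rayleigh-Jeans wave action
f_p12 = n(w1) * n(w2) - n(w) * (n(w1) + n(w2))
print(f_p12)                 # ~0 (machine precision)
```

Indeed, \\(f_{p12}=T^{2}(\\omega_{\\mathbf{p}}-\\omega_{\\mathbf{p}_{1}}-\\omega_{\\mathbf{p}_{2}})/(\\omega_{\\mathbf{p}}\\omega_{\\mathbf{p}_{1}}\\omega_{\\mathbf{p}_{2}})\\), which vanishes on the resonant manifold.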
We re-iterate that typical assumptions needed for the derivation of kinetic equations are:
* Weak nonlinearity,
* Gaussian statistics of the interacting wave field in wavenumber space and
* Resonant wave-wave interactions.

We note that the derivation given here is schematic. A more systematic derivation can be obtained using only an assumption of weak nonlinearity.
### The Boltzmann Rate
The kinetic equation allows us to numerically estimate the lifetime of any given spectrum. In particular, we can define a wavenumber dependent nonlinear time scale proportional to the inverse Boltzmann rate:
\\[\\tau_{\\mathbf{p}}^{\\mathrm{NL}}=\\frac{n_{\\mathbf{p}}}{\\dot{n}_{\\mathbf{p}}}\\;. \\tag{33}\\]
This time scale characterizes the net rate at which the spectrum changes and can be directly calculated from the kinetic equation.
One can also define the characteristic linear time scale, equal to a wave period
\\[\\tau_{\\mathbf{p}}^{\\mathrm{L}}=2\\pi/\\omega_{\\mathbf{p}}.\\]
The non-dimensional ratio of these time scales characterizes the level of nonlinearity in the system:
\\[\\epsilon_{\\mathbf{p}}=\\frac{\\tau_{\\mathbf{p}}^{\\mathrm{L}}}{\\tau_{\\mathbf{p}}^ {\\mathrm{NL}}}=\\frac{2\\pi\\dot{n}_{\\mathbf{p}}}{n_{\\mathbf{p}}\\omega_{\\mathbf{p}}} \\tag{34}\\]
We refer to (34) as a normalized Boltzmann rate.
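In code, the diagnostic (33)-(34) is a one-liner; the sketch below is an illustration with dummy numbers, not output of our integration of (32).

```python
import numpy as np

def normalized_boltzmann_rate(n_dot, n, omega):
    """Eq. (34): eps_p = tau_L / tau_NL = 2*pi*(dn_p/dt)/(n_p*omega_p)."""
    return 2 * np.pi * n_dot / (n * omega)

# Dummy values for illustration only:
print(normalized_boltzmann_rate(n_dot=1e-6, n=1e-2, omega=1e-3))  # ~0.63
```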
The normalized Boltzmann rate serves as a low order consistency check for the various kinetic equation derivations. An \\(O(1)\\) value of \\(\\epsilon_{\\mathbf{p}}\\) implies that the derivation of the kinetic equation is internally inconsistent. The Boltzmann rate represents the _net_ rate of transfer for wavenumber \\(\\mathbf{p}\\). The individual rates of transfer into and out of \\(\\mathbf{p}\\) (called Langevin rates) are typically greater than the Boltzmann rate (Muller et al., 1986; Pomphrey et al., 1980). This is particularly true in the Induced Diffusion regime (defined below in Section 3), in which the rates of transfer into and out of \\({\\bf p}\\) are one to three orders of magnitude larger than their residual, so that the Boltzmann rates we calculate are not appropriate for spectral spikes or, potentially, for smooth, homogeneous but anisotropic spectra (Muller et al., 1986). Estimates of the individual rates of transfer into and out of \\({\\bf p}\\) can be addressed through Langevin methods (Pomphrey et al., 1980). We focus here simply on the Boltzmann rate to demonstrate inconsistencies with the assumption of a slow time evolution. Estimates of the Boltzmann rate and \\(\\epsilon_{\\bf p}\\) require integration of (32); in this manuscript such integration is performed numerically.
## 3 Resonant wave-wave interactions - nonrotational limit
How can one compare a function of two vectors \\({\\bf p}_{1}\\) and \\({\\bf p}_{2}\\) and their sum or difference? First one realizes that, out of the six components of \\({\\bf p}_{1}\\) and \\({\\bf p}_{2}\\), only the relative angles between the wavevectors enter the expressions for the matrix elements. That is because the matrix elements depend on the inner and outer products of wavevectors. The overall horizontal orientation of the wavevectors does not matter: relative angles can be determined from a triangle inequality and the magnitudes of the horizontal wavevectors \\({\\bf k}\\), \\({\\bf k}_{1}\\) and \\({\\bf k}_{2}\\). Thus the only needed components are \\(|{\\bf k}|\\), \\(|{\\bf k}_{1}|\\), \\(|{\\bf k}_{2}|\\), \\(m\\) and \\(m_{1}\\) (\\(m_{2}\\) is computed from \\(m\\) and \\(m_{1}\\)). Further note that in the \\(f=0\\) and hydrostatic limit, all matrix elements become scale invariant functions. It is therefore sufficient to choose arbitrary scalar values for \\(|{\\bf k}|\\) and \\(m\\), since only \\(|{\\bf k}_{1}|/|{\\bf k}|\\), \\(|{\\bf k}_{2}|/|{\\bf k}|\\) and \\(m_{1}/m\\) enter the expressions for the matrix elements. We make the particular (arbitrary) choice \\(|{\\bf k}|=m=1\\) for the purpose of numerical evaluation, and thus the only independent variables to consider are \\(|\\mathbf{k}_{1}|\\), \\(|\\mathbf{k}_{2}|\\) and \\(m_{1}\\). Finally, \\(m_{1}\\) is determined from the resonance conditions, as explained in the next subsection. As a result, we are left with a matrix element as a function of only two parameters, \\(k_{1}\\) and \\(k_{2}\\). This allows us to easily compare the values of matrix elements on the resonant manifold by plotting them as a function of the two parameters.
### Reduction to the Resonant Manifold
When confined to the traditional form of the kinetic equation, wave-wave interactions are constrained to the resonant manifolds defined by
\\[a)\\ \\begin{cases}\\mathbf{p}=\\mathbf{p}_{1}+\\mathbf{p}_{2}\\\\ \\omega=\\omega_{1}+\\omega_{2}\\end{cases}\\ \\begin{cases}\\mathbf{p}_{1}=\\mathbf{p}_{2}+ \\mathbf{p}\\\\ \\omega_{1}=\\omega_{2}+\\omega\\end{cases}\\ \\begin{cases}\\mathbf{p}_{2}=\\mathbf{p}+ \\mathbf{p}_{1}\\\\ \\omega_{2}=\\omega+\\omega_{1}\\end{cases}. \\tag{35}\\]
To compare matrix elements on the resonant manifold we are going to use the above resonant conditions and the internal-wave dispersion relation (51). To determine vertical components \\(m_{1}\\) and \\(m_{2}\\) of the interacting wavevectors, one has to solve the resulting quadratic equations. Without restricting generality we choose \\(m>0\\). There are two solutions for \\(m_{1}\\) and \\(m_{2}\\) given below for each of the three resonance types described above.
Resonances of type (35a) give
\\[\\begin{cases}m_{1}=\\frac{m}{2|\\mathbf{k}|}\\left(|\\mathbf{k}|+|\\mathbf{k}_{1}|+|\\mathbf{k}_{2}|+\\sqrt{(|\\mathbf{k}|+|\\mathbf{k}_{1}|+|\\mathbf{k}_{2}|)^{2}-4|\\mathbf{k}||\\mathbf{k}_{1}|}\\right)\\\\ m_{2}=m-m_{1},\\end{cases} \\tag{36a}\\]
\\[\\begin{cases}m_{1}=\\frac{m}{2|\\mathbf{k}|}\\left(|\\mathbf{k}|-|\\mathbf{k}_{1}|-|\\mathbf{k}_{2}|-\\sqrt{(|\\mathbf{k}|-|\\mathbf{k}_{1}|-|\\mathbf{k}_{2}|)^{2}+4|\\mathbf{k}||\\mathbf{k}_{1}|}\\right)\\\\ m_{2}=m-m_{1}.\\end{cases} \\tag{36b}\\]
Note that because of the symmetry, (36a) translates to (36b) if wavenumbers \\(1\\) and \\(2\\) are exchanged.
Resonances of type (35b) give
\\[\\begin{cases}m_{2}=-\\frac{m}{2|\\mathbf{k}|}\\left(|\\mathbf{k}|-|\\mathbf{k}_{1}|-| \\mathbf{k}_{2}|+\\sqrt{(|\\mathbf{k}|-|\\mathbf{k}_{1}|-|\\mathbf{k}_{2}|)^{2}+4| \\mathbf{k}||\\mathbf{k}_{2}|}\\right)\\\\ m_{1}=m+m_{2}.\\end{cases}, \\tag{37a}\\] \\[\\begin{cases}m_{2}=-\\frac{m}{2|\\mathbf{k}|}\\left(|\\mathbf{k}|+|\\mathbf{k}_{1}|-| \\mathbf{k}_{2}|+\\sqrt{(|\\mathbf{k}|+|\\mathbf{k}_{1}|-|\\mathbf{k}_{2}|)^{2}+4| \\mathbf{k}||\\mathbf{k}_{2}|}\\right)\\\\ m_{1}=m+m_{2}.\\end{cases}, \\tag{37b}\\]
Resonances of type (35c) give
\\[\\begin{cases}m_{1}=-\\frac{m}{2|\\mathbf{k}|}\\left(|\\mathbf{k}|-|\\mathbf{k}_{1}| -|\\mathbf{k}_{2}|+\\sqrt{(|\\mathbf{k}|-|\\mathbf{k}_{1}|-|\\mathbf{k}_{2}|)^{2}+ 4|\\mathbf{k}||\\mathbf{k}_{1}|}\\right)\\\\ m_{2}=m+m_{1}.\\end{cases}, \\tag{38a}\\] \\[\\begin{cases}m_{1}=-\\frac{m}{2|\\mathbf{k}|}\\left(|\\mathbf{k}|-|\\mathbf{k}_{1}|+| \\mathbf{k}_{2}|+\\sqrt{(|\\mathbf{k}|-|\\mathbf{k}_{1}|+|\\mathbf{k}_{2}|)^{2}+4| \\mathbf{k}||\\mathbf{k}_{1}|}\\right)\\\\ m_{2}=m+m_{1}.\\end{cases}. \\tag{38b}\\]
Because of the symmetries of the problem, (37a) is equivalent to (38a), and (37b) is equivalent to (38b) if wavenumbers \\(1\\) and \\(2\\) are exchanged.
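The solutions above are easily checked numerically. The sketch below evaluates (36b) with the arbitrary normalization \\(|\\mathbf{k}|=m=1\\) of the previous subsection and confirms the frequency resonance under the \\(f=0\\), hydrostatic dispersion relation \\(\\omega\\propto|\\mathbf{k}|/|m|\\) (the overall constant in the dispersion relation drops out of the resonance condition).

```python
import numpy as np

k, m = 1.0, 1.0        # arbitrary normalization, Section 3
k1, k2 = 1.0, 1.0      # a choice inside the kinematic box (triangle inequality)

s = k - k1 - k2
m1 = m / (2 * k) * (s - np.sqrt(s ** 2 + 4 * k * k1))   # Eq. (36b)
m2 = m - m1

omega = lambda kk, mm: kk / abs(mm)   # f=0, hydrostatic dispersion, up to a constant
print(m1, m2)                                        # -1.618..., 2.618...
print(omega(k, m) - omega(k1, m1) - omega(k2, m2))   # ~0: resonance satisfied
```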
### Comparison of matrix elements
As explained above, we assume \\(f=0\\) and hydrostatic balance. Such a choice makes the matrix elements scale-invariant functions that depend only upon \\(|\\mathbf{k}_{1}|\\) and \\(|\\mathbf{k}_{2}|\\). As a consequence of the triangle inequality we need to consider matrix elements only within a "kinematic box" defined by
\\[||{\\bf k}_{1}|-|{\\bf k}_{2}||<|{\\bf k}|<|{\\bf k}_{1}|+|{\\bf k}_{2}|.\\]
The matrix elements carry different dimensions in the isopycnal and Eulerian formulations, cf. (49)-(50), and therefore take different numerical values. To address this issue in the simplest possible way, we multiply each matrix element by a dimensional number chosen so that all matrix elements are equivalent for some specific wavevector. In particular, we choose the scaling constant so that \\(|V(|{\\bf k}_{1}|=1,|{\\bf k}_{2}|=1)|^{2}=1\\). This allows a transparent comparison without worrying about dimensional differences between the various formulations.
1) Resonances of the \"sum\" type (35a)
Figure 1 presents the values of the matrix element \\(|V^{\\bf p}_{{\\bf p}_{1},{\\bf p}_{2}}|^{2}\\) on the resonant sub-manifold given explicitly by (36b). All approaches give equivalent results. This is confirmed by plotting the relative ratio between these approaches, which is given by numerical noise (not shown). The solution (36a) gives the same matrix elements but with \\(|{\\bf k}_{1}|\\) and \\(|{\\bf k}_{2}|\\) exchanged owing to their symmetries.
2) Resonances of the \"difference\" type (35b) and (35c)
We then turn our attention to resonances of the "difference" type (35b), for which (35c) can be obtained by a symmetrical exchange of the indices. The matrix elements \\(|V^{{\\bf p}_{1}}_{{\\bf p}_{2},{\\bf p}}|^{2}\\) on the resonant sub-manifold (37a) are shown in Fig. 2. All the matrix elements are equivalent. The relative differences between the approaches are given by numerical noise (not shown).
Finally, \\(|V^{\\mathbf{p}_{1}}_{\\mathbf{p}_{2},\\mathbf{p}}|^{2}\\) on the resonant sub-manifold (37b) is shown in Fig. 3. Again, all the matrix elements are equivalent.
The solutions (38a) and (38b) give the same matrix elements but with \\(|\\mathbf{k}_{1}|\\) and \\(|\\mathbf{k}_{2}|\\) exchanged relative to the solutions (37a) and (37b), owing to their symmetries.
### Special triads
Three simple interaction mechanisms are identified by McComas and Bretherton (1977) in the limit of an extreme scale separation. In this subsection we look in closer detail at these special limiting triads to confirm that all matrix elements are indeed asymptotically consistent. The limiting cases are:
* the vertical backscattering of a high-frequency wave by a low frequency wave of twice the vertical wavenumber into a second high-frequency wave of oppositely signed vertical wavenumber and nearly the same wavenumber magnitude. This type of scattering is called elastic scattering (ES). The solution (36a) in the limit \\(|\\mathbf{k}_{1}|\\to 0\\) corresponds to this type of special triad.
* The scattering of a high-frequency wave by a low-frequency, small-wavenumber wave into a second, nearly identical, high-frequency large-wavenumber wave. This type of scattering is called induced diffusion (ID). The solution (36b) in the limit that \\(|\\mathbf{k}_{1}|\\to 0\\) corresponds to this type of special triad.
* The decay of a low wavenumber wave into two high vertical wavenumber waves of approximately one-half the frequency. This is called parametric subharmonic instability (PSI). The solution (37a) in the limit that \\(|\\mathbf{k}_{1}|\\to 0\\) corresponds to this type of triad.
To study the detailed behavior of the matrix elements in the special triad cases, we choose to present the matrix elements along a straight line defined by
\\[(|{\\bf k}_{1}|,|{\\bf k}_{2}|)=(\\epsilon,\\epsilon/3+1)|{\\bf k}|.\\]
This line is defined in such a way that it originates from the corner of the kinematic box in Figs. 1-3 at \\((|{\\bf k}_{1}|,|{\\bf k}_{2}|)=(0,|{\\bf k}|)\\) and has a slope of 1/3. The slope of this line is arbitrary; we could have taken \\(\\epsilon/4\\) or \\(\\epsilon/2\\). The matrix elements are shown as functions of \\(\\epsilon\\) in Fig. 4. We see that all four approaches are again _equivalent_ on the resonant manifold for the case of special triads.
In this section we demonstrated that all four approaches we considered produce _equivalent_ results on the resonant manifold in the absence of background rotation. This statement is not trivial, given the different assumptions and coordinate systems that have been used for the various kinetic equation derivations.
## 4 Resonant wave-wave interactions - in the presence of Background Rotations
In the presence of background rotation, the matrix elements lose their scale invariance due to the introduction of an additional time scale (\\(1/f\\)) in the system. Consequently the comparison of matrix elements must be performed as a function of four independent parameters.
We perform this comparison in the frequency-vertical wavenumber domain. In particular, for arbitrary \\(\\omega\\), \\(\\omega_{1}\\), \\(m\\) and \\(m_{1}\\), \\(\\omega_{2}\\) and \\(m_{2}\\) can be calculated by requiring that they satisfy the resonant conditions \\(\\omega=\\omega_{1}+\\omega_{2}\\) and \\(m=m_{1}+m_{2}\\). We then can check whether the corresponding horizontal wavenumber magnitudes \\(k\\), given by
\\[k_{i} = \\frac{m_{i}N\\rho_{o}}{g}\\sqrt{\\omega_{i}^{2}-f^{2}}\\;\\;({\\rm isopycnal \\;coordinates})\\;\\;{\\rm and}\\] \\[k_{i} = m_{i}\\frac{\\sqrt{\\omega_{i}^{2}-f^{2}}}{N}\\;\\;({\\rm Lagrangian\\; coordinates}) \\tag{39}\\]
satisfy the triangle inequality. The matrix elements of the isopycnal and Lagrangian coordinate representations are then calculated. We performed this comparison for \\(10^{12}\\) points on the resonant manifold. After being multiplied by an appropriate dimensional number to convert between the Eulerian and isopycnal coordinate systems, the two matrix elements coincide up to machine precision.
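A minimal sketch of this procedure, assuming the parameter values of Section 5 and an arbitrary test point \\((\\omega,\\omega_{1},m,m_{1})\\), is:

```python
import numpy as np

# Close the triad by the resonance conditions, compute horizontal
# wavenumber magnitudes from (39) in isopycnal coordinates, and test
# whether the point lies on the resonant manifold (triangle inequality).
f, N, g, rho0 = 1e-4, 5e-3, 9.81, 1e3      # GM-type parameters of Section 5
om, om1, m, m1 = 6e-4, 2.5e-4, 0.05, 0.18  # arbitrary test point
om2, m2 = om - om1, m - m1                 # omega = omega1 + omega2, m = m1 + m2

khor = lambda mi, omi: abs(mi) * N * rho0 / g * np.sqrt(omi ** 2 - f ** 2)
k, k1, k2 = khor(m, om), khor(m1, om1), khor(m2, om2)

on_manifold = abs(k1 - k2) < k < k1 + k2   # kinematic box
print(k, k1, k2, on_manifold)
```

Sweeping such test points over the \\((\\omega,\\omega_{1},m,m_{1})\\) domain generates the comparison described above.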
One might, with sufficient experience, regard this coincidence as an intuitive statement. It is, however, far from trivial given the different assumptions and coordinate representations. In particular, we note that derivations of the wave amplitude evolution equation in Lagrangian coordinates (Olbers 1976; McComas 1975; Meiss et al. 1979) do not explicitly contain a potential vorticity conservation statement corresponding to assumption (4) of the isopycnal coordinate (Lvov and Tabak 2004) derivation. We have inferred that the Lagrangian coordinate derivation conserves potential vorticity as that system is projected upon the linear modes of the system having zero perturbation potential vorticity.
## 5 Resonance Broadening and Numerical Methods
### Nonlinear frequency renormalization as a result of nonlinear wave-wave interactions
The resonant interaction approximation is a self-consistent mathematical simplification which reduces the complexity of the problem for weakly nonlinear systems. As nonlinearity increases, near-resonant interactions become more and more pronounced and need to be addressed. Moreover, near-resonant interactions play a major role in numerical simulations on discrete grids (Lvov et al., 2006), in the time evolution of discrete systems (Gershgorin et al., 2007), in acoustic turbulence (Lvov et al., 1997), in surface gravity waves (Janssen, 2003; Yuen and Lake, 1982), and in internal waves (Voronovich et al., 2006; Annenkov and Shrira, 2006).
To take into account the effects of near-resonant interactions self-consistently, we revisit the weak turbulence derivation of Section 2b. Now we _do not_ take the limit \\(\\Gamma_{\\mathbf{p}\\mathbf{p}_{1}\\mathbf{p}_{2}}\\to 0\\). Then, instead of the kinetic equation with the frequency conserving delta-function, we obtain the _generalized_ kinetic equation
\\[\\begin{split}\\frac{dn_{\\mathbf{p}}}{dt}&=4\\int|V_{\\mathbf{p}_{1},\\mathbf{p}_{2}}^{\\mathbf{p}}|^{2}\\,f_{p12}\\,\\delta_{\\mathbf{p}-\\mathbf{p}_{1}-\\mathbf{p}_{2}}\\,\\mathcal{L}(\\omega_{\\mathbf{p}}-\\omega_{\\mathbf{p}_{1}}-\\omega_{\\mathbf{p}_{2}})\\,d\\mathbf{p}_{12}\\\\ &-4\\int|V_{\\mathbf{p}_{2},\\mathbf{p}}^{\\mathbf{p}_{1}}|^{2}\\,f_{12p}\\,\\delta_{\\mathbf{p}_{1}-\\mathbf{p}_{2}-\\mathbf{p}}\\,\\mathcal{L}(\\omega_{\\mathbf{p}_{1}}-\\omega_{\\mathbf{p}_{2}}-\\omega_{\\mathbf{p}})\\,d\\mathbf{p}_{12}\\\\ &-4\\int|V_{\\mathbf{p},\\mathbf{p}_{1}}^{\\mathbf{p}_{2}}|^{2}\\,f_{2p1}\\,\\delta_{\\mathbf{p}_{2}-\\mathbf{p}-\\mathbf{p}_{1}}\\,\\mathcal{L}(\\omega_{\\mathbf{p}_{2}}-\\omega_{\\mathbf{p}}-\\omega_{\\mathbf{p}_{1}})\\,d\\mathbf{p}_{12}\\,,\\end{split} \\tag{40}\\]
in which the frequency conserving delta-function is replaced by a broadened function \\(\\mathcal{L}\\), a Lorentzian of width set by the triad interaction frequency \\(\\Gamma_{\\mathbf{p}\\mathbf{p}_{1}\\mathbf{p}_{2}}\\):
\\[\\mathcal{L}(\\Delta\\omega)=\\frac{\\Gamma_{\\mathbf{p}\\mathbf{p}_{1}\\mathbf{p}_{2}}}{(\\Delta\\omega)^{2}+\\Gamma_{\\mathbf{p}\\mathbf{p}_{1}\\mathbf{p}_{2}}^{2}}\\,, \\tag{41}\\]
with \\(\\Gamma_{\\mathbf{p}\\mathbf{p}_{1}\\mathbf{p}_{2}}\\) specified below.
The difference between the kinetic equation (32) and the generalized kinetic equation (40) is that the energy conserving delta-functions in Eq. (32), \\(\\delta(\\omega_{\\bf p}-\\omega_{{\\bf p}_{1}}-\\omega_{{\\bf p}_{2}})\\), have been "broadened". The physical motivation for this broadening is the following: when the resonant kinetic equation is derived, it is assumed that the amplitude of each plane wave is constant in time, or, in other words, that the lifetime of a single plane wave is infinite. The resulting kinetic equation nevertheless predicts that wave amplitudes change. Consequently the wave lifetime is finite. For small levels of nonlinearity this distinction is not significant, and the resonant kinetic equation constitutes a self-consistent description. For larger values of nonlinearity this is no longer the case: the wave lifetime is finite, and amplitude changes need to be taken into account. Consequently interactions may not be strictly resonant. This statement also follows from the Fourier uncertainty principle: waves with varying amplitude cannot be represented by a single Fourier component. This effect is larger for larger normalized Boltzmann rates.
If the nonlinear frequency renormalization tends to zero, i.e. \\(\\Gamma_{k12}\\to 0\\), \\({\\cal L}\\) reduces to the delta function (compare to (31)):
\\[\\lim_{\\Gamma_{k12}\\to 0}{\\cal L}(\\Delta\\omega)=\\pi\\delta(\\Delta\\omega).\\]
Consequently, in the limit of resonant interactions (i.e. no broadening), (40) reduces to (32).
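The sketch below illustrates this limit numerically, assuming the Lorentzian form (41): for decreasing \\(\\Gamma\\) the broadened function \\(\\mathcal{L}\\) becomes an increasingly narrow spike whose integral over \\(\\Delta\\omega\\) remains \\(\\pi\\), as required by (31).

```python
import numpy as np

# Lorentzian broadening (41): L(dw) = Gamma / (dw^2 + Gamma^2).
L = lambda dw, G: G / (dw ** 2 + G ** 2)

dw = np.linspace(-50.0, 50.0, 2000001)
for G in (1.0, 0.1, 0.01):
    area = np.trapz(L(dw, G), dw)
    print(G, area / np.pi)   # -> 1 (exactly pi over an infinite domain)
```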
If, on the other hand, one does not take the \\(\\Gamma_{{\\bf pp}_{1}{\\bf p}_{2}}\\to 0\\) limit, then one has to calculate \\(\\Gamma_{{\\bf pp}_{1}{\\bf p}_{2}}\\) self-consistently. To achieve this we recognize that in deriving the generalized kinetic equation (40) we allow changes in wave amplitude. The rate of change can be identified from equation (40) in the following way. Let us go through (40) term by term and identify all terms that multiply \\(n_{\\bf p}\\) on the right-hand side. Those terms can be loosely interpreted as a nonlinear wave damping acting on the given wavenumber:
\\[\\begin{split}\\gamma_{\\mathbf{p}}&=4\\int|V_{\\mathbf{p}_{1},\\mathbf{p}_{2}}^{\\mathbf{p}}|^{2}\\left(n_{\\mathbf{p}_{1}}+n_{\\mathbf{p}_{2}}\\right)\\delta_{\\mathbf{p}-\\mathbf{p}_{1}-\\mathbf{p}_{2}}\\,\\mathcal{L}(\\omega_{\\mathbf{p}}-\\omega_{\\mathbf{p}_{1}}-\\omega_{\\mathbf{p}_{2}})\\,d\\mathbf{p}_{12}\\\\ &-4\\int|V_{\\mathbf{p}_{2},\\mathbf{p}}^{\\mathbf{p}_{1}}|^{2}\\left(n_{\\mathbf{p}_{2}}-n_{\\mathbf{p}_{1}}\\right)\\delta_{\\mathbf{p}_{1}-\\mathbf{p}_{2}-\\mathbf{p}}\\,\\mathcal{L}(\\omega_{\\mathbf{p}_{1}}-\\omega_{\\mathbf{p}_{2}}-\\omega_{\\mathbf{p}})\\,d\\mathbf{p}_{12}\\\\ &-4\\int|V_{\\mathbf{p},\\mathbf{p}_{1}}^{\\mathbf{p}_{2}}|^{2}\\left(n_{\\mathbf{p}_{1}}-n_{\\mathbf{p}_{2}}\\right)\\delta_{\\mathbf{p}_{2}-\\mathbf{p}-\\mathbf{p}_{1}}\\,\\mathcal{L}(\\omega_{\\mathbf{p}_{2}}-\\omega_{\\mathbf{p}}-\\omega_{\\mathbf{p}_{1}})\\,d\\mathbf{p}_{12}\\;.\\end{split} \\tag{42}\\]
The interpretation of this formula is the following: nonlinear wave-wave interactions lead to changes of the wave amplitude, which in turn make the lifetime of the waves finite. This, in turn, makes the interactions near-resonant.
The next question is how to relate the individual wave damping \\(\\gamma_{\\mathbf{p}}\\) to the overall broadening of the resonance of three interacting waves. As we have rigorously shown in Lvov et al. (1997), the individual widths add up, so that
\\[\\Gamma_{k12}=\\gamma_{\\mathbf{p}}+\\gamma_{\\mathbf{p}_{1}}+\\gamma_{\\mathbf{p}_{ 2}}. \\tag{43}\\]
It means that the total resonance broadening is the sum of the individual frequency broadenings, and can thus be seen as the "triad interaction" frequency.
A rigorous derivation of the kinetic equation with a broadened delta function is given in detail for a general three-wave Hamiltonian system in Lvov et al. (1997). The derivation is based upon the Wyld diagrammatic technique for non-equilibrium wave systems and utilizes the Dyson-Wyld line resummation. This resummation permits an analytical resummation of the infinite series of reducible diagrams for the Green's functions and double correlators. Consequently, the resulting kinetic equation is not limited to the Direct Interaction Approximation (DIA), but also includes higher order effects coming from the infinite diagrammatic series. We emphasize, however, that the approach _is_ perturbative in nature and there are neglected parts of the infinite diagrammatic series. The reader is referred to Lvov et al. (1997) for the details of that derivation. The resulting formulas are given by (40)-(43).
A self-consistent estimate of \\(\\gamma_{\\bf p}\\) requires an iterative solution of (40) and (42) over the entire field: the width of the resonance (42) depends on the lifetime of an individual wave [from (40)], which in turn depends on the width of the resonance (43). This numerically intensive computation is beyond the scope of this manuscript. Instead, we make the uncontrolled approximation that:
\\[\\gamma_{\\bf p}=\\delta\\omega_{\\bf p}. \\tag{44}\\]
We note that this choice is made for illustration purposes only; we certainly do not claim that it represents a self-consistent choice. Below, we will take \\(\\delta\\) to be \\(10^{-3}\\), \\(10^{-2}\\) and \\(10^{-1}\\). These values are rather small, therefore we remain in the closest proximity to the resonant interactions. To show the effect of strong resonant manifold smearing we also investigate the case \\(\\delta=0.5\\).
We note in passing that near-resonant interactions of waves were also considered in Janssen (2003). There, instead of our \\({\\cal L}(x)\\) function, given by (41), the corresponding function was \\(\\sin(\\pi x)/x\\). We have shown in Kramer et al. (2003) that the resulting kinetic equation does _not_ retain positive definite values of wave action. To get around that difficulty, a self-consistent formula for the broadening or a rigorous diagrammatic resummation should be used.
### Numerical Methods
Estimates of near-resonant transfers are obtained by assuming horizontal isotropy and integrating (40) over horizontal azimuth:
\\[\\begin{split}\\frac{\\partial n_{\\mathbf{p}}}{\\partial t}&=4\\pi\\int\\frac{k_{1}k_{2}}{S_{p12}}|V_{\\mathbf{p}_{1},\\mathbf{p}_{2}}^{\\mathbf{p}}|^{2}\\,f_{p12}\\,\\delta_{\\mathbf{p}-\\mathbf{p}_{1}-\\mathbf{p}_{2}}\\,\\mathcal{L}(\\omega_{\\mathbf{p}}-\\omega_{\\mathbf{p}_{1}}-\\omega_{\\mathbf{p}_{2}})\\,dk_{12}dm_{1}\\\\ &-4\\pi\\int\\frac{k_{1}k_{2}}{S_{12p}}|V_{\\mathbf{p}_{2},\\mathbf{p}}^{\\mathbf{p}_{1}}|^{2}\\,f_{12p}\\,\\delta_{\\mathbf{p}_{1}-\\mathbf{p}_{2}-\\mathbf{p}}\\,\\mathcal{L}(\\omega_{\\mathbf{p}_{1}}-\\omega_{\\mathbf{p}_{2}}-\\omega_{\\mathbf{p}})\\,dk_{12}dm_{1}\\\\ &-4\\pi\\int\\frac{k_{1}k_{2}}{S_{2p1}}|V_{\\mathbf{p},\\mathbf{p}_{1}}^{\\mathbf{p}_{2}}|^{2}\\,f_{2p1}\\,\\delta_{\\mathbf{p}_{2}-\\mathbf{p}-\\mathbf{p}_{1}}\\,\\mathcal{L}(\\omega_{\\mathbf{p}_{2}}-\\omega_{\\mathbf{p}}-\\omega_{\\mathbf{p}_{1}})\\,dk_{12}dm_{1}\\,,\\end{split} \\tag{45}\\]
where \\(S_{p12}\\) is the area of the triangle \\(\\mathbf{k}=\\mathbf{k}_{1}+\\mathbf{k}_{2}\\). We numerically integrated (45) for \\(\\mathbf{p}\\)'s which have frequencies from \\(f\\) to \\(N\\) and vertical wavenumbers from \\(2\\pi/(2b)\\) to \\(260\\pi/(2b)\\). The limits of integration are restricted by horizontal wavenumbers from \\(2\\pi/10^{5}\\) to \\(2\\pi/5\\) meters\\({}^{-1}\\), vertical wavenumbers from \\(2\\pi/(2b)\\) to \\(2\\pi/5\\) meters\\({}^{-1}\\), and frequencies from \\(f\\) to \\(N\\). The integrals over \\(k_{1}\\) and \\(k_{2}\\) are obtained in the kinematic box in \\(k_{1}-k_{2}\\) space. The grids in the \\(k_{1}-k_{2}\\) domain have \\(2^{17}\\) points that are distributed heavily around the corner of the kinematic box. The integral over \\(m_{1}\\) is obtained with \\(2^{13}\\) grid points, which are also distributed heavily for the small vertical wavenumbers whose absolute values are less than \\(5m\\), where \\(m\\) is the vertical wavenumber.
To estimate the normalized Boltzmann rate we need to choose a form of spectral energy density of internal waves. We utilize the Garrett and Munk spectrum as an agreed-upon representation of the internal waves:
\\[E(\\omega,m)=\\frac{4f}{\\pi^{2}m_{*}}E_{0}\\frac{1}{1+(\\frac{m}{m_{*}})^{2}}\\frac{1}{\\omega\\sqrt{\\omega^{2}-f^{2}}}\\,. \\tag{46}\\]
Here the reference wavenumber is given by
\\[m_{*}=\\pi j_{*}/b, \\tag{47}\\]
in which the variable \\(j_{*}\\) represents a reference vertical mode number of an ocean with an exponential buoyancy frequency profile having a scale height of \\(b\\). A numerical check of the normalization of (46) with these parameter values is sketched after the list below.
We choose the following set of parameters:
* \\(b=\\) 1300 m in the GM model
* The total energy is set as: \\[E_{0}=30\\times 10^{-4}\\;\\mathrm{m}^{2}\\;\\mathrm{s}^{-2}.\\]
* Inertial frequency is given by \\(f=10^{-4}\\)rad/s, and buoyancy frequency is given by \\(N_{0}=5\\times 10^{-3}\\)rad/s.
* The reference density is taken to be \\(\\rho_{0}=10^{3}\\)kg/m\\({}^{3}\\).
* A roll-off corresponding to \\(j_{*}=3\\).
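As a check on (46)-(47) and the parameter set above, the following sketch integrates the GM form over \\(f<\\omega<N\\) and a large vertical wavenumber domain and recovers \\(E_{0}\\) to within the truncation of the domain.

```python
import numpy as np
from scipy import integrate

f, N, b, jstar = 1e-4, 5e-3, 1300.0, 3.0
E0 = 30e-4                       # m^2 s^-2
mstar = np.pi * jstar / b        # Eq. (47)

def E(om, m):                    # Eq. (46)
    return (4 * f / (np.pi ** 2 * mstar) * E0
            / (1 + (m / mstar) ** 2) / (om * np.sqrt(om ** 2 - f ** 2)))

# omega integral from just above f (integrable square-root singularity),
# m integral truncated at 500 m_*.
val, _ = integrate.dblquad(E, 0.0, 500 * mstar,
                           lambda m: 1.0001 * f, lambda m: N)
print(val / E0)                  # ~0.98; the deficit is the truncated domain
```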
We then calculate the normalized Boltzmann rate (34) using four values of \\(\\delta\\) in (44): \\(\\delta=10^{-3}\\), \\(\\delta=10^{-2}\\), \\(\\delta=10^{-1}\\) and \\(\\delta=0.5\\).
## 6 Time Scales
### Resonant Interactions
Here we present evaluations of the Lvov and Tabak (2004) kinetic equation. These estimates differ from the evaluations presented in Olbers (1976); McComas (1977); McComas and Muller (1981a); Pomphrey et al. (1980) in that the numerical algorithm includes a finite breadth to the resonance surface, whereas previous evaluations have been _exactly_ resonant. Results discussed in this section are as close to resonant as we can make them (\\(\\delta=1\\times 10^{-3}\\)).
We see that for small vertical wavenumbers the normalized Boltzmann rate is of the order of one tenth, which can be argued to be within the domain of weak nonlinearity. However, for increased wavenumbers the level of nonlinearity increases, and the evolution time scale reaches the wave period (red, or dark blue). There is also a white region indicating values smaller than minus one.
We also define a "zero curve": the locus of wavenumber-frequency space where the normalized Boltzmann rate, and hence the time derivative of the wave action, is exactly zero. The zero curve clearly delineates a pattern of energy gain for frequencies \\(f<\\omega<2f\\), energy loss for frequencies \\(2f<\\omega<5f\\) and energy gain for frequencies \\(5f<\\omega<N\\). We interpret the relatively sharp boundary between energy gain and energy loss across \\(\\omega=2f\\) as being related to the Parametric Subharmonic Instability, and the transition from energy loss to energy gain at \\(\\omega=5f\\) as a transition from energy loss associated with the Parametric Subharmonic Instability to energy gain associated with the Elastic Scattering mechanism. See Section 7 for further details about this high frequency interpretation.
The \\(O(1)\\) normalized Boltzmann rates at high vertical wavenumber are surprising given the substantial literature that regards the GM spectrum as a stationary state. We do not believe this to be an artifact of the numerical scheme for the following reasons. First, numerical evaluations of the integrand conserve energy to within numerical precision as the resonance surface is approached, consistent with the energy conservation property associated with the frequency delta function. Second, the time scales converge as the resonant width is reduced, as demonstrated by the minimal difference in time scales using \\(\\delta=1\\times 10^{-3}\\) and \\(1\\times 10^{-2}\\). Third, our results are consistent with approximate analytic expressions (e.g. McComas and Muller (1981b)) for the Boltzmann rate. Finally, in view of the differences in the representation of the wavefield, numerical codes and display of results, we interpret our resonant (\\(\\delta=0.001\\)) results as being consistent with the numerical evaluations of the resonant kinetic equations presented in Olbers (1976); McComas (1977); McComas and Muller (1981a); Pomphrey et al. (1980).
As a quantitative example, consider estimates of the time rate of change of low-mode energy appearing in Table 1 of Pomphrey et al. (1980), repeated as row 3 of our Table 8. We find agreement to better than a factor of two. In order to explain the remaining differences, one has to examine the details: Pomphrey et al. (1980) use a Coriolis frequency corresponding to \\(30^{\\circ}\\) latitude, neglect internal waves having horizontal wavelengths greater than 100 km (same as here) and exclude frequencies \\(\\omega>N_{o}/3\\), with \\(N_{o}=3\\) cph. We include frequencies \\(f<\\omega<N_{o}\\) with a Coriolis frequency corresponding to \\(45^{\\circ}\\) latitude. Of possible significance is that Pomphrey et al. (1980) use a vertical mode decomposition with exponential stratification with scale height \\(b=1200\\) m (we use \\(b=1300\\) m). Table 8 presents estimates of the energy transfer rate obtained by taking the depth integrated transfer rates of Pomphrey et al. (1980), assuming \\(\\dot{E}\\propto N^{2}\\) and normalizing to \\(N=3\\) cph. While this accounts for the nominal buoyancy scaling of the energy transport rate, it does _not_ account for variations in the distribution of \\(\\dot{E}(m)\\) associated with variations in \\(N\\) via \\(m_{*}=\\frac{N}{N_{o}}\\pi j_{*}/b\\) in their model. Finally, their estimates of \\(\\dot{E}(m)\\) are arrived at by integrating only over regions of the spectral domain for which \\(\\dot{E}(m,\\omega)\\) is negative.
### Near-Resonant Interactions
Substantial motivation for this work is the question of whether the GM76 spectrum represents a stationary state. We have seen that numerical evaluations of a resonant kinetic equation return \\(O(1)\\) normalized Boltzmann rates, and hence we are led to conclude that GM76 is _not_ a stationary state with respect to resonant interactions. But the inclusion of near-resonant interactions could alter this judgement.
Our investigation of this question is currently limited by the absence of an iterative solution to (40) and (42), and by the consequent choice to parameterize the resonance broadening in terms of (44). However, as we go from nearly resonant evaluations (\\(\\delta=10^{-3}\\) and \\(10^{-2}\\)) to incorporating significant broadening (\\(\\delta=10^{-1}\\) and 0.5), we find significant decreases in the normalized Boltzmann rate. The largest decreases are associated with an expanded region of energy loss associated with the Parametric Subharmonic Instability, in which minimum normalized Boltzmann rates change from -3.38 to -0.45 at \\((\\omega,mb/2\\pi)=(2.5f,150)\\). Large decreases here are not surprising given the sharp boundary between regions of loss and gain in the resonant calculations. Smaller changes are noted within the Induced Diffusion regime: maximum normalized Boltzmann rates change from 2.6 to 1.5 at \\((\\omega,mb/2\\pi)=(8f,260)\\). Broadening of the resonances to exceed the boundaries of the spectral domain could be making a contribution to such changes.
We regard our calculations here as a preliminary step to answering the question of whether the GM76 spectrum represents a stationary state with respect to nonlinear interactions. Complementary studies could include comparison with analyses of numerical solutions of the equations of motion.
## 7 Discussion
### Resonant Interactions
Several loose ends need to be tied up regarding the assertion that the GM76 spectrum does not constitute a stationary state with respect to resonant interactions. The first is the interpretation of McComas and Muller (1981a)'s inertial-range theory with constant downscale transfer of energy. This constant downscale transfer of energy was obtained by patching together the induced diffusion and parametric subharmonic instability mechanisms and is attended by the following caveats: First, the inertial subrange solution is found only after integrating over the frequency domain, and numerical evaluations of the kinetic equation demonstrate that the "inertial subrange" solution also requires dissipation to balance energy gain at high vertical wavenumber. It takes a good deal of patience to wade through their figures to understand how the plots in McComas and Muller (1981a) relate to the initial tendency estimates in Figure 5. Second, Pomphrey et al. (1980) argue that GM76 is a near-equilibrium state because of a 1-3 order of magnitude cancellation between the Langevin rates in the induced diffusion regime. But this is just the \\(\\omega^{2}/f^{2}\\) difference between the fast and slow induced diffusion time scales. It does NOT imply small values of the slow induced diffusion time scale, which are equivalent to the normalized Boltzmann rates. Third, the large normalized Boltzmann rates determined by our numerical procedure are associated with the elastic scattering mechanism rather than induced diffusion. Normalized Boltzmann rates for the induced diffusion and elastic scattering mechanisms are:
\\[\\epsilon_{id} =\\frac{\\pi^{2}}{20}\\frac{m}{m_{c}}\\frac{m^{2}}{m^{2}+m_{*}^{2}\\frac{ \\omega^{2}}{f^{2}}}\\] \\[\\epsilon_{es} =\\frac{\\pi^{2}}{20}\\frac{m}{m_{c}}\\frac{m^{2}}{m^{2}+0.25m_{*}^{2}}\\]
in which \\(m_{*}\\) represents the low wavenumber roll-off of the vertical wavenumber spectrum (vertical mode-3 equivalent here), \\(m_{c}\\) is the high wavenumber cutoff, nominally at 10 m wavelengths and the GM76 spectrum has been assumed. The normalized Boltzmann rates for ES and ID are virtually identical at high wavenumber. They differ only in how their respective triads connect to the \\(\\omega=f\\) boundary. Induced diffusion connects along a curve whose resonance condition is approximately that the high frequency group velocity match the near-inertial vertical phase speed, \\(\\omega/m=f/m_{ni}\\). Elastic scattering connects along a simpler \\(m=2m_{ni}\\). Evaluations of the kinetic equation reveal nearly vertical contours throughout the vertical wavenumber domain, consistent with ES, rather than sloped along contours of \\(\\omega\\propto m\\) emanating from \\(m=m_{*}\\) as expected with the ID mechanism.
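For concreteness, a small sketch evaluating these two expressions for the GM76 parameters (mode-3 roll-off, 10 m cutoff) is given below; it simply confirms that \\(\\epsilon_{id}\\) and \\(\\epsilon_{es}\\) coincide at high vertical wavenumber and differ only where \\(m\\) is comparable to \\(m_{*}\\omega/f\\).

```python
import numpy as np

b = 1300.0
mstar = np.pi * 3 / b          # low-wavenumber roll-off (vertical mode 3)
mc = 2 * np.pi / 10.0          # high-wavenumber cutoff, 10 m wavelength
om_over_f = 5.0                # sample frequency, omega = 5 f

m = np.linspace(2 * mstar, mc, 5)
eps_id = np.pi ** 2 / 20 * m / mc * m ** 2 / (m ** 2 + mstar ** 2 * om_over_f ** 2)
eps_es = np.pi ** 2 / 20 * m / mc * m ** 2 / (m ** 2 + 0.25 * mstar ** 2)
print(np.c_[m, eps_id, eps_es])   # the two columns converge as m >> m_*
```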
The identification of the ES mechanism as being responsible for the large normalized Boltzmann rates at high vertical wavenumber requires further explanation. The role assigned to the ES mechanism by McComas and Bretherton (1977) is the equilibration of a vertically anisotropic field. This can be seen by taking the near-inertial component of a triad to represent \\(\\mathbf{p}_{1}\\), assuming that the action density of the near-inertial field is much larger than the high frequency fields, and taking the limit \\((k,l,m)=(k_{2},l_{2},-m_{2})\\equiv\\mathbf{p}^{-}\\). Thus:
\\[f_{p12}=n_{\\mathbf{p}_{1}}n_{\\mathbf{p}_{2}}-n_{\\mathbf{p}}(n_{\\mathbf{p}_{1}}+n_{\\mathbf{p}_{2}})\\cong n_{\\mathbf{p}_{1}}[n_{\\mathbf{p}^{-}}-n_{\\mathbf{p}}]\\]
and transfers proceed until the field is isotropic: \\(n_{\\mathbf{p}^{-}}=n_{\\mathbf{p}}\\). But this is **not** the complete story. A more precise characterization of the resonance surface takes into account that the frequency resonance \\(\\omega-\\omega_{2}=\\omega_{1}\\cong f\\) requires \\(O(\\omega/f)\\) differences in \\(m\\) and \\(-m_{2}\\) if \\(k=k_{2}\\), and \\(O(\\omega/f)\\) differences in \\(k\\) and \\(k_{2}\\) if \\(m=-m_{2}\\). For an isotropic field:
\\[f_{p12}=n_{\\mathbf{p}_{1}}n_{\\mathbf{p}_{2}}-n_{\\mathbf{p}}(n_{\\mathbf{p}_{1}}+n_{\\mathbf{p}_{2}})\\cong n_{\\mathbf{p}_{1}}[n_{\\mathbf{p}+\\delta\\mathbf{p}}-n_{\\mathbf{p}}]\\cong n_{\\mathbf{p}_{1}}[\\delta\\mathbf{p}\\cdot\\nabla n_{\\mathbf{p}}]\\]
and due care needs to be taken that \\(\\mathbf{p}_{1}\\) is on the resonance surface in the vicinity of the inertial cusp.
### Near-Resonant Interactions
The idea of trying to self-consistently find the smearing of the delta-functions is not new. For internal waves it appears in DeWitt and Wright (1982); Carnevale and Frederiksen (1983); DeWitt and Wright (1984).
DeWitt and Wright (1982) set up a general framework for a self-consistent calculation similar in spirit to Lvov et al. (1997), using a path-integral formulation of the diagrammatic technique. The paper makes the uncontrolled approximation that the nonlinear frequency renormalization \\(\\Sigma(\\mathbf{p},\\omega)\\) is independent of \\(\\omega\\), and shows that this assumption is not self-consistent. Lvov et al. (1997) present a more sophisticated approach to a self-consistent approximation of the operator \\(\\Sigma(\\mathbf{p},\\omega)\\). In particular, DeWitt and Wright (1982) suggest
\\[\\Sigma(\\mathbf{p},\\omega)=\\Sigma(\\mathbf{p},\\omega_{\\mathbf{p}}),\\]while Lvov et al. (1997) propose a more self-consistent
\\[\\Sigma({\\bf p},\\omega)=\\Sigma[{\\bf p},\\omega_{\\bf p}+i\\Im\\Sigma({\\bf p},\\Omega_{ \\bf p})].\\]
DeWitt and Wright (1984) evaluate the self-consistency of the resonant interaction approximation and find that for high frequencies and high wavenumbers the resonant interaction representation is not self-consistent. A possible critique of these papers is that they use the resonant matrix elements given by Muller and Olbers (1975) without appreciating that those elements can be used strictly only on the resonant manifold.
Carnevale and Frederiksen (1983) present similar expressions for two-dimensional stratified internal waves. There the kinetic equation is their (7.4), with the triple correlation time given by \\(\\Theta\\) (our \\({\\cal L}\\)) of their (8.7). The key step is to find the level of smearing of the delta-function, denoted as \\(\\mu_{k}\\) in their (8.7) (our \\(\\gamma\\)). This can be achieved via their (8.6), which is similar to our (42). The only difference is that (8.6) has slightly different positions of the poles, \\(i(\\gamma_{{\\bf p_{1}}}+\\gamma_{{\\bf p_{2}}})\\) instead of our \\(i(\\gamma_{{\\bf p_{1}}}+\\gamma_{{\\bf p_{2}}}+\\gamma_{{\\bf p}})\\). Carnevale points out that the Direct Interaction Approximation leads to his expression, not to the sum of all three \\(\\gamma\\)'s. We respectfully disagree. However, this is irrelevant for the purposes of this paper, since we do not solve the system self-consistently anyway, but instead propose the uncontrolled approximation (44). The main advantage of our approach over Carnevale and Frederiksen (1983) is that we use systematic Hamiltonian structures which are equivalent to the primitive equations of motion, rather than a simplified two-dimensional model.
## 8 Conclusion
Our fundamental result is that the GM spectrum is _not_ stationary with respect to the resonant interaction approximation. This result is contrary to much of the perceived wisdom and gave us cause to review published results concerning resonant internal wave interactions. We then included near-resonant interactions and found significant reductions in the temporal evolution of the GM spectrum.
We compared the interaction matrices for three different Hamiltonian formulations and one non-Hamiltonian formulation in the resonant limit. Two of the Hamiltonian formulations are canonical and one (Lvov and Tabak 2004) avoids a linearization of the Hamiltonian prior to assuming an expansion in terms of weak nonlinearity. Formulations in Eulerian, isopycnal and Lagrangian coordinate systems were considered. All four representations lead to _equivalent_ results on the resonant manifold in the absence of background rotation. The two representations that include background rotation, a canonical Hamiltonian formulation in isopycnal coordinates and a non-canonical Hamiltonian formulation in Lagrangian coordinates, also lead to _equivalent_ results on the resonant manifold. This statement is not trivial given the different assumptions and coordinate systems that have been used for the derivation of the various kinetic equations. It points to an internal consistency on the resonant manifold that we still do not completely understand and appreciate.
We rationalize the consistent results as being associated with potential vorticity conservation. In the isopycnal coordinate canonical Hamiltonian formulation, potential vorticity conservation is explicit. In the Lagrangian coordinate non-canonical Hamiltonian formulation, potential vorticity conservation results from a projection onto the linear modes of the system. The two non-rotating formulations prohibit relative vorticity variations by casting the velocity as the gradient of a scalar streamfunction.
We infer that the non-stationary results for the GM spectrum are related to a higher order approximation of the elastic scattering mechanism than considered in McComas and Bretherton (1977) and McComas and Muller (1981b).
Our numerical results indicate evolution rates of a wave period at high vertical wavenumber, signifying a system which is not weakly nonlinear. To understand whether such non-weak conditions could give rise to competing effects that render the system stationary, we considered resonance broadening. We used a kinetic equation with broadened frequency delta function derived for a generalized three-wave Hamiltonian system in (Lvov et al. 1997). The derivation is based upon the Wyld diagrammatic technique for non-equilibrium wave systems and utilizes the Dyson-Wyld line resummation. This broadened kinetic equation is perceived to be more sophisticated than the two-dimensional direct interaction approximation representation pursued in Carnevale and Frederiksen (1983) and the self-consistent calculations of DeWitt and Wright (1984) which utilized the resonant interaction matrix of Olbers (1976). We find a tendency of resonance broadening to lead to more stationary conditions. However, our results are limited by an uncontrolled approximation concerning the width of the resonance surface.
Reductions in the temporal evolution of the internal wave spectrum at high vertical wavenumber were greatest for those frequencies associated with the PSI mechanism, i.e. \\(f<\\omega<5f\\). Smaller reductions were noted at high frequencies.
A common theme in the development of a kinetic equation is a perturbation expansion permitting the wave interactions and the evolution of the spectrum on a slow time scale, e.g. Section b. An assumption of Gaussian statistics at zeroth order permits a solution of the first order triple correlations in terms of the zeroth order quadruple correlations. Assessing the adequacy of this assumption for the zeroth order high frequency wavefield is a challenge for future efforts. Such departures from Gaussianity could have implications for the stationarity at high frequencies.
Nontrivial aspects of our work are that we utilize the canonical Hamiltonian representation of Lvov and Tabak (2004), which results in a kinetic equation without first linearizing to obtain interaction coefficients defined only on the resonance surface, and that the broadened closure scheme of Lvov et al. (1997) is more sophisticated than the Direct Interaction Approximation. Inclusion of interactions between internal waves and modes of motion associated with zero eigenfrequency, i.e. the vortical motion field, is a challenge for future efforts.
We found no coordinate dependent (i.e. Eulerian, isopycnal or Lagrangian) differences between interaction matrices on the resonant surface. We regard it as intuitive that there will be coordinate dependent differences off the resonant surface. It is a robust observational fact that Eulerian frequency spectra at high vertical wavenumber are contaminated by vertical Doppler shifting: near-inertial frequency energy is Doppler shifted to higher frequency at approximately the same vertical wavelength. Use of an isopycnal coordinate system considerably reduces this artifact (Sherman and Pinkel 1991). Further differences are anticipated in a fully Lagrangian coordinate system (Pinkel 2008). Thus differences in the approaches may represent physical effects and what is a stationary state in one coordinate system may not be a stationary state in another. Obtaining canonical coordinates in an Eulerian coordinate system with rotation and in the Lagrangian coordinate system are challenges for future efforts.
_Acknowledgments._
We thank V. E. Zakharov for presenting us with a book (Miropolsky 1981) and for encouragement. We also thank E. N. Pelinovsky for providing us with Pelinovsky and Raevsky (1977). We gratefully acknowledge funding provided by a Collaborations in Mathematical Geosciences (CMG) grant from the National Science Foundation. YL is also supported by NSF DMS grant 0807871 and ONR Award N00014-09-1-0515. We are grateful to YITP at Kyoto University for permitting use of their facility.
**Matrix Elements**
Our attention is restricted to the hydrostatic balance case, for which
\\[\\mid{\\bf k}\\mid\\ll\\mid m\\mid. \\tag{48}\\]
A minor detail is that the linear frequency has different algebraic representations in isopycnal and Cartesian coordinates. The Cartesian vertical wavenumber, \\(k_{z}\\), and the density wavenumber, \\(m\\), are related as \\(m=-g/(\\rho_{0}N^{2})k_{z}\\) where \\(g\\) is gravity, \\(\\rho\\) is density with reference value \\(\\rho_{0}\\), \\(N\\) is the buoyancy (Brunt-Vaisala) frequency and \\(f\\) is the Coriolis frequency. In isopycnal coordinates the dispersion relation is given by,
\\[\\omega({\\bf p})=\\sqrt{f^{2}+\\frac{g^{2}}{\\rho_{0}^{2}N^{2}}\\frac{\\mid{\\bf k} \\mid^{2}}{m^{2}}}. \\tag{49}\\]
In Cartesian coordinates,
\\[\\omega({\\bf p})=\\sqrt{f^{2}+N^{2}\\frac{\\mid{\\bf k}\\mid^{2}}{k_{z}^{2}}}. \\tag{50}\\]
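As a quick cross-check of Eqs. (49) and (50), the following sketch (an illustration we add here, with arbitrarily chosen parameter values) verifies numerically that the stated mapping \\(m=-g/(\\rho_{0}N^{2})k_{z}\\) renders the two representations identical:

```python
import numpy as np

# Illustrative parameter choices (not taken from any dataset).
g, rho0 = 9.81, 1025.0           # gravity [m/s^2], reference density [kg/m^3]
N, f = 5.2e-3, 1.0e-4            # buoyancy and Coriolis frequencies [rad/s]

def omega_isopycnal(k_h, m):
    """Dispersion relation (49): isopycnal (density) coordinates."""
    return np.sqrt(f**2 + (g / (rho0 * N))**2 * k_h**2 / m**2)

def omega_cartesian(k_h, k_z):
    """Dispersion relation (50): Cartesian coordinates."""
    return np.sqrt(f**2 + N**2 * k_h**2 / k_z**2)

k_h = 2 * np.pi / 1.0e4          # horizontal wavenumber, 10 km wavelength
k_z = 2 * np.pi / 100.0          # vertical wavenumber, 100 m wavelength
m = -g / (rho0 * N**2) * k_z     # density wavenumber from the stated mapping

print(omega_isopycnal(k_h, m))   # identical values (~1.1e-4 rad/s here)
print(omega_cartesian(k_h, k_z))
```

The agreement is exact for any wavenumbers, since the mapping simply rewrites \\(N^{2}\\mid{\\bf k}\\mid^{2}/k_{z}^{2}\\) in density coordinates; the chosen wavelengths also satisfy the hydrostatic condition (48).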
In the limit of \\(f=0\\) these dispersion relations assume the form
\\[\\omega_{\\bf p}\\propto\\frac{\\mid{\\bf k}\\mid}{\\mid m\\mid}\\propto\\frac{\\mid{\\bf k}\\mid}{\\mid k_{z}\\mid}. \\tag{51}\\]

_Muller and Olbers_

Matrix elements derived in Olbers (1974) are given by \\(\\mid V^{{\\bf p},\\,{\\rm MO}}_{{\\bf p}_{1},{\\bf p}_{2}}\\mid^{2}=T^{+}/(4\\pi)\\) and \\(\\mid V^{{\\bf p}_{1},\\,{\\rm MO}}_{{\\bf p}_{2},{\\bf p}}\\mid^{2}=T^{-}/(4\\pi)\\). We extracted \\(T^{\\pm}\\) from the Appendix of Muller and Olbers (1975). In our notation, in the hydrostatic balance approximation, their matrix elements are given by
\\[|V^{{\\bf p},\\,{\\rm MO}}_{{\\bf p}_{1},{\\bf p}_{2}}|^{2}=\\frac{(N_{0}^{2}-f^{2})^{2}}{32\\rho_{0}}\\,\\omega\\,\\omega_{1}\\,\\omega_{2}\\times\\cdots \\tag{52}\\]
We formulate the matrix elements for Voronovich's Hamiltonian using his formula (A.1). This formula is derived for general boundary conditions. To compare with other matrix elements of this paper, we assume a constant stratification profile and Fourier basis as the vertical structure function \\(\\phi(z)\\). That allows us to solve for the matrix elements defined via Eq. (11) and above it in his paper. Then the convolutions of the basis functions give delta-functions in vertical wavenumbers. Voronovich's equation (A.1) transforms into:
\\[|V^{\\mathbf{p}}_{\\mathbf{p}_{1},\\mathbf{p}_{2}}|^{2}\\propto\\frac{ |\\mathbf{k}||\\mathbf{k}_{1}||\\mathbf{k}_{2}|}{|mm_{1}m_{2}|}\\left(-m\\left( \\frac{1}{|\\mathbf{k}||m|}\\left(\\frac{\\mathbf{k}\\cdot\\mathbf{k}_{1}|m_{1}|}{| \\mathbf{k}_{1}|}+\\frac{\\mathbf{k}\\cdot\\mathbf{k}_{2}|m_{2}|}{|\\mathbf{k}_{2}|} \\right)+\\frac{\\omega_{1}+\\omega_{2}-\\omega}{\\omega}\\right)\\right.\\] \\[\\left.+m_{1}\\left(\\frac{1}{|\\mathbf{k}_{1}||m_{1}|}\\left(\\frac{ \\mathbf{k}\\cdot\\mathbf{k}_{1}|m|}{|\\mathbf{k}|}+\\frac{\\mathbf{k}_{1}\\cdot \\mathbf{k}_{2}|m_{2}|}{|\\mathbf{k}_{2}|}\\right)-\\frac{\\omega_{1}+\\omega_{2}- \\omega}{\\omega_{1}}\\right)\\right.\\] \\[\\left.+m_{2}\\left(\\frac{1}{|\\mathbf{k}_{2}||m_{2}|}\\left(\\frac{ \\mathbf{k}\\cdot\\mathbf{k}_{2}|m|}{|\\mathbf{k}|}+\\frac{\\mathbf{k}_{2}\\cdot \\mathbf{k}_{1}|m_{1}|}{|\\mathbf{k}_{1}|}\\right)-\\frac{\\omega_{1}+\\omega_{2}- \\omega}{\\omega_{2}}\\right)\\right)^{2}. \\tag{54}\\]
Note that Eq. (54) shares structural similarities with the interaction matrix elements in _isopycnal_ coordinates, Eq. (57) below.
_Caillol and Zeitlin_
A non-Hamiltonian kinetic equation for internal waves was derived in Caillol and Zeitlin (2000), their Eq. (61), directly from the dynamical equations of motion, without the use of a Hamiltonian structure.
To make it equivalent to the more traditional form of the kinetic equation, as in Zakharov et al. (1992), we make a change of variables \\(1\\rightarrow-1\\) in the second line, and \\(\\mathbf{k}\\rightarrow-\\mathbf{k}\\) in the third line of (61) of Caillol and Zeitlin (2000). If we further assume that all spectra are symmetric, \\(n(-{\\bf p})=n({\\bf p})\\), then the kinetic equation assumes the traditional form, as in Eq. (32); see Muller and Olbers (1975); Zakharov et al. (1992); Lvov and Tabak (2001, 2004).
The matrix elements according to Caillol and Zeitlin (2000) are shown as \\(X_{k,l,p}\\) and \\(Y_{k,l,p}^{\\pm}\\) in their Eqs. (62) and (63), where \\(|V^{{\\bf p},\\,{\\rm CZ}}_{{\\bf p}_{1},{\\bf p}_{2}}|^{2}=X_{{\\bf p}_{1},{\\bf p}_{2},{\\bf p}}\\) and \\(|V^{{\\bf p}_{1},\\,{\\rm CZ}}_{{\\bf p}_{2},{\\bf p}}|^{2}=Y_{{\\bf p}_{1},-{\\bf p}_{2},{\\bf p}}^{+}\\). In our notation it reads
\\[|V^{{\\bf p},\\,{\\rm CZ}}_{{\\bf p}_{1},{\\bf p}_{2}}|^{2} \\propto (|{\\bf k}|{\\rm sgn}(m)+|{\\bf k}_{1}|{\\rm sgn}(m_{1})+|{\\bf k}_{2}|{\\rm sgn}(m_{2}))^{2}\\frac{(m^{2}-m_{1}m_{2})^{2}}{|m||m_{1}||m_{2}||{\\bf k}||{\\bf k}_{1}||{\\bf k}_{2}|}\\] \\[\\times\\left(\\frac{|{\\bf k}|^{2}-|{\\bf k}_{1}|{\\rm sgn}(m_{1})|{\\bf k}_{2}|{\\rm sgn}(m_{2})}{m^{2}-m_{1}m_{2}}m-\\frac{|{\\bf k}_{1}|^{2}}{m_{1}}-\\frac{|{\\bf k}_{2}|^{2}}{m_{2}}\\right)^{2}. \\tag{55}\\]
This expression will be used for the comparison of approaches in Section 3.
_Isopycnal Hamiltonian_
Finally, in Lvov and Tabak (2004) the following wave-wave interaction matrix element was derived based on a canonical Hamiltonian formulation in isopycnal coordinates:
\\[|V_{1,2}^{0\\ {\\rm H}}|^{2}=\\frac{N^{2}}{32g}\\left(\\left(\\frac{k{\\bf k}_{1}\\cdot{\\bf k}_{2}}{k_{1}k_{2}}\\sqrt{\\frac{\\omega_{1}\\omega_{2}}{\\omega}}+\\frac{k_{1}{\\bf k}_{2}\\cdot{\\bf k}}{k_{2}k}\\sqrt{\\frac{\\omega_{2}\\omega}{\\omega_{1}}}+\\frac{k_{2}{\\bf k}\\cdot{\\bf k}_{1}}{kk_{1}}\\sqrt{\\frac{\\omega\\omega_{1}}{\\omega_{2}}}\\right.\\right.\\] \\[\\left.\\left.+\\frac{f^{2}}{\\sqrt{\\omega\\omega_{1}\\omega_{2}}}\\frac{k_{1}^{2}{\\bf k}_{2}\\cdot{\\bf k}-k_{2}^{2}{\\bf k}\\cdot{\\bf k}_{1}-k^{2}{\\bf k}_{1}\\cdot{\\bf k}_{2}}{kk_{1}k_{2}}\\right)^{2}\\right.\\] \\[\\left.+\\left(f\\frac{{\\bf k}_{1}\\cdot{\\bf k}_{2}^{\\perp}}{kk_{1}k_{2}}\\left(\\sqrt{\\frac{\\omega}{\\omega_{1}\\omega_{2}}}(k_{1}^{2}-k_{2}^{2})-\\sqrt{\\frac{\\omega_{1}}{\\omega_{2}\\omega}}(k_{2}^{2}-k^{2})-\\sqrt{\\frac{\\omega_{2}}{\\omega\\omega_{1}}}(k^{2}-k_{1}^{2})\\right)\\right)^{2}\\right). \\tag{56}\\]

Lvov and Tabak (2001) is the rotationless limit of Lvov and Tabak (2004): taking \\(f\\to 0\\), Eq. (56) reduces to
\\[|V_{\\mathbf{p}_{1},\\mathbf{p}_{2}}^{\\mathrm{P}}|^{2}\\propto\\frac{1}{|\\mathbf{k }||\\mathbf{k}_{1}||\\mathbf{k}_{2}|}\\left(|\\mathbf{k}|\\mathbf{k}_{1}\\cdot \\mathbf{k}_{2}\\sqrt{\\left|\\frac{m}{m_{1}m_{2}}\\right|}+|\\mathbf{k}_{1}| \\mathbf{k}_{2}\\cdot\\mathbf{k}\\sqrt{\\left|\\frac{m_{1}}{m_{2}m}\\right|}+|\\mathbf{ k}_{2}|\\mathbf{k}\\cdot\\mathbf{k}_{1}\\sqrt{\\left|\\frac{m_{2}}{mm_{1}}\\right|} \\right)^{2}. \\tag{57}\\]
Observe that in this form, these equations share structural similarities with Eq. (54).
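To illustrate the agreement on the resonant manifold claimed above, the following sketch (our own addition; the triad is hand-constructed, and the overall proportionality constants of Eqs. (54) and (57) are set to unity) evaluates both forms on an exactly resonant, collinear triad of the non-rotating hydrostatic dispersion relation \\(\\omega=|{\\bf k}|/|m|\\), in units with \\(g/(\\rho_{0}N)=1\\):

```python
import numpy as np

def omega(k, m):
    """Non-rotating hydrostatic dispersion, units with g/(rho0*N) = 1."""
    return np.linalg.norm(k) / abs(m)

def V2_voronovich(k, k1, k2, m, m1, m2):
    """Squared matrix element of Eq. (54), up to its overall constant."""
    w, w1, w2 = omega(k, m), omega(k1, m1), omega(k2, m2)
    K, K1, K2 = map(np.linalg.norm, (k, k1, k2))
    det = w1 + w2 - w                     # vanishes on the resonant manifold
    bracket = (-m * ((k @ k1 * abs(m1) / K1 + k @ k2 * abs(m2) / K2) / (K * abs(m)) + det / w)
               + m1 * ((k @ k1 * abs(m) / K + k1 @ k2 * abs(m2) / K2) / (K1 * abs(m1)) - det / w1)
               + m2 * ((k @ k2 * abs(m) / K + k2 @ k1 * abs(m1) / K1) / (K2 * abs(m2)) - det / w2))
    return K * K1 * K2 / abs(m * m1 * m2) * bracket**2

def V2_isopycnal(k, k1, k2, m, m1, m2):
    """Squared matrix element of Eq. (57), up to its overall constant."""
    K, K1, K2 = map(np.linalg.norm, (k, k1, k2))
    s = (K * (k1 @ k2) * np.sqrt(abs(m / (m1 * m2)))
         + K1 * (k2 @ k) * np.sqrt(abs(m1 / (m2 * m)))
         + K2 * (k @ k1) * np.sqrt(abs(m2 / (m * m1))))
    return s**2 / (K * K1 * K2)

# Exactly resonant collinear triad: k = k1 + k2, m = m1 + m2, w = w1 + w2.
k1, k2 = np.array([1.0, 0.0]), np.array([1.0, 0.0])
m1, m2 = 1.0 + np.sqrt(2.0), -1.0
k, m = k1 + k2, m1 + m2                   # m = sqrt(2); w = w1 + w2 = sqrt(2)

print(V2_voronovich(k, k1, k2, m, m1, m2))  # both print ~13.657
print(V2_isopycnal(k, k1, k2, m, m1, m2))
```

With these normalizations the two expressions return the same value on this triad; off the resonant manifold the \\((\\omega_{1}+\\omega_{2}-\\omega)\\) terms in Eq. (54) no longer vanish and the two forms differ.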
## References
* Annenkov and Shrira (2006) Annenkov, S. and V. I. Shrira, 2006: Role of non-resonant interactions in the evolution of nonlinear random water wave fields. _J. Fluid Mech._, **561**, 181-207.
* Brehovski (1975) Brehovski, 1975: On interactions of internal and surface waves in the ocean. _Oceanology_, **15**.
* Caillol and Zeitlin (2000) Caillol, P. and V. Zeitlin, 2000: Kinetic equations and stationary energy spectra of weakly nonlinear internal gravity waves. _Dynamics of Atmospheres and Oceans_, **32**, 81-112.
* Carnevale and Frederiksen (1983) Carnevale, G. F. and J. S. Frederiksen, 1983: A statistical dynamical theory of strongly nonlinear internal gravity waves. _Geophys Astrophys Fluid Dyn._, **23**, 175-207.
* Clebsch (1859) Clebsch, A., 1859: Über die Integration der hydrodynamischen Gleichungen [On the integration of the hydrodynamic equations]. _J. Reine Angew. Math._, **56**, 1-10.
* DeWitt and Wright (1982) DeWitt, R. J. and J. Wright, 1982: Self-consistent effective medium theory of random internal waves. _J. Fluid Mech._, **115**, 283-.
* DeWitt and Wright (1984) ------, 1984: Self-consistent effective medium parameters for oceanic internal waves. _J. Fluid Mech._, **146**, 252-.
* Gershgorin et al. (2007) Gershgorin, B., Y. V. Lvov, and D. Cai, 2007: Interactions of renormalized waves in thermalized Fermi-Pasta-Ulam chains. _Phys. Rev. E_, **75**, 046603.
* Hasselmann (1966) Hasselmann, K., 1966: Feynman diagrams and interaction rules of wave-wave scattering processes. _Rev. Geophys._, **4**, 1-32.
* Janssen (2003) Janssen, P. A. E. M., 2003: Nonlinear four-wave interactions and freak waves. _J. Phys. Oceanogr._, **33**, 863-884.
* Kenyon (1966) Kenyon, K. E., 1966: Wave-wave scattering for gravity waves and Rossby waves. Ph.D. thesis, UCSD.
* Kenyon (1968) ------, 1968: Wave-wave interactions of surface and internal waves. _J. Mar. Res._, **26**, 208-231.
* Kramer et al. (2003) Kramer, P. R., J. A. Biello, and Y. V. Lvov, 2003: Application of weak turbulence theory to the FPU model. _Discrete and Continuous Dynamical Systems, Expanded Volume for the Proceedings of the Fourth International Conference on Dynamical Systems and Differential Equations_, 482.
* Lvov et al. (1997) Lvov, V. S., Y. V. Lvov, A. C. Newell, and V. E. Zakharov, 1997: Statistical description of acoustic turbulence. _Phys. Rev. E_, **56**, 390-405.
* Lvov and Nazarenko (2004) Lvov, Y. V. and S. Nazarenko, 2004: Noisy spectra, long correlations, and intermittency in wave turbulence. _Physical Review E_, **69**, 066 608.
* Lvov et al. (2006) Lvov, Y. V., S. Nazarenko, and B. Pokorni, 2006: Discreteness and its effect on the water-wave turbulence. _Physica D_, **218**, 24-35.
* Lvov et al. (2001) Lvov, Y. V., K. L. Polzin, E. G. Tabak, and N. Yokoyama, submitted: Oceanic internal wavefield: Theory of scale-invariant spectra. _J. Phys. Oceanogr._
* Lvov and Tabak (2001) Lvov, Y. V. and E. G. Tabak, 2001: Hamiltonian formalism and the Garrett and Munk spectrum of internal waves in the ocean. _Phys. Rev. Lett._, **87**, 169501.
* Lvov and Tabak (2004) ------, 2004: A Hamiltonian formulation for long internal waves. _Physica D_, **195**, 106-122.
* McComas (1975) McComas, C. H., 1975: Nonlinear interactions of internal gravity waves. Ph.D. thesis, The Johns Hopkins University.
* McComas (1977) ------, 1977: Equilibrium mechanisms within the oceanic internal wavefield. _J. Phys. Oceanogr._, **7**, 836-845.
* McComas and Bretherton (1977) McComas, C. H. and F. P. Bretherton, 1977: Resonant interaction of oceanic internal waves. _J. Geophys. Res._, **82**, 1397-1412.
* McComas and Muller (1981a) McComas, C. H. and P. Muller, 1981a: The dynamic balance of internal waves. _J. Phys. Oceanogr._, **11**, 970-986.
* McComas and Muller (1981b) ------, 1981b: Time scales of resonant interactions among oceanic internal waves. _J. Phys. Oceanogr._, **11**, 139-147.
* Meiss et al. (1979) Meiss, J. D., N. Pomphrey, and K. M. Watson, 1979: Numerical analysis of weakly nonlinear wave turbulence. _Proc. Nat. Acad. Sci. U.S._, **76**, 2109-2113.
* Milder (1982) Milder, M., 1982: Hamiltonian description of internal waves. _J. Fluid Mech._, **119**, 269-282.
* Miropolsky (1981) Miropolsky, Y. Z., 1981: _Dynamics of Internal Gravity Waves in the Ocean_ (in Russian). Gidrometeoizdat.
* Muller (1976) Muller, P., 1976: On the diffusion of momentum and mass by internal gravity waves. _J. Fluid Mech._, **77**, 789-823.
* Muller et al. (1986) Muller, P., G. Holloway, F. Henyey, and N. Pomphrey, 1986: Nonlinear interactions among internal gravity waves. _Rev. Geophys._, **24**, 493-536.
* Muller and Olbers (1975) Muller, P. and D. J. Olbers, 1975: On the dynamics of internal waves in the deep ocean. _J. Geophys. Res._, **80**, 3848-3860.
* Olbers (1974) Olbers, D. J., 1974: On the energy balance of small scale internal waves in the deep sea. _Hamburg, Geophys. Einzelschriftern_, **27**.
* Olbers (1976) ------, 1976: Nonlinear energy transfer and the energy balance of the internal wave field in the deep ocean. _J. Fluid Mech._, **74**, 375-399.
* Pelinovsky and Raevsky (1977) Pelinovsky, E. N. and M. A. Raevsky, 1977: Weak turbulence of the internal waves of the ocean. _Atm. Ocean Phys.-Izvestija_, **13**, 187-193.
* Pinkel (2008) Pinkel, R., 2008: Advection, phase distortion, and the frequency spectrum of finescale fields in the sea. _J. Phys. Oceanogr._, **38**, 291-313.
* Pomphrey et al. (1980) Pomphrey, N., J. D. Meiss, and K. M. Watson, 1980: Description of nonlinear internal wave interactions using Langevin methods. _J. Geophys. Res._, **85**, 1085-1094.
* Seliger and Whitham (1968) Seliger, R. L. and G. B. Whitham, 1968: Variational principles in continuum mechanics. _Proc. Royal Soc., Ser. A_, **305**, 1-25.
* Sherman and Pinkel (1991) Sherman, J. T. and R. Pinkel, 1991: Estimates of the vertical wavenumber-frequency spectra of vertical shear and strain. _J. Phys. Ocean._, **21**, 292-303.
* Voronovich (1979) Voronovich, A. G., 1979: Hamiltonian formalism for internal waves in the ocean. _Izvestiya, Atmospheric and Oceanic Physics_, **16**, 52-57.
* Voronovich et al. (2006) Voronovich, V. V., I. A. Sazonov, and V. I. Shrira, 2006: On radiating solitons in a model of the internal wave shear flow resonance. _J. Fluid Mech._, **568**, 273-301.
* Yuen and Lake (1982) Yuen, H. C. and B. M. Lake, 1982: Nonlinear dynamics of deep-water gravity waves. _Adv. Appl. Mech._, **22**, 67-229.
* Zakharov et al. (1992) Zakharov, V. E., V. S. Lvov, and G. Falkovich, 1992: _Kolmogorov Spectra of Turbulence_. Springer-Verlag.
List of Figures
* 1 Matrix elements \\(\\mid V^{{\\bf p}}_{{\\bf p}_{1},{\\bf p}_{2}}\\mid^{2}\\) given by the solution (36b). upper left: according to Muller and Olbers (1975), upper right: according to Voronovich (1979), bottom left: according to Caillol and Zeitlin (2000), bottom right: according to Lvov and Tabak (2001).
* 2 Matrix elements \\(\\mid V^{{\\bf p}_{1}}_{{\\bf p}_{2},{\\bf p}}\\mid^{2}\\) given by the solution (37a). upper left: according to Muller and Olbers (1975), upper right: according to Caillol and Zeitlin (2000), bottom right: according to Lvov and Tabak (2001).
* 3 Matrix elements \\(\\mid V^{{\\bf p}_{1}}_{{\\bf p}_{2},{\\bf p}}\\mid^{2}\\) given by the solution (37b). upper left: according to Muller and Olbers (1975), upper right: according to Voronovich (1979), bottom left: according to Caillol and Zeitlin (2000), bottom right: according to Lvov and Tabak (2001).
* 4 upper: Matrix elements \\(\\mid V^{{\\bf p}}_{{\\bf p}_{1},{\\bf p}_{2}}\\mid^{2}\\) given by the solution (36a) (elastic scattering, ES). middle: Matrix elements \\(\\mid V^{{\\bf p}}_{{\\bf p}_{1},{\\bf p}_{2}}\\mid^{2}\\) given by the solution (36b) (induced diffusion, ID). bottom: Matrix elements \\(\\mid V^{{\\bf p}_{1}}_{{\\bf p}_{2},{\\bf p}}\\mid^{2}\\) given by the solution (37a), which gives PSI as \\(\\mid{\\bf k}_{1}\\mid\\to 0\\) (\\(\\epsilon\\to 0\\)). The matrix elements here are shown as functions of \\(\\epsilon\\) such that \\((|{\\bf k}_{1}|,|{\\bf k}_{2}|)=(\\epsilon,\\epsilon/3+1)|{\\bf k}|\\). All four versions of the matrix elements are plotted here: the appearance of a single line in each figure panel testifies to the similarity of the elements on the resonant manifold.
* 5 Normalized Boltzmann rates (34) for the Garrett and Munk spectrum (46) calculated via (40). The panels show the normalized Boltzmann rate calculated using Lvov and Tabak (2004), equation (56), with \\(\\delta=10^{-3}\\) (upper-left), \\(\\delta=10^{-2}\\) (upper-right), \\(\\delta=10^{-1}\\) (lower-left), and \\(\\delta=0.5\\) (lower-right). The white regions correspond to extremely fast time scales, faster than the linear time scale.
List of Tables
* 1 A list of various kinetic equations. Results from Olbers (1976), McComas and Bretherton (1977) and Pomphrey et al. (1980) are reviewed in Muller et al. (1986), who state that Olbers (1976), McComas and Bretherton (1977) and an unspecified Eulerian representation are consistent on the resonant manifold. Pomphrey et al. (1980) utilizes Langevin techniques to assess nonlinear transports. Muller et al. (1986) characterizes those Langevin results as being mutually consistent with the direct evaluations of kinetic equations presented in Olbers (1976) and McComas and Bretherton (1977). Kenyon (1968) states (without detail) that Kenyon (1966) and Hasselmann (1966) give numerically similar results. A formulation in terms of discrete modes will typically permit an arbitrary buoyancy profile, but obtaining results requires specification of the profile. Of the discrete formulations, Pomphrey et al. (1980) use an exponential profile and the others assume a constant stratification rate. The kinetic equations marked by \\({}^{\\dagger}\\) are investigated in Section 3, while kinetic equations marked by \\({}^{\\ddagger}\\) are investigated further in Section a.
* 2 Numerical evaluations of \\(\\int_{f}^{N}E(m,\\omega)d\\omega\\) for vertical mode numbers 1-8. The sum is given in the right-most column.
\\begin{table}
\\begin{tabular}{r c c c c c} \\hline source & coordinate & vertical & rotation & hydro- & special \\\\ & system & structure & & static & \\\\ \\hline Hasselmann (1966) & Lagrangian & discrete & no & no & \\\\ Kenyon (1966, 1968) & Eulerian & discrete & no & no & non-Hamiltonian \\\\ Müller and Olbers (1975)\\({}^{\\dagger\\ddagger}\\) & Lagrangian & cont. & yes & no & \\\\ McComas (1975, 1977) & Lagrangian & cont. & yes & yes & \\\\ Pelinovsky and Raevsky (1977) & Lagrangian & cont. & no & no & Clebsch \\\\ Voronovich (1979)\\({}^{\\dagger}\\) & Eulerian & cont. & no & yes & Clebsch \\\\ Pomphrey et al. (1980) & Lagrangian & discrete & yes & no & Langevin \\\\ Milder (1982) & Isopycnal & n/a & no & no & \\\\ Caillol and Zeitlin (2000)\\({}^{\\dagger}\\) & Eulerian & cont. & no & no & non-Hamiltonian \\\\ Lvov and Tabak (2001)\\({}^{\\dagger}\\) & Isopycnal & cont. & no & yes & canonical \\\\ Lvov and Tabak (2004)\\({}^{\\ddagger}\\) & Isopycnal & cont. & yes & yes & canonical \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: A list of various kinetic equations. Results from Olbers (1976), McComas and Bretherton (1977) and Pomphrey et al. (1980) are reviewed in Müller et al. (1986), who state that Olbers (1976), McComas and Bretherton (1977) and an unspecified Eulerian representation are consistent on the resonant manifold. Pomphrey et al. (1980) utilizes Langevin techniques to assess nonlinear transports. Müller et al. (1986) characterizes those Langevin results as being mutually consistent with the direct evaluations of kinetic equations presented in Olbers (1976) and McComas and Bretherton (1977). Kenyon (1968) states (without detail) that Kenyon (1966) and Hasselmann (1966) give numerically similar results. A formulation in terms of discrete modes will typically permit an arbitrary buoyancy profile, but obtaining results requires specification of the profile. Of the discrete formulations, Pomphrey et al. (1980) use an exponential profile and the others assume a constant stratification rate. The kinetic equations marked by \\({}^{\\dagger}\\) are investigated in Section 3, while kinetic equations marked by \\({}^{\\ddagger}\\) are investigated further in Section a.
\\begin{table}
\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \\hline \\(\\dot{E}\\times 10^{-10}\\) W/kg & & mode-1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & \\(\\Sigma\\) \\\\ \\hline Lvov and Tabak (2004) & GM76 & -1.46 & -1.72 & -1.76 & -1.69 & -1.57 & -1.40 & -1.08 & -0.81 & -11.5 \\\\ \\hline Pomphrey et al. (1980) & GM76 & -1.83 & -2.17 & -2.17 & -1.83 & -1.67 & -1.00 & & & -10.7 \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: Numerical evaluations of \\(\\int_{f}^{N}E(m,\\omega)d\\omega\\) for vertical mode numbers 1-8. The sum is given in the right-most column.
# Narrowing Constraints with Type Ia Supernovae: Converging on a Cosmological Constant
Scott Sullivan\\({}^{1}\\), Asantha Cooray\\({}^{1}\\), Daniel E. Holz\\({}^{2,3}\\)
\\({}^{1}\\)Center for Cosmology, Department of Physics & Astronomy, University of California, Irvine, 92697 \\({}^{2}\\)Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545 \\({}^{3}\\) Department of Astronomy & Astrophysics, University of Chicago, Chicago, IL 60637
November 3, 2021
## 1 Introduction
Distance estimates to Type Ia supernovae (SNe) are currently a preferred probe of the expansion history of the Universe [1], and have led to the discovery that the expansion is accelerating [2, 3, 4, 5]. It is now believed that a mysterious dark energy component, with an energy density \\(\\sim\\)70% of the total energy density of the universe, is responsible for the accelerated expansion [6, 7]. While the presence of acceleration is now well established by various cosmological probes, the underlying physics remains a complete mystery [8]. As the precise nature of the dark energy has profound implications, understanding its properties is one of the biggest challenges of modern physics.
With the advent of large surveys for Type Ia supernovae, such as the Supernova Legacy Survey (SNLS)1 [9] and Essence2 [4], among others, it is now possible to consider detailed studies of the expansion history of the Universe, and shed light on the underlying physics responsible for the acceleration. Although the dark energy may be complex, thus far it is generally described by a cosmological constant, or through a simple dynamical component such as a single scalar field rolling down a potential [10]. The observational data are then used to constrain these simple models, generally in the form of determining a dark energy equation-of-state (EOS) describing the ratio of its pressure to its density [11], or by measuring dynamical parameters such as the cosmic jerk [12]. Using the EOS as the primary variable, several studies have considered how current and future data might be used to make statements on the physics responsible for dark energy [13], including attempts to establish the shape of the scalar field potential [14, 15].
Footnote 1: [http://www.cfht.hawaii.edu/SNLS/](http://www.cfht.hawaii.edu/SNLS/)
When model fitting data it is generally assumed that the dark energy EOS as a function of redshift, \\(w(z)\\), follows a certain predetermined evolutionary history. Common parameterizations include a linear variation with redshift, \\(w(z)=w_{0}+w_{z}z\\) [16], an evolution that asymptotes to a constant \\(w\\) at high redshift, \\(w(a)=w_{0}+w_{a}(1-a)\\) with \\(a\\) the scale factor [17], or an evolution with an EOS of the form \\(w(z)=w_{0}-\\alpha\\ln(1+z)\\) [18]. Unfortunately, fitting data to an assumed functional form leads to possible biases in statements one makes about the dark energy and its evolution, especially if the true behavior of the dark energy EOS differs significantly from the assumed functional form [15]. Moreover, statements related to the dark energy EOS are often made under the assumption of a spatially flat universe, while there still exist percent-level uncertainties on the curvature.
Instead of using a parameterized form for \\(w(z)\\), one can also utilize a variant of the principal component analysis advocated in Ref. [19] to establish the EOS with redshift. This was first applied in Ref. [20] to a set of supernova data from Ref. [1]. Recently, Riess et al. [3] analyzed a new set of \\(z>1\\) SNe from the Hubble Space Telescope combined with low redshift SNe, and by making use of the same technique of decorrelating the \\(w_{i}\\) binned estimates, established that the EOS is not strongly evolving. Since the analysis of Riess et al. [3], the supernova sample size has increased by at least a factor of 2 through the Essence survey [4]. The SNe light curves from several independent datasets have been analyzed with a common method to extract distance moduli in Ref. [5], and we use this publicly available data2 to extract the EOS.
Footnote 2: [http://www.ctio.noao.edu/wproject/wresults/](http://www.ctio.noao.edu/wproject/wresults/)
We combine the different distance measurements to extract independent estimates of the EOS when binned in several redshift bins in the redshift range \\(0<z<1.8\\). We make use of these binned and uncorrelated estimates of the EOS to address a simple question: is the dark energy consistent with a cosmological constant? For a cosmological constant \\(w(z)=-1\\) exactly, while dynamical dark energy models lead to an EOS that either evolves from a large value to -1 today (models which are categorized as \"freezing\" in Ref. [21]) or from -1 at high redshift to large values today (\"thawing\" models of Ref. [21]). In our analysis we also allow departures from a flat universe, but allow curvature to be constrained based on complementary information from the cosmic microwave background (_WMAP_; [7]) and baryon acoustic oscillation distance scales [26]. As discussed in Ref. [20], our analysis is facilitated by the fact that our measurements of the EOS are completely uncorrelated. Although we focus on the EOS, we note that one can also extract uncorrelated estimates of other parameters related to the expansion history and dark energy, such as the dark energy density [22]. However, unlike the case with the EOS, such estimates cannot be used to directly address the simple question of whether the dark energy is a cosmological constant.
We make use of our binned estimates to comment on several new developments related to dark energy studies. First, in addition to addressing the extent to which \\(w(z)=-1\\) in present data, we also comment on the extent to which dynamical dark energy models, such as "freezing" and "thawing" models [21], may be distinguished or ruled out with current distance data. We also discuss the role uncorrelated, independent EOS measurements can play in furthering our understanding of the dark energy. Recently, the Dark Energy Task Force (DETF) [23] has suggested a figure-of-merit to compare the abilities of different experiments to extract information related to dark energy. This is done in terms of the inverse area of the error ellipse of the equation of state and its evolution with redshift, utilizing either the Linder parameterization [17] or two parameters of the form \\(w(a)=w_{p}+(a_{p}-a)w_{a}\\) [13]. The discussion is then restricted to a two parameter description of the dark energy equation of state, assuming a very specific evolutionary behavior. The ability to extract information about dark energy from current and future experiments thus becomes a model dependent statement. Using uncorrelated, independent equation of state estimates, we propose a model independent figure-of-merit. Our approach involves the inverse of the sum of inverse variances of uncorrelated \\(w(z_{i})\\) bins as a way to capture all of the available information related to dark energy in a given data set or experiment.
The paper is organized as follows: In the next Section we review techniques for reconstructing the EOS, following the methods of Refs. [20] and [3]. In Section 3 we present our results, addressing whether dark energy is a cosmological constant or has dynamical behavior that leads to an evolution in the EOS. We show that existing data rule out extreme forms of dynamical behavior at the 68% confidence level, including models that either start with \\(w(z)>-0.4\\) at \\(z>1.0\\) or models that asymptote to values of \\(w(z)>-0.85\\) at \\(z<0.2\\). In Section 3.1 we outline a new figure of merit to assess the dark energy information content of future experiments, based on uncorrelated binned estimates of the EOS. We conclude with a summary of results in Section 4.
## 2 Methodology
Figure 1: Hubble diagram for the type Ia supernova data used in the present analysis. The dataset includes a total of 192 SNe from the recent analysis of light curves in Ref. [5]. For comparison, we also separately analyze a subset of 104 SNe within this sample that were also used in Riess et al. [3]. The distance moduli used here for this subset, however, differ from the original study due to the reanalysis of light curves in Davis et al. [5]. The orange line is our best fit.

A simple way to model the dark component of the universe credited for the accelerating expansion is through a modification of the standard cosmological model. We utilize the Friedmann equations, and specify the dark energy density and its equation-of-state (EOS). We assume a piecewise constant EOS, with value \\(w_{i}\\) in each \\(i^{\\rm th}\\) redshift bin (defined by an upper boundary at \\(z_{i}\\)). We fit the observational data to the luminosity distance as a function of redshift. The expression for the luminosity distance, \\(d_{L}(z)\\), depends on whether the universe is flat, positively, or negatively curved (i.e., the sign of \\(\\Omega_{k}\\)), and is given by
\\[d_{L}(z)=(1+z)\\frac{c}{H_{0}}\\times\\left\\{\\begin{array}{ll}\\frac{1}{\\sqrt{| \\Omega_{k}|}}{\\rm sinh}\\Big{(}\\sqrt{|\\Omega_{k}|}\\int_{0}^{z}\\frac{dz^{\\prime} }{E(z^{\\prime})}\\Big{)}&\\quad\\mbox{if $\\Omega_{k}>0$,}\\\\ \\int_{0}^{z}\\frac{dz^{\\prime}}{E(z^{\\prime})}&\\quad\\mbox{if $\\Omega_{k}=0$,}\\\\ \\frac{1}{\\sqrt{|\\Omega_{k}|}}\\sin\\Big{(}\\sqrt{|\\Omega_{k}|}\\int_{0}^{z}\\frac{ dz^{\\prime}}{E(z^{\\prime})}\\Big{)}&\\quad\\mbox{if $\\Omega_{k}<0$,}\\end{array}\\right. \\tag{1}\\]
where
\\[E(z)=\\left[\\Omega_{M}(1+z)^{3}+\\Omega_{k}(1+z)^{2}+(1-\\Omega_{k}-\\Omega_{M})F (z)\\right]^{\\frac{1}{2}}, \\tag{2}\\]
and \\(F(z)\\) depends on the binning of \\(w(z)\\). For the \\(n^{\\rm th}\\) redshift bin \\(F(z)\\) has the form
\\[F(z_{n}>z>z_{n-1})=(1+z)^{3(1+w_{n})}\\prod_{i=0}^{n-1}(1+z_{i})^{3(w_{i}-w_{i+ 1})}. \\tag{3}\\]
We define the zeroth bin as \\(z_{0}=0\\), so the product is unity for redshift \\(z\\) in the first bin. For our primary analysis we set \\(z_{1}=0.2\\), \\(z_{2}=0.5\\), \\(z_{3}=1.8\\), and \\(z_{4}\\) extends beyond the surface of last scattering at \\(z_{CMB}=1089\\). We assume \\(w(z>1.8)=-1\\) and allow variation within the remaining three redshift bins. Selecting the cutoff point for \\(z_{3}\\) is fairly arbitrary; we found that pushing it back as far as \\(z_{3}=2.5\\) does not substantially alter the outcome of our analysis.
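As an illustration of Eqs. (1)-(3), a minimal numerical sketch might look as follows (our own code, not the authors'; the bin edges and \\(w(z>1.8)=-1\\) follow the text, while the parameter values and function names are ours):

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458                      # speed of light [km/s]
Z_EDGES = [0.0, 0.2, 0.5, 1.8]           # bin edges used in the text

def F(z, w):
    """Dark-energy density evolution, Eq. (3), for piecewise-constant w."""
    n = np.searchsorted(Z_EDGES, z) - 1  # bin index containing redshift z
    n = min(max(n, 0), len(w) - 1)       # w(z > 1.8) = w[-1] = -1 here
    out = (1.0 + z)**(3.0 * (1.0 + w[n]))
    for i in range(n):                   # product over lower bin boundaries
        out *= (1.0 + Z_EDGES[i + 1])**(3.0 * (w[i] - w[i + 1]))
    return out

def E(z, Om, Ok, w):
    """Dimensionless expansion rate of Eq. (2)."""
    return np.sqrt(Om*(1+z)**3 + Ok*(1+z)**2 + (1 - Om - Ok)*F(z, w))

def d_L(z, H0=70.0, Om=0.27, Ok=0.0, w=(-1.0, -1.0, -1.0, -1.0)):
    """Luminosity distance of Eq. (1) in Mpc, valid for any sign of Ok."""
    chi = quad(lambda zp: 1.0 / E(zp, Om, Ok, w), 0.0, z)[0]
    if Ok > 0:
        chi = np.sinh(np.sqrt(Ok) * chi) / np.sqrt(Ok)
    elif Ok < 0:
        chi = np.sin(np.sqrt(-Ok) * chi) / np.sqrt(-Ok)
    return (1.0 + z) * C_KM_S / H0 * chi

mu = 5.0 * np.log10(d_L(0.5)) + 25.0     # distance modulus at z = 0.5
```

One can check continuity of \\(F(z)\\) across the bin boundaries directly, which is the role of the product in Eq. (3).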
In addition to SNe, we also make use of four primary constraints from the literature following the analysis of Riess et al. [3], modifying these to account for variations in curvature, since the analysis in Ref. [3] assumed an _a priori_ flat cosmological model. These constraints are:
* The product of the Hubble parameter \\(h\\equiv H_{0}/100\\) and the present local mass density \\(\\Omega_{m}\\) from SDSS large scale structure measurements [38], given by \\(\\Omega_{m}h=0.213\\pm 0.023\\). In cases where we allow curvature to vary, we either take a flat, broad prior on the curvature or, to highlight the result under the assumption of a measured value for the curvature, we take \\(\\Omega_{k}=-0.014\\pm 0.017\\) as derived by combining _WMAP_ data with the Hubble constant [7].
* The SDSS luminous red galaxy baryon acoustic oscillation (BAO) distance estimate to redshift \\(z_{\\rm BAO}=0.35\\). Here the constraint is on the overall parameter \\(A\\equiv\\frac{\\sqrt{\\Omega_{M}H_{0}^{2}}}{cz_{\\rm BAO}}\\left[r^{2}(z_{\\rm BAO}) \\frac{cz_{\\rm BAO}}{H_{0}E(z_{\\rm BAO})}\\right]^{1/3}\\), where \\(r(z)=d_{L}(z)/(1+z)\\) is the angular diameter distance. The angular correlation function of red galaxies in the SDSS spectroscopy survey leads to \\(A=0.469(\\frac{n}{0.98})^{-0.35}\\pm 0.017\\)[26]. Following Riess et al. [3], we use the WMAP estimate for the scalar tilt with \\(n=0.95\\)[7].
* The distance to last scattering, at \\(z_{\\rm CMB}=1089\\), written in the dimensionless form \\(R_{CMB}\\equiv\\frac{\\sqrt{\\Omega_{M}H_{0}^{2}}}{c}r(z_{\\rm CMB})\\) where \\(r(z_{\\rm CMB})\\) is the angular diameter distance to the CMB last scattering surface. We use the dark energy and curvature independent estimate with \\(R=1.70\\pm 0.03\\)[27].
* The distance ratio between \\(z=0.35\\) and last scattering \\(z=1089\\) as measured by the SDSS BAO analysis [26]: \\[R_{0.35}=\\frac{\\left[r^{2}(z_{\\rm BAO})\\frac{cz_{\\rm BAO}}{H_{0}E(z_{\\rm BAO})} \\right]^{1/3}}{r(z_{\\rm CMB})},\\] (4) with the value of \\(R_{0.35}=0.0979\\pm 0.0036\\).
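The sketch below (again our own illustration; the central values are the ones quoted in the four constraints above, and radiation is neglected in \\(E(z)\\) for brevity) shows how these constraints enter as additional \\(\\chi^{2}\\) terms, here for an illustrative \\(w=-1\\) model:

```python
import numpy as np
from scipy.integrate import quad

def E(z, Om=0.27, Ok=0.0):
    """Expansion rate for a cosmological-constant model (radiation neglected)."""
    return np.sqrt(Om*(1+z)**3 + Ok*(1+z)**2 + 1.0 - Om - Ok)

def r_comoving(z, Om, Ok):
    """Dimensionless comoving distance in units of c/H0, any sign of Ok."""
    chi = quad(lambda zp: 1.0/E(zp, Om, Ok), 0.0, z)[0]
    if Ok > 0: chi = np.sinh(np.sqrt(Ok)*chi)/np.sqrt(Ok)
    if Ok < 0: chi = np.sin(np.sqrt(-Ok)*chi)/np.sqrt(-Ok)
    return chi

def chi2_external(Om=0.27, Ok=0.0, h=0.70):
    zb, zc, n = 0.35, 1089.0, 0.95       # BAO and CMB redshifts, scalar tilt
    dv = (r_comoving(zb, Om, Ok)**2 * zb / E(zb, Om, Ok))**(1.0/3.0)
    A = np.sqrt(Om) / zb * dv            # SDSS BAO parameter
    Rcmb = np.sqrt(Om) * r_comoving(zc, Om, Ok)   # CMB shift parameter
    R035 = dv / r_comoving(zc, Om, Ok)            # BAO-to-CMB distance ratio
    return ((Om*h - 0.213)/0.023)**2 \
         + ((A - 0.469*(n/0.98)**-0.35)/0.017)**2 \
         + ((Rcmb - 1.70)/0.03)**2 \
         + ((R035 - 0.0979)/0.0036)**2
```

For \\(\\Omega_{m}\\simeq 0.27\\), \\(h\\simeq 0.7\\) and a flat geometry, all four terms are of order unity, as expected for a concordance model.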
When estimating parameters we make use of the \\(\\chi^{2}\\) statistic for a particular model with parameter set \\(\\theta\\) (\\(w_{i}\\), \\(H_{0}\\), \\(\\Omega_{m}\\), and in some cases \\(\\Omega_{k}\\)):
\\[\\chi^{2}(\\theta)=\\sum_{i=1}^{N}\\frac{\\left(\\mu_{i}^{\\rm theory}-\\mu_{i}^{\\rm data}\\right)^{2}}{\\sigma_{i}^{2}+\\sigma_{\\rm int}^{2}}, \\tag{5}\\]
where \\(N\\) is the total number of supernovae in the sample. While our total sample includes 192 supernovae, we also extract a subset of 104 supernovae that was analyzed previously [3]. While the subsample is for comparison with previous results, our distance estimates differ from the original analysis due to a reanalysis of light curves by Davis et al. [5] using a common light curve fitting method. When estimating \\(\\chi^{2}\\) we set an intrinsic dispersion of order \\(\\sigma_{\\rm int}\\sim 0.1\\), such that the reduced \\(\\chi^{2}\\) of the best-fit model is about one. Using \\(\\chi^{2}\\) we calculate the probability \\(P(\\theta)\\propto\\exp(-\\frac{\\chi^{2}(\\theta)}{2})\\) and derive constraints by generating Markov chains through a Monte-Carlo algorithm. The algorithm generates a set of models whose members appear in the set (or chain) a number of times proportional to their likelihood of being a good fit to the observed data, given the adopted priors. The likelihood probability functions for each independent parameter are generated by simply taking a histogram over the chain. We marginalize over \\(H_{0}\\) assuming a broad uniform prior over the range \\([30,85]\\) km s\\({}^{-1}\\) Mpc\\({}^{-1}\\). We also marginalize over \\(\\Omega_{m}\\) assuming the prior quoted above for \\(\\Omega_{m}h\\) from SDSS large scale structure measurements.
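A stripped-down version of such a Metropolis-style sampler is sketched below (our illustration; `d_L` is the distance function from the earlier sketch, and the data arrays `z`, `mu_obs`, `sig` stand in for the 192 SNe):

```python
import numpy as np

def mu_model(z, theta):
    """Distance moduli from Eq. (1); d_L as defined in the earlier sketch."""
    H0, Om, w = theta[0], theta[1], tuple(theta[2:]) + (-1.0,)
    return np.array([5*np.log10(d_L(zi, H0, Om, 0.0, w)) + 25 for zi in z])

def log_prob(theta, z, mu_obs, sig, sig_int=0.1):
    """log P = -chi^2/2 for theta = (H0, Om, w1, w2, w3), flat universe."""
    H0, Om = theta[0], theta[1]
    if not (30.0 < H0 < 85.0 and 0.0 < Om < 1.0):
        return -np.inf                                 # top-hat priors
    chi2 = np.sum((mu_model(z, theta) - mu_obs)**2 / (sig**2 + sig_int**2))
    chi2 += ((Om*H0/100.0 - 0.213) / 0.023)**2         # SDSS Om*h prior
    return -0.5*chi2

def metropolis(theta0, step, n_steps, *args):
    """Chain members appear with frequency proportional to P(theta)."""
    chain = [np.asarray(theta0, dtype=float)]
    lp = log_prob(chain[-1], *args)
    for _ in range(n_steps):
        prop = chain[-1] + step*np.random.randn(len(theta0))
        lp_prop = log_prob(prop, *args)
        if np.log(np.random.rand()) < lp_prop - lp:    # accept/reject step
            lp = lp_prop
            chain.append(prop)
        else:
            chain.append(chain[-1])
    return np.array(chain)
```

Histogramming columns of the returned chain directly yields the marginalized likelihood functions described above.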
In the case of dark energy parameters, the redshift binned EOS estimates are correlated such that their errors depend upon each other. These correlations in the redshift binned EOS estimates, \\(w_{i}\\), are captured by the covariance matrix, and this matrix can be generated by taking the average over the chain such that:
\\[C=\\left\\langle{\\bf ww}^{T}\\right\\rangle-\\left\\langle{\\bf w}\\right\\rangle \\left\\langle{\\bf w}^{T}\\right\\rangle. \\tag{6}\\]
This covariance matrix is not diagonal, as the values found for the various EOS estimates are correlated. This is expected, as the integration over low redshift bins in Eq. (1) obviously affects the model fit in the middle and higher redshift bins. The behavior is also such that the best constraints are found for the lowest redshift bin, with higher bins having progressively weaker constraints. With the addition of the distance scale to the last scattering surface from CMB data, the constraints in the higher redshift bins are improved, as seen in prior analyses [3].
Instead of discussing \\(w(z)\\) in correlated bins, we follow Huterer & Cooray [20] and transform the covariance matrix to decorrelate the EOS estimates. This is achieved by changing the basis through an orthogonal matrix rotation that diagonalizes the covariance matrix. We start from the definition of the Fisher matrix,
\\[\\mathbf{F}\\equiv C^{-1}=\\mathbf{O^{T}}\\Lambda\\mathbf{O} \\tag{7}\\]
where the matrix \\(\\Lambda\\) is the diagonalized covariance for the transformed bins. The uncorrelated parameters are then defined by the rotation performed by the orthogonal matrix, \\(\\mathbf{q}=\\mathbf{O}\\mathbf{w}\\), and have the covariance matrix \\(\\Lambda^{-1}\\). There is freedom of choice in which orthogonal matrix is used to perform this transformation; we use the particular choice advocated in Ref. [20] and write the weight transformation matrix:
\\[\\tilde{W}=\\mathbf{O}^{T}\\Lambda^{\\frac{1}{2}}\\mathbf{O} \\tag{8}\\]
where the rows are then normalized so that the weights from each band sum to unity. This choice ensures mostly positive contributions across all bands, an intuitively pleasing result: for example, we can interpret the weighting matrix as an indication of how much a measurement in the third bin is influenced by SNe in the first and second bins. We apply the transformation \\(\\tilde{W}\\) to each link in the Markov chain to generate a set of independent, uncorrelated measures of the EOS and their probability distributions as determined by the observables.
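In code, the decorrelation step amounts to a few linear-algebra lines (our sketch; `chain` is the array of correlated \\((w_{1},w_{2},w_{3})\\) samples; note that `numpy` writes the eigendecomposition as \\(F=O\\Lambda O^{T}\\), the transpose of the convention in Eq. (7)):

```python
import numpy as np

def decorrelate(chain):
    """Huterer-Cooray weights: W = F^{1/2} with rows normalized to unit sum."""
    C = np.cov(chain, rowvar=False)               # covariance matrix, Eq. (6)
    lam, O = np.linalg.eigh(np.linalg.inv(C))     # Fisher matrix F = C^{-1}
    W = O @ np.diag(np.sqrt(lam)) @ O.T           # square root of F, Eq. (8)
    W = W / W.sum(axis=1, keepdims=True)          # each row of weights sums to 1
    return chain @ W.T, W                         # uncorrelated samples q = W w

# q, W = decorrelate(chain)
# np.cov(q, rowvar=False) is diagonal up to sampling noise, and histograms of
# q[:, 1] - q[:, 0] give the difference distribution P(w2 - w1) used below.
```

Row normalization rescales but does not recorrelate the estimates: the transformed covariance remains diagonal, with entries set by the inverse squared row sums.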
In addition to the probability distribution functions for each of the uncorrelated EOS estimates, to study the redshift evolution of \\(w(z)\\) we also study the differences between these uncorrelated estimates. The probability distribution function of the difference between estimates \\(w_{i}\\) and \\(w_{j}\\) is generated through \\(P(w_{\\mathrm{diff}})\\propto\\int P_{w_{i}}(w)P_{w_{j}}(w+w_{\\mathrm{diff}})dw\\). While in the uncorrelated case an estimate \\(w_{i}\\) is not necessarily associated with a single redshift bin, due to support from adjacent redshift bins introduced by the transformation, any significant difference between \\(w_{i}\\) and \\(w_{j}\\) can be considered evidence for an evolution in the dark energy EOS. Moreover, if the dark energy is associated with a cosmological constant, then these difference estimates should be precisely zero.
## 3 Results
The results presented in this paper are derived from the statistical analysis of a combination of recent supernova surveys, including the Supernova Legacy Survey (SNLS) [9], the ESSENCE survey [4], and high-\\(z\\) supernovae discovered by the Hubble Space Telescope (HST) [3]. In particular, we use a total of 192 SNe Ia measurements taken from a combination of supernovae analyzed in Ref. [5] using a common light curve fitting method (Figure 1). Also, for comparison, results are presented for the 104 SNe Ia that overlap with the "Gold" data set presented in Riess et al. [3], although the distance moduli values we use here for the same subsample are slightly different from the values published in the original analysis due to variations in the light curve fitting. This analysis includes the four external constraints outlined in the previous section. In addition to the standard flat cosmological model generally assumed when making fits to the dark energy EOS, we also allow for variations in the curvature, both with a prior on \\(\\Omega_{k}\\) based on _WMAP_ and Hubble constant measurements, and with no prior.
In Figure 2 we highlight our results for \\(w(z)\\), in the redshift bins \\(z<0.2\\), \\(0.2<z<0.5\\) and \\(0.5<z<1.8\\), with \\(w(z>1.8)=-1\\). These binned estimates are decorrelated with the weighting described in Section 2; the third measure, for example, was typically 54% determined by the third bin, with a 6% contribution from the first bin.

Figure 2: Uncorrelated estimates of the dark energy equation of state using a combined sample of supernova data and constraints from _WMAP_ and BAO measurements. In the top panel a flat universe prior is assumed (\\(\\Omega_{k}=0\\)), and the filled and open symbols show the constraint with the total sample of 192 SNe from Ref. [5] and a subset of 104 SNe corresponding to a previous analysis [3], respectively. In the bottom panel we show \\(w(z)\\) estimates without the flat universe prior; open and filled symbols show constraints with a broad prior for \\(\\Omega_{k}\\) and \\(\\Omega_{k}=-0.014\\pm 0.017\\), respectively.
These results do not presume a particular evolutionary history, as opposed to model fitting to a specific form, e.g. \\(w(z)=w_{0}+w_{1}z\\) [16] or \\(w(z)=w_{0}+w_{a}(1-a)\\). Our procedure, which fits binned values of \\(w(z_{i})\\) and then decorrelates them, has the advantage that one can extract redshift evolution independent of a model. This is particularly advantageous if the assumed functional form turns out not to be an accurate representation of the true underlying EOS.
As shown in Figure 2, the dark energy EOS as a function of redshift is fully consistent with \\(w(z)=-1\\) at the \\(1\\sigma\\) confidence level, for both the full sample of 192 SNe, and the subset of 104 SNe corresponding to the earlier analysis [3]. As shown in the lower panel of Figure 2, this conclusion is unchanged when we drop the assumption related to a flat cosmological model, regardless of the assumed prior on \\(\\Omega_{k}\\). Our external constraints related to the distance to the last scattering surface from the CMB, and the BAO distance scale to \\(z=0.35\\), provide a strong constraint on \\(\\Omega_{k}\\). We explicitly include an additional flatness constraint to allow comparison with earlier work [5, 3]. To highlight the extent to which EOS estimates are consistent with \\(w(z)=-1\\), in Figure 3 we plot the probability distribution functions \\(P(w_{i})\\) both for flat cosmologies and for the case with curvature allowed to vary. Except for the third bin, which still remains mostly undefined with a very broad probability function, the first two bins are peaked and allow constraints at a high confidence level over \\(-2<w<0\\). These show a clear consistency with \\(w=-1\\), and hence are completely consistent with a cosmological constant. While the allowed range for \\(w(z_{3})\\) is broad, at the 68% confidence level we find that \\(w(z_{3})<-0.2\\), suggesting that we can rule out a large EOS even at \\(z>0.5\\).

Figure 4: Probability distribution function of cosmic curvature by combining supernova data with the additional constraints outlined in the paper. We assume a broad, flat prior for \\(\\Omega_{k}\\), but the combination of SN data and CMB and BAO distance scales results in a tight constraint on curvature. We find \\(\\Omega_{k}=0.004\\pm 0.019\\) (192 SNe) and \\(\\Omega_{k}=0.002\\pm 0.022\\) (104 SNe), which is fully consistent with estimates made by the _WMAP_ analysis by combining _WMAP_ data with supernova data from the SNLS survey.
The uncorrelated binned estimates of \\(w(z)\\) derived in Huterer & Cooray [20] using an earlier \"Gold\" sample from Riess et al. [29] showed an equation of state that varied significantly between the lowest-redshift bin and the second bin. This difference decreased in the most recent \"Gold\" sample as analyzed by Riess et al. [3]. In the current work, utilizing an extended sample of supernovae, we no longer find evidence for a variation in the dark energy EOS between the first and the second bin. To show this explicitly, we also plot the probability distribution function of the difference between binned estimates of \\(w_{i}(z)\\) in Figure 3. Between the first and the second bin we find the difference to be \\(w_{2}-w_{1}=0.06\\pm 0.26\\) at the 68% confidence level for the 192 SNe sample with a flat model. Current data is thus completely consistent with a cosmological constant. Previous estimates of a large and a statistically significant value for \\(w_{2}-w_{1}\\), with \\(w_{1}<-1\\) and \\(w_{2}>-0.8\\), led to suggestions in the literature for a physical mechanism called dark energy \"metamorphosis\" [30]. While earlier conclusions were limited to a small set of supernovae data, with the larger sample it is now clear that there is little evidence for a sudden transition in the EOS around \\(z\\sim 0.2\\). Future data could tighten these constraints, either further narrowing down to a cosmological constant, or providing evidence for small variations in the EOS with redshift.
Figure 5: _Left_: A comparison of \\(w(z)\\) estimates and dynamical dark energy models, based on the Monte-Carlo modeling approach of Ref. [28], that are inconsistent with current estimates. We show both cases of “thawing” and “freezing” models inconsistent with current data (see text for details). _Right:_ The shapes of potentials generally corresponding to \\(w(z)\\) models shown in the left panel which are inconsistent with our estimates of the binned EOS from a combined sample of supernovae and cosmological distance scale measurements. We show \\(V(\\phi)\\) as a function of the scalar field \\(\\phi\\) when potentials are normalized to the value at \\(z=3\\) (\\(V_{0}\\equiv V(z=3)\\)), which we take to be \\(\\phi=0.2\\).
We show that our conclusions are generally unchanged by assumptions related to the curvature. This is because we constrain the EOS using a combination of supernova data and existing measurements of the cosmic distance scale out to \\(z=1089\\) and \\(z=0.35\\) with CMB and BAO, respectively. The combination of supernova data and these measurements, combined with our prior on the Hubble constant, leads to a strong independent constraint on the curvature parameter \\(\\Omega_{k}\\). We show the probability distribution \\(P(\\Omega_{k})\\) in Figure 4 for both the full sample and the subset of 104 supernovae. In both cases \\(\\Omega_{k}\\) is consistent with zero; with the full supernova sample, we find \\(\\Omega_{k}=0.004\\pm 0.019\\). This is about \\(1\\sigma\\) away from the combined _WMAP_ and Hubble constant estimate of \\(\\Omega_{k}=-0.014\\pm 0.017\\) or the combined _WMAP_+SNLS estimate of \\(\\Omega_{k}=-0.011\\pm 0.012\\) [7]. On the other hand, the combined _WMAP_+SDSS estimate from the same analysis is \\(\\Omega_{k}=-0.0053^{+0.0068}_{-0.0060}\\), which is a shift in the direction of the \\(\\Omega_{k}\\) value we find when the combined SNe dataset is analyzed with the _WMAP_ and BAO priors.

Figure 6: A comparison of \\(w(z)\\) values from dynamical dark energy models, based on the Monte-Carlo modeling approach of Ref. [28]. Each dot represents a potential dynamical dark energy model. The data points are the allowed range from our analysis.
To demonstrate how our estimates of \\(w(z_{i})\\) can be used to understand the redshift evolution of the dark energy component, in Figure 5 we show a sample of predictions related to dynamical dark energy models that are ruled out with present estimates of \\(w_{i}(z)\\) at the 68% confidence level. These cases generally involve a dark energy EOS that starts as \\(w>-0.2\\) at high redshifts, or an EOS that starts with a value around -1 at high redshift but evolves to a value greater than -0.85 when \\(z<0.2\\). The former models belong to the general category of "freezing" models described in Caldwell & Linder [21], while the latter models are categorized as "thawing" models. We generate these models following the numerical technique of Huterer & Peiris [28], writing the Klein-Gordon equation for the evolution related to a scalar field as \\(\\ddot{\\phi}+3H\\dot{\\phi}+dV/d\\phi=0\\), and then numerically generating a large number of models following the Monte-Carlo flow approach as used for numerical models of inflation [31, 32, 33, 34, 35]. We do not reproduce details as the process is similar to the modeling of Refs. [28, 36].
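To give a flavor of this procedure, the sketch below (ours; the quadratic potential, its parameters, and the initial conditions are arbitrary illustrative choices, not the flow parameters of Refs. [28, 36]) integrates the Klein-Gordon equation in an expanding background and records the resulting \\(w(z)\\):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Units: 8*pi*G = 1 and H0 = 1, so the critical density today is 3.
OM = 0.27                        # present matter density parameter
V0, M2 = 2.19, 0.5               # V(phi) = V0 + M2*phi^2/2 (arbitrary example)

def V(phi):  return V0 + 0.5*M2*phi**2
def dV(phi): return M2*phi

def rhs(t, y):
    """Klein-Gordon equation plus da/dt = a*H in an FRW background."""
    phi, dphi, a = y
    rho_phi = 0.5*dphi**2 + V(phi)
    H = np.sqrt((3.0*OM/a**3 + rho_phi)/3.0)
    return [dphi, -3.0*H*dphi - dV(phi), a*H]

# Start at z = 3 (a = 0.25) with the field frozen by Hubble friction.
sol = solve_ivp(rhs, [0.0, 1.5], [0.2, 0.0, 0.25], rtol=1e-8, max_step=0.01)
phi, dphi, a = sol.y
w = (0.5*dphi**2 - V(phi))/(0.5*dphi**2 + V(phi))   # EOS; redshift z = 1/a - 1
```

Repeating the integration for randomly drawn potentials and initial conditions populates the \\((w_{1},w_{2},w_{3})\\) space of Figure 6; models whose trajectories exit the measured ranges are the ones ruled out.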
As a further application of our estimates of the EOS, and of the difference between two binned estimates of the EOS, in Figure 6 we compare our measured values with the values expected for a large number of models (see the sketch above). Again, we rule out certain extreme models where \\(w(z_{i})\\) varies rapidly between adjacent bins. In dynamical dark energy models, those that lead to a large variation in \\(w\\) between two adjacent bins also tend to have a value significantly different from -1 in one of the bins. Thus most models are currently ruled out by the value of \\(w\\) in a single bin, rather than through the difference \\(w_{2}-w_{1}\\) or \\(w_{3}-w_{2}\\), since the latter are still largely uncertain. While we can use the numerical modeling technique of Huterer & Peiris [28] to make qualitative statements about the EOS, and to rule out extreme possibilities for its dynamical evolution, given the stochastic nature of model generation we cannot use this sort of method to make detailed statements about, for example, the scalar field (quintessence) potential responsible for dark energy. Instead, it is necessary to directly reconstruct the scalar field potential from supernova distance data. While there are attempts to recover the potential by directly model fitting various parametric forms of the potential as a function of the scalar field, such as power-law or polynomial functions of the scalar field \\(\\phi\\), model independent binned estimates of the potential are preferable [37].
### A New Figure of Merit
As discussed in the previous sections, our binned estimates allow us to study the redshift evolution of the dark energy EOS without the need to assume an underlying model. This is to be contrasted with the usual approach, in which a parameterized form for \\(w(z)\\) is required to fit the data. With an increasing supernova sample size, and improvement in other cosmological observations, we may be able to recover three or more binned values at the 10% level or better. In the context of planning dark energy experiments, and assessing the constraining power of future data, it may be advantageous to consider binned estimates of the dark energy EOS, rather than the error associated with a specific and arbitrary two parameter model of the equation of state. The latter is the approach adopted to quote the \"figure of merit\" (FOM) of an experiment, based on the inverse of the area of the ellipse of the two parameters describing the EOS with redshift (introduced in [23]). In the case of binned estimates, once uncorrelated, we can quote an alternative FOM as \\(\\left[\\sum_{i}1/\\sigma^{2}(w_{z_{i}})\\right]^{1/2}\\), which takes into account the combined inverse variance of all independent estimates of the EOS. In an upcoming paper we will quantify the exact number of \\(w(z)\\) estimates that can be determined with future experiments involving supernovae and large-scale structure (weak lensing, baryon acoustic oscillations over a wide range in redshift), and we will compare this alternative FOM to the method of Ref. [23].
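Given the decorrelated uncertainties, the proposed figure of merit is a one-liner (our sketch; the \\(\\sigma\\) values are hypothetical placeholders, not measurements):

```python
import numpy as np

sigma_w = np.array([0.12, 0.32, 1.3])    # hypothetical 1-sigma errors on the q_i
fom = np.sqrt(np.sum(1.0/sigma_w**2))    # [sum_i 1/sigma^2(w_i)]^(1/2)
# Unlike an ellipse-area criterion, every independent bin contributes,
# with the best-measured bins dominating the sum.
```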
## 4 Summary
We use a sample of 192 SNe Ia (and a subset for comparison) to constrain the dark energy equation-of-state parameter and its variation as a function of redshift. We use a model independent approach, providing uncorrelated measurements across three redshift bins below \\(z=1.8\\), and find that \\(w(z)\\) is consistent with a cosmological constant (\\(w(z)=-1\\)) at the 68% confidence level. At the same confidence level we find that the EOS is more negative than \\(-0.2\\) over the redshift range of \\(0<z<1.8\\). Overall, there is no strong evidence against the assumption of a flat \\(\\Omega_{k}=0\\) universe, especially when recent supernova data are combined with cosmological distance scale measurements from _WMAP_ and BAO experiments. We argue against previous claims in the literature of evolving dark energy, such as dark energy "metamorphosis", where the EOS changes significantly from \\(w>-1\\) at \\(z>0.2\\) to \\(w<-1\\) at \\(z<0.2\\). Instead, we find consistency with a cosmological constant, encapsulated in the 68% level constraint: \\(w_{2}-w_{1}=0.06\\pm 0.26\\), where \\(w_{1}\\) is the value of the dark energy EOS in the \\(z<0.2\\) bin, and \\(w_{2}\\) is the value in the bin \\(0.2<z<0.5\\). A transition in the EOS can also be ruled out between our second and third binned estimates of the EOS, although we still find large uncertainties in our determination of the EOS at \\(z>0.5\\), and we are insensitive to rapid variations at \\(z>1\\). We compare our EOS estimates to Monte Carlo generated dynamical dark energy models associated with a single scalar field potential. Our EOS estimates generally allow us to rule out extreme "thawing" and "freezing" models, though a large number of potential shapes remain in agreement with current data.
We also suggest an alternative, parameter-independent figure-of-merit with which to evaluate the potential of future missions to constrain the properties of the dark energy.
## 5 Acknowledgments
AC and DEH are partially supported by the DOE at LANL through IGPP grant Astro-1603-07. AC acknowledges funding for undergraduate research, at UC Irvine as part of NSF Career AST-0645427, which was used to support SS, and thanks UCI Undergraduate Research Opportunities Program (UROP) for additional support in the form of a UCI Chancellor's Award for Fostering of Distinguished Undergraduate Research. DEH acknowledges a Richard P. Feynman Fellowship from LANL.
## References
* [1] A. G. Riess _et al._, B. J. Barris _et al._, arXiv:astro-ph/0310843; R. A. Knop _et al._, arXiv:astro-ph/0309368; J. L. Tonry _et al._, Astrophys. J. **594**, 1 (2003).
* [2] S. Perlmutter _et al._, Astrophys. J. **517**, 565 (1999); A. Riess _et al._, Astron. J. **116**, 1009 (1998).
* [3] A. G. Riess _et al._, arXiv:astro-ph/0611572.
* [4] W. M. Wood-Vasey _et al._, arXiv:astro-ph/0701041.
* [5] T. M. Davis _et al._, arXiv:astro-ph/0701510.
* [6] D. N. Spergel _et al._ [WMAP Collaboration], Astrophys. J. Suppl. **148**, 175 (2003).
* [7] D. N. Spergel _et al._ [WMAP Collaboration], arXiv:astro-ph/0603449.
* [8] T. Padmanabhan, Phys. Rept. **380**, 235 (2003) [arXiv:hep-th/0212290].
* [9] P. Astier _et al._, Astron. Astrophys. **447**, 31 (2006) [arXiv:astro-ph/0510447].
* [10] R. R. Caldwell, R. Dave and P. J. Steinhardt, Phys. Rev. Lett. **80**, 1582 (1998) [arXiv:astro-ph/9708069].
* [11] P. M. Garnavich _et al._ [Supernova Search Team Collaboration], Astrophys. J. **509**, 74 (1998) [arXiv:astro-ph/9806396].
* [12] M. Visser, Class. Quant. Grav. **21**, 2603 (2004) [arXiv:gr-qc/0309109].
* [13] D. Huterer and M. S. Turner, Phys. Rev. D **60**, 081301 (1999).
* [14] M. Sahlen, A. R. Liddle and D. Parkinson, Phys. Rev. D **72**, 083511 (2005) [arXiv:astro-ph/0506696].
* [15] C. Li, D. E. Holz and A. Cooray, Phys. Rev. D **75**, 103503 (2007) [arXiv:astro-ph/0611093].
* [16] A. R. Cooray and D. Huterer, Astrophys. J. **513**, L95 (1999) [arXiv:astro-ph/9901097].
* [17] E. V. Linder, Phys. Rev. Lett. **90**, 091301 (2003).
* [18] B. F. Gerke and G. Efstathiou, Mon. Not. Roy. Astron. Soc. **335**, 33 (2002) [arXiv:astro-ph/0201336].
* [19] D. Huterer and G. Starkman, Phys. Rev. Lett. **90**, 031301 (2003).
* [20] D. Huterer and A. Cooray, Phys. Rev. D **71**, 023506 (2005) [arXiv:astro-ph/0404062].
* [21] R. R. Caldwell and E. V. Linder, Phys. Rev. Lett. **95**, 141301 (2005) [arXiv:astro-ph/0505494].
* [22] Y. Wang and M. Tegmark, Phys. Rev. D **71**, 103513 (2005) [arXiv:astro-ph/0501351].
* [23] A. Albrecht _et al._, arXiv:astro-ph/0609591.
* [24] M. Tegmark _et al._ [SDSS Collaboration], Phys. Rev. D **69**, 103501 (2004) [arXiv:astro-ph/0310723].
* [25] A. G. Riess _et al._, Astrophys. J. **627**, 579 (2005) [arXiv:astro-ph/0503159].
* [26] D. J. Eisenstein _et al._ [SDSS Collaboration], Astrophys. J. **633**, 560 (2005) [arXiv:astro-ph/0501171].
* [27] Y. Wang and P. Mukherjee, arXiv:astro-ph/0703780.
* [28] D. Huterer and H. V. Peiris, arXiv:astro-ph/0610427.
* [29] A. G. Riess _et al._ [Supernova Search Team Collaboration], Astrophys. J. **607**, 665 (2004) [arXiv:astro-ph/0402512].
* [30] U. Alam, V. Sahni, T. D. Saini and A. A. Starobinsky, Mon. Not. Roy. Astron. Soc. **354**, 275 (2004) [arXiv:astro-ph/0311364].
* [31] W. H. Kinney, Phys. Rev. D **66**, 083508 (2002) [arXiv:astro-ph/0206032].
* [32] A. R. Liddle, Phys. Rev. D **68**, 103504 (2003) [arXiv:astro-ph/0307286].
* [33] H. Peiris and R. Easther, JCAP **0607**, 002 (2006) [arXiv:astro-ph/0603587].
* [34] S. Chongchitnan and G. Efstathiou, Phys. Rev. D **72**, 083520 (2005) [arXiv:astro-ph/0508355].
* [35] B. C. Friedman, A. Cooray and A. Melchiorri, Phys. Rev. D **74**, 123509 (2006) [arXiv:astro-ph/0610220].
* [36] S. Chongchitnan and G. Efstathiou, arXiv:0705.1955 [astro-ph].
* [37] C. Li, D. E. Holz and A. Cooray, Phys. Rev. D **75**, 103503 (2007) [arXiv:astro-ph/0611093].
* [38] M. Tegmark _et al._ [SDSS Collaboration], Astrophys. J. **606**, 702 (2004) [arXiv:astro-ph/0310725]. | We apply a parameterization-independent approach to fitting the dark energy equation-of-state (EOS). Utilizing the latest type Ia supernova data, combined with results from the cosmic microwave background and baryon acoustic oscillations, we find that the dark energy is consistent with a cosmological constant. We establish independent estimates of the evolution of the dark energy EOS by diagonalizing the covariance matrix. We find three independent constraints, which taken together imply that the equation of state is more negative than -0.2 at the 68% confidence level in the redshift range \\(0<z<1.8\\), independent of the flat universe assumption. Our estimates argue against previous claims of dark energy \"metamorphosis,\" where the EOS was found to be strongly varying at low redshifts. Our results are inconsistent with extreme models of dynamical dark energy, both in the form of \"freezing\" models where the dark energy EOS begins with a value greater than -0.2 at \\(z>1.2\\) and rolls to a value of -1 today, and \"thawing\" models where the EOS is near -1 at high redshifts, but rapidly evolves to values greater than -0.85 at \\(z<0.2\\). Finally, we propose a parameterization-independent figure-of-merit, to help assess the ability of future surveys to constrain dark energy. While previous figures-of-merit presume specific dark energy parameterizations, we suggest a binning approach to evaluate dark energy constraints with a minimum number of assumptions. | Write a summary of the passage below. |
**THE THERMAL REGULATION OF GRAVITATIONAL INSTABILITIES IN PROTOPLANETARY DISKS. IV. SIMULATIONS WITH ENVELOPE IRRADIATION**
Kai Cai
Department of Physics and Astronomy, McMaster University, Hamilton, ON, Canada
L8S 4M1
Richard H. Durisen, Aaron C. Boley
Department of Astronomy, Indiana University, 727 E. 3\\({}^{\\rm rd}\\) St., Bloomington, IN 47405
Megan K. Pickett
Department of Physics, Lawrence University, Box 599, Appleton, WI 54911
and
Annie C. Mejia
Museum of Flight, Exhibits Department, 9404 E. Marginal Way South, Seattle, WA 98108
Subject headings: accretion, accretion disks - hydrodynamics - instabilities - planetary systems: formation - planetary systems: protoplanetary disks
## 1 Motivation
By the early 1990s, it was clear that infalling dusty envelopes must be invoked to explain the large infrared (IR) excesses present in the observed spectral energy distributions (SEDs) of Class I YSOs and most of the \"flat-spectrum\" T Tauri stars (Kenyon et al. 1993a, Kenyon et al. 1993b, Calvet et al. 1994; however, see Chiang et al. 2001 for a different view). D'Alessio et al. (1997) explored the effects of irradiation from infalling optically thick protostellar envelopes on a standard viscous accretion disk and found that, although the outer disk will be heated significantly, it may still be unstable against gravitational instabilities (GIs), at least in the case of HL Tau. In contrast, the outer disk is generally stabilized against GIs by the more intense irradiation from the central star (D'Alessio et al. 1998).
Over the past few years, millimeter surveys of embedded sources (e.g. Looney et al. 2000, 2003, Motte & Andre 2001) have shown evidence for the presence of huge envelopes and possibly massive disks (Eisner & Carpenter 2006). Although the disk masses are still somewhat controversial (Looney et al. 2003), the total circumstellar masses are undoubtedly larger than those of YSOs at later stages (see also Eisner & Carpenter 2003). The larger circumstellar disk mass and the smaller disk size at early times (e.g. Durisen et al. 1989, Stahler et al. 1994, Pickett et al. 1997) work toward enhancing the surface density of the disk, thus lowering the Toomre \(Q\)-value (Toomre 1964) and making the disk more susceptible to the development of GIs. As an example, Osorio et al. (2003) studied the L1551 IRS5 binary system in unprecedented detail and estimated disk parameters based on fitting the observed fluxes with irradiated steady-state \(\alpha\)-disk models. They found that the outer parts of the disks have \(Q<1\).
All this work suggests that GIs are more likely to occur in protoplanetary disks during the embedded phase. Fragmentation of disks due to GIs has been proposed as a formation mechanism for gas giant planets (Boss 1997, 1998, 2000, Mayer et al. 2002, 2004). The strength of GIs depends critically on the thermal physics of the disk (see Durisen et al. 2007 for a review). In particular, efficient cooling is required for the disk to fragment into clumps, and so it is important to determine how strong GIs can be under realistic thermal conditions, including envelope irradiation.
Previous disk GI simulations by our group focused on naked disks, which would correspond to Class II or later sources (Pickett et al. 2003, hereafter Paper I; Mejia et al. 2005, hereafter Paper II; Boley et al. 2006, hereafter Paper III). Boss (2001, 2002, 2003, 2005, 2006), in his disk simulations with radiative diffusion, mimics the effect of envelope irradiation by embedding the disk in a constant temperature thermal bath. Dense protoplanetary clumps form in his simulations, and he attributes this to rapid cooling by convection (Boss 2004). Employing a similar treatment to that of Boss in the optically thick region but using different atmospheric boundary conditions (BCs), Mejia (2004) and Paper III found that, without envelope irradiation, no persistent clumps form and convection does not cause rapid cooling in a GI-active disk. This drastic difference in outcome suggests that results are sensitive to radiative BCs.
Therefore, from both observational and theoretical perspectives, it is interesting to investigate the strength of GI activity when the disk experiences IR irradiation from a surrounding envelope. This is directly applicable to GIs in embedded disks and permits more direct comparisons with Boss simulations. Preliminary work by Mejia (2004) on a _stellar_ irradiated disk indicates that GIs are weakened. However, stellar irradiation is difficult to model on coarse 3D grids due to the high optical depth of disks to starlight (Mejia 2004). For this reason, we focus our efforts here on treating the simpler case of envelope IR irradiation.
This paper is the fourth in our series on the regulation of GIs in protoplanetary disks by thermal processes. Papers I and II featured disk simulations with idealized cooling prescriptions. Paper III presented our first simulation using the radiative cooling scheme developed in Mejia (2004). In this paper, we present results of disk simulations using a version of Mejia (2004)'s routines modified to include envelope irradiation. This new method, described in §2, has already been used to study the effects on GIs of varying metallicities and dust grain sizes in Cai et al. (2006). The set of simulations and their results are presented in §3. Section 4 discusses the significance of the results and compares them to work by other groups, and §5 is a brief summary of our conclusions.
## 2 Methods
### 2.1 3D Hydrodynamics
We conduct protoplanetary disk simulations using the Indiana University Hydrodynamics Group code (Pickett et al. 1998, 2000, Mejia 2004, Papers I & II), which solves the equations of hydrodynamics in conservative form on a cylindrical grid (\(r\), \(\varphi\), \(z\)) to second order in space and time using an ideal gas equation of state with the ratio of specific heats \(\gamma\) = 5/3. Self-gravity and shock mediation by artificial bulk viscosity are included. Reflection symmetry is assumed about the equatorial plane, and free outflow boundaries are used along the top, outer, and inner edges of the grid.
### 2.2 Radiative Scheme
Let \\(\\tau\\) be the Rosseland mean optical depth measured vertically down from above. As in Mejia (2004) and Paper III, the cells with \\(\\tau\\geq\\) 2/3 are considered part of the disk interior, and the energy flow in all three directions is calculated using flux-limited diffusion (Bodenheimer et al. 1990). The cells with \\(\\tau\\) < 2/3 (the atmosphere) cool according to a free-streaming approximation, so they cool as much as their emissivity allows. However, similar to the original Mejia (2004) scheme, the atmospheric cells directly above the disk interior are also heated by the flux from underneath, which is determined by the boundary condition described below. This emergent flux is attenuated as it passes upward through the atmosphere according to exp(-\\(\\tau_{up}\\)), where \\(\\tau_{up}\\) is the Planck mean optical depth, measured between the cell of interest and the interior/atmosphere boundary, evaluated at the local temperature. The Planck mean is used in this regime because the medium is optically thin.
For the simulations presented in this paper, we include external infrared irradiation from an envelope heated by the star. This IR irradiation is approximately treated as a blackbody flux \(\sigma T_{irr}^{4}\) with a characteristic temperature \(T_{irr}\) shining vertically downward onto the disk. For one of the calculations, as noted below in §3.2, we include a similar envelope heating flux entering radially from the outer boundary of the disk, because a real envelope is usually much larger than the disk. Also, for this reason, an envelope irradiation with constant temperature (i.e., neglecting the drop off of envelope temperature farther out from the central star) may not be a bad approximation (e.g., the disks in L1551 IRS 5, D'Alessio 2005, private communication). In each column of the atmosphere, the downward envelope flux is reduced by a factor 1-exp(-\(\varDelta\tau_{P}\)) as it crosses each cell, where \(\tau_{P}\) is the Planck mean optical depth measured vertically downward and evaluated at \(T_{irr}\) and where \(\varDelta\tau_{P}\) is the change in \(\tau_{P}\) across a cell. Unlike Boss (2003), we do not explicitly model an infalling envelope here, nor does the disk atmosphere include the infall effects.
The total cooling rate in atmosphere cells can then be expressed as
\\[\\alpha=\\frac{\\alpha_{max}}{\\alpha_{max}} \\tag{1}\\]
where heating by the upward boundary flux leaving the hot disk interior and heating by the downward envelope irradiation flux are treated as additional terms to the free-streaming approximation. Note that all terms use the Planck mean opacity \(\kappa_{Planck}\), but it is evaluated at different temperatures in different terms. The \(F_{bnd}\) term represents heating by the flux leaving the interior and is discussed in more detail below. A three-cell linear smoothing of \(\Lambda\) in the radial direction is also employed to make the temperature structure smoother.
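A minimal sketch of the attenuation bookkeeping that enters the irradiation term of equation (1) is given below; the function name, array layout, and cell ordering are our own illustrative choices, not code taken from the simulation:

```python
import numpy as np

SIGMA_SB = 5.6704e-5  # Stefan-Boltzmann constant (erg cm^-2 s^-1 K^-4)

def attenuate_envelope_flux(dtau_P, T_irr):
    """Propagate the downward envelope flux sigma*T_irr^4 through one
    vertical column of atmosphere cells (ordered top to bottom).  Each
    cell absorbs a fraction 1 - exp(-dtau_P) of the flux entering it,
    where dtau_P is the Planck mean optical depth across the cell,
    evaluated at T_irr.  Returns the energy absorbed per unit area in
    each cell and the residual flux emerging below the column."""
    flux = SIGMA_SB * T_irr**4
    absorbed = np.empty_like(dtau_P)
    for i, dt in enumerate(dtau_P):
        absorbed[i] = flux * (1.0 - np.exp(-dt))
        flux -= absorbed[i]
    return absorbed, flux

# Example column: five cells, each with Planck mean depth 0.1, T_irr = 25 K
absorbed, residual = attenuate_envelope_flux(np.full(5, 0.1), 25.0)
```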
As in Mejia (2004) and Paper III, both the disk interior and the atmosphere are treated separately and then coupled by an Eddington-like atmosphere boundary condition, which defines the boundary flux for the diffusion approximation. This boundary flux represents the balance between the energy leaving the disk interior through the atmosphere and the energy gained by the atmosphere from external heating sources that must be carried down to the interior. Because there is an additional source of heating, i.e., envelope irradiation, the boundary condition defined by equation (2-26) in Mejia (2004) must be modified to incorporate the contribution of the (residual) envelope irradiation flux that survives atmospheric absorption:
\\[\\tau_{\\it{helv}}-\\frac{4\\,\\sigma(T_{\\it{helv}}^{\\it{helv}}-T_{\\it{helv}}^{\\it{helv} }-T_{\\it{helv}}^{\\it{helv
Eddington-like fit in that column (eq. [2]), and transition cells are otherwise neither heated nor cooled.
Boley et al. (2007c; see also Paper III) describe a test suite for radiative hydrodynamics algorithms that are used to evolve protoplanetary disks. One of these tests assumes a functional form for an _ad hoc_ energy dissipation in a vertically stratified gas layer under constant gravity and reflection symmetry about a \"midplane\". For this test, code results can be compared to analytic solutions. Figure 1 shows the flux and temperature profiles that Mejia (2004)'s radiative scheme achieves in a case without irradiation where numerical oscillations occur because \\(\\tau\\) = 2/3 is too close to a cell boundary. Also shown in Figure 1 is the result for the modified scheme with a one-cell transition layer. The new scheme not only yields a more realistic vertical temperature structure, but as discussed in more detail by Boley et al. (2007c), it also suppresses the numerical oscillations. The sudden drop in the temperature in both schemes at the photosphere (cell 11 or 12) is a result of the lack of complete cell-to-cell coupling in the optically thin region. For both cases, accurate fluxes and temperature structures are reproduced throughout the disk interior.
Following Mejia (2004) and Paper III, the volumetric radiative cooling rate \(\varLambda\), the volumetric shock heating rate due to artificial viscosity \(\varGamma\), and the divergence of the radiative flux \(\boldsymbol{\nabla\cdot F}\) are limited such that the cooling and heating times are not allowed to be shorter than about 3% of the initial orbital time of the outer disk. As discussed in Paper III, these limiters do not prevent fragmentation, and the cells that are affected tend to be located in the diffuse background, not within areas of interest.
### 2.3 Boss-like Boundary Condition
As mentioned in §1, we suspect that the apparent disagreement in cooling rates between our results and those of Boss is primarily due to differences in the treatment of radiative boundary conditions (BCs), although differences in the treatment of the equation of state may also be important (Boley et al. 2007a). In order to investigate the effects of different BCs on the same radiative cooling algorithm, we implement a Boss-like BC for our flux-limited diffusion scheme in some comparison calculations discussed in §3.4. For regions where \(\tau\)\(\geq\) 10, we compute radiative diffusion in all three directions with the flux
Figure 1: Snapshot results of relaxation tests described in Boley et al. (2007c). The left panel indicates the analytic temperature profile (dashed, red [gray] curve), the actual profile for the Mejía (2004) scheme (dashed, black curve), and the actual profile for the one-cell transition layer modification (solid, black curve). The first drop indicates where the disk interior ends and the atmosphere begins, and is due to the lack of complete cell-to-cell coupling that the solution to the equation of radiative transfer requires (see discussion in Paper III). The second drop is where the density falls to background. The right panel indicates the flux through the interior only, because the flux through the atmosphere is not explicitly tracked. The addition of the transition layer gives more accurate temperature and flux distributions and also avoids the pulsations that occur in the Mejía (2004) scheme when the photosphere is near a vertical grid boundary.
limiter turned off, as done in Boss (2001). In the outer disk (the optically thin region) where \\(\\tau<10\\), the temperature is set to the background temperature \\(T_{B}\\). The two regions are not otherwise coupled radiatively. Our initial model, grid geometry, and internal energy for molecular hydrogen are still different from those of Boss.
## 3 The Simulations
### 3.1 Initial Model for the Irradiation Study
For purposes of comparison with Paper III, the same initial axisymmetric equilibrium disk is used for the calculations in which \(T_{irr}\) is varied. The disk is nearly Keplerian, has a mass 0.07 \(M_{\odot}\) with a surface density \(\Sigma(r)\propto r^{-0.5}\) from 2.3 to 40 AU, and orbits a star of 0.5 \(M_{\odot}\). The initial minimum value of the Toomre stability parameter \(Q_{min}\) is about 1.5, so the disk is marginally unstable to GIs. More details can be found in Paper III and in Mejia (2004).
### 3.2 The Simulations With Varied Envelope Irradiation
We present three new simulations, each with a different \(T_{irr}\). The simulation with \(T_{\rm irr}=15\)K, hereafter Irr 15K, is the same as the solar metallicity simulation (Z = Z\({}_{\odot}\)) presented by Cai et al. (2006). The second simulation (Irr 25K) was run with \(T_{\rm irr}=25\)K. The third simulation with \(T_{\rm irr}=50\)K (Irr 50K), which demonstrates the damping of GIs by strong envelope irradiation, is started at time \(t=11.5\) ORP of the Irr 25K run. Here the time unit of 1 ORP is defined as the initial outer rotation period at 33 AU, about 253 yr. For comparison, we include the non-irradiated case (No-Irr) from Paper III. Except for No-Irr, the initial grid has (256, 128, 64) cells above the midplane in cylindrical coordinates \((r,\varphi,z)\). The No-Irr run requires only 32 vertical cells because the atmosphere of the non-irradiated disk is less expansive. The number of radial cells is increased from 256 to 512 when the disk expands radially due to the onset of GIs. In the Irr 15K case only, we include an extra envelope irradiation shining horizontally inward from the grid outer boundary (see §3.2.2). Each simulation is evolved with solar metallicity, with a maximum grain size for the D'Alessio et al. (2001) opacities of \(a_{max}=1\)\(\mu\)m, and with a mean molecular weight of about 2.7 (see Paper III).
Envelope temperatures of 15K and 25K are at the low end of likely envelope temperatures (Chick & Cassen, 1997). They are probably consistent with the low mass of our star and are lower than the 50K bath used in Boss's (2001, 2002) simulations. Boss's disks and our disks have similar masses, but his have an initial outer radius of 20 AU, versus 40 AU for our disks. The lower surface density for our disks means a lower \(T_{irr}\) is required to achieve a marginally unstable \(Q\)-value.
One way to interpret \(T_{irr}\) is to ask what would be the radius of a spherical envelope shell radiating down on the disk like a \(T_{irr}\) blackbody if the shell is maintained through the absorption of \(L_{star}\). We then have \(L_{star}=4\pi R_{shell}^{2}\sigma T_{irr}^{4}\) and \(R_{shell}=R_{star}(T_{star}/T_{irr})^{2}\). The parameters of the protostar are chosen to be the same as in D'Alessio et al. (2001) and Mejia (2004), where \(T_{eff}=4000\)K and \(R_{star}=2\,{\rm R}_{\odot}\), values typical of T Tauri stars (see, e.g., Chiang & Goldreich, 1997, 2001). For \(T_{irr}=15\)K, \(R_{shell}=661\) AU, which is a reasonable size for a protostellar envelope (e.g., Osorio et al. 2003). For \(T_{irr}=25\)K and 50K, \(R_{shell}=238\) AU and 59.5 AU, respectively.
It is also interesting to compare the energy input onto the disk with the stellar luminosity. Assuming the disk is flat, the ratio between the total luminosity of envelope irradiation that shines down onto the disk \\(L_{env}\\), and the stellar luminosity \\(L_{star}\\) is then given by
\[\frac{L_{env}}{L_{star}}=\frac{1}{2}\left(\frac{T_{irr}}{T_{star}}\right)^{4}\left(\frac{R_{d}}{R_{star}}\right)^{2}=2.25\times 10^{-11}\,T_{irr}({\rm K})^{4}\,R_{d}({\rm AU})^{2} \tag{4}\]
With \\(T_{irr}=15\\)K and a disk radius \\(R_{d}=50\\) AU (see SS3.2.2 for an explanation), \\(L_{env}\\) amounts to about 2.8\\(\\times 10^{-3}\\) of the stellar luminosity. For \\(T_{irr}=25\\)K and the same disk radius, \\(L_{env}/L_{star}=2.2\\times 10^{-2}\\). When \\(T_{irr}\\) is raised to 50K, the envelope irradiation becomes approximately 1/3 of the stellar luminosity. Please note, however, these are only simple analytic calculations and are not what we put into the disk. A tally of the actual energy input in the disk is presented in SS3.2.2.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Case** & **No-Irr** & **Irr 15K** & **Irr 25K** & **Irr 50K** \\ \hline
\(T_{irr}\) & 0 & 15K & 25K & 50K \\ \hline
Duration\({}^{\rm a}\) (ORPs) & 16.0 & 15.7 & 15.4 & 4.4\({}^{\rm b}\) \\ \hline
\(t_{1}\) (ORPs) & 3.0 & 5.0 & 7.0 & NA \\ \hline
\(t_{2}\) (ORPs) & 10.0 & 9.9 & 9.9 & NA \\ \hline
\(\langle A\rangle\) & 1.51 & 1.16 & 1.01 & 0.40 \\ \hline
\(\langle Q_{late}\rangle\) & 1.51 & 1.65 & 1.96 & 2. \\ \hline
\end{tabular}
\end{table}
Table 1: Summary of simulation results.
#### 3.2.1 Overall Evolution
Table 1 summarizes some simulation results. Here, \"Duration\" refers to the simulation length measured in ORPs. Other entries are defined as they are introduced in the text.
As shown in Figure 2 for Irr 15K, these disks go through the same evolutionary phases as described in Paper II and Paper III. The initial _axisymmetric cooling_ phase ends
Figure 2: Midplane density maps of Irr 15K showing the evolution. Each square enclosing the disk is 121.4 AU on a side. Densities are displayed on a logarithmic color scale from dark blue to red, as densities range from about 3.0\\(\\times\\)10\\({}^{\\text{-15}}\\) to 4.8\\(\\times\\) 10\\({}^{\\text{-11}}\\) g cm\\({}^{\\text{-3}}\\), respectively. The scale saturates to white at even higher densities. The burst phase occurs at about 6 ORP. By about 10 ORP, the disk has transitioned into its asymptotic behavior.
with a _burst_ phase of rapid growth in several discrete global nonaxisymmetric spiral modes that rapidly redistribute mass. The first noticeable effect of irradiation is a delay in the start of the burst phase, with later onset for higher \\(T_{irr}\\), as indicated by the \\(t_{1}\\) entries in Table 1. Heating by irradiation slows the approach to a strongly unstable \\(Q\\)-value. As in Paper III and Cai et al. (2006), these disks do not form persistent protoplanetary clumps, in contrast to Boss (2001, 2006). Instead, after the burst, the disks become more axisymmetric again during a _transition_ phase and then settle into a quasi-steady _asymptotic_ phase of nonaxisymmetric GI activity, where heating and cooling are in rough balance and where average quantities change slowly (see also Lodato & Rice 2005). The approximate start of the asymptotic phase is indicated by \\(t_{2}\\) in Table 1 and does not appear to be delayed as \\(T_{irr}\\) increases. In this paper, we focus on the disk behavior during the asymptotic phase, because the bursts may be an artifact of our initial conditions (see Papers II and III).
Figure 3 compares midplane densities at \\(t=14\\) ORP for all four cases. Not surprisingly, the densest spirals appear in the No-Irr case. For Irr 50K, even though the disk has not yet had a chance to settle into an asymptotic state at the time shown, the spiral structure already appears much weaker than in the Irr 25K disk. Following Cai et al. (2006), to quantify differences in GI strength, we compute Fourier amplitudes \\(A_{m}\\) for the nonaxisymmetric structure over the whole disk during the asymptotic phase as follows (see also Imamura et al. 2000):
\\[\\tiny\\begin{array}{c}\\tiny\\begin{array}{c}\\tiny\\begin{array}{c}\\tiny \\begin{array}{c}
where \\(\\rho_{0}\\) is the axisymmetric component of density and \\(\\rho_{m}\\) is the amplitude of the \\(m\\)th Fourier component for a decomposition of the density in azimuthal angle \\(\\varphi\\).
As a measure of the total nonaxisymmetry, \(A\), a sum of the \(A_{m}\) over all resolved \(m\geq 2\), is computed. The \(A_{1}\) is excluded because our star is fixed at the origin, and so \(m=1\) may not be treated realistically (see Paper III). This sum is quite variable with time, and so the entry \(<\)\(A\)\(>\) in Table 1 is \(A\) averaged over the 13.0-14.0 ORP time period. \(<\)\(A\)\(>\) is
Figure 3: Midplane density maps at 14 ORP for the four cases labelled. Each square enclosing the disk is 155 AU on a side. The color scale is the same as in Fig. 2.
greatest for No-Irr and decreases with increasing \\(T_{irr}\\), in agreement with what we see in Figure 3. Overall, stronger envelope irradiation leads to lower nonaxisymmetric amplitudes. For Irr 50K, not only is its \\(<\\)A\\(>\\) value the smallest, but its GI amplitudes continue to decrease after this time interval. For example, when \\(<\\)A\\(>\\) is re-evaluated for 14.3 - 15.3 ORPs, it is only \\(\\sim\\) 0.28. The stabilization of the disk by \\(T_{irr}\\) = 50K is corroborated by the high \\(Q\\) value at 14.9 ORPs (see Table 1 & Fig. 5). A crude fit to \\(A(t)\\) yields an exponential damping time for the GIs of \\(\\sim\\) 5 ORP.
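A schematic implementation of the \(A_{m}\) diagnostic of equation (5) is sketched below; the grid layout (density indexed as rho[r, phi, z] with \(\varphi\)-independent cell volumes) and function names are assumptions for illustration, not our actual data structures:

```python
import numpy as np

def fourier_amplitudes(rho, dV_rz, m_max=16):
    """Global amplitudes A_m = int(|rho_m|) dV / int(rho_0) dV from a
    density grid rho[r, phi, z], assuming uniform phi spacing and cell
    volumes dV_rz[r, z] that do not depend on phi."""
    nphi = rho.shape[1]
    m_max = min(m_max, nphi // 2)
    c = np.fft.rfft(rho, axis=1) / nphi     # complex azimuthal coefficients
    rho0 = c[:, 0, :].real                  # axisymmetric component
    norm = np.sum(rho0 * dV_rz) * nphi      # integral of rho_0 over volume
    A = np.zeros(m_max + 1)
    for m in range(1, m_max + 1):
        amp = 2.0 * np.abs(c[:, m, :])      # |rho_m| for a real density field
        A[m] = np.sum(amp * dV_rz) * nphi / norm
    return A

# Total nonaxisymmetry used in Table 1: A = sum of A_m over resolved m >= 2
# (m = 1 excluded because the star is fixed at the origin).
```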
To show the relative strengths of the nonaxisymmetric Fourier components, a distribution of \\(<\\)A\\({}_{m}\\)\\(>\\) averaged over the last two ORPs for each sim
Figure 4: \\(<\\)A\\({}_{m}\\)\\(>\\) distribution over the last two ORPs of the simulations. As the irradiation increases, the higher-order modes are preferentially damped, and \\(m=2\\) becomes more pronounced in the profile. Moreover, the profile tends toward a shallower drop off at large \\(m\\) as the low-order modes are suppressed.
Figure 4*. A higher \(T_{irr}\) progressively suppresses high-order components. In contrast, moderate irradiation seems to preserve \(m=2\) selectively, and it is the last component of nonaxisymmetric structure to fade away. For large \(m\), the \(A_{m}\) are roughly fit by an \(m^{-3}\) profile for No-Irr, Irr 15K, and Irr 25K and by a shallower \(m^{-2.7}\) for Irr 50K. As in Paper III, the GIs appear to be dominated by global low-order modes, with superposed gravitoturbulence (Gammie 2001) manifested by the higher \(m\)-values. This interpretation agrees with Fromang et al. (2004)'s hydrodynamics simulations where low-order modes tend to dominate the nonlinear spectrum of GIs. With a high level of irradiation (\(T_{irr}=50\)K), all GI modes are damped.
Footnote *: The \\(m=1\\) component seen in Fig. 4 is probably generated by nonlinear mode coupling (Laughlin & Korchagin 1996), as evidenced by its phase incoherence in a periodogram analysis.
The Toomre \\(Q=c_{s}\\kappa/\\pi G\\Sigma\\), where \\(c_{s}\\) is the adiabatic sound speed and \\(\\kappa\\) is the epicyclic frequency, is calculated using azimuthally-averaged values, which are localized to the midplane for the variables \\(c_{s}\\) and \\(\\kappa\\). The evolution of \\(Q\\) with time is discussed in detail in Cai (2006); here we focus on late times. Figure 5 shows midplane \\(Q(r)\\) profiles at \\(t=14.9\\) ORP during the asymptotic phase, and Table 1 lists the \\(Q_{min}\\) and the radial range \\(\\Delta r_{Q<2}\\) over which \\(Q<2\\) at this late time. Clearly, the low-\\(Q\\) region becomes narrower and the \\(Q_{min}\\) becomes higher for higher \\(T_{irr}\\). Because it is these low-\\(Q\\) regions that drive the GIs, this is another manifestation of GI suppression by irradiation. The radial width of the low-\\(Q\\) region differs among the runs and so does the mass contained within it. To characterize the difference more precisely, mass-weighted average \\(Q\\)-values for \\(t=15\\) ORP are computed between 15-35 AU and are given in Table 1 as \\(<\\)\\(Q_{late}\\)\\(>\\). The values of \\(<\\)\\(Q_{late}\\)\\(>\\) in Table 1 and the curves in Figure 5 show that \\(Q\\) generally increases as \\(T_{irr}\\) increases. In particular, as clearly shown in Figure 5, \\(Q\\) for Irr 50K is nowhere below 2.0, which indicates that the Irr 50K disk is stable against GIs (Pickett et al.1998, 2000; Nelson et al. 2000). The GIs developed earlier in this disk when \\(T_{irr}\\) = 25 K are being damped, as confirmed by the decay of \\(<\\)\\(A\\)\\(>\\) mentioned earlier.
This behavior led us to wonder whether the disk would have become unstable at all if we set \\(T_{irr}\\) = 50K at \\(t\\) = 0. However, when we change the midplane temperatures \\(T_{mid}\\)
Figure 5: The Toomre \\(Q\\)_(r)_ at 14.9 ORPs for the three irradiated disks. The dotted lines delineate the radial range over which \\(<\\)\\(Q_{late}\\)\\(>\\) is computed for Table 1, and the dashed line is \\(Q=2\\).
to 50K wherever \(T_{mid}<50\)K in the initial disk, the initial \(Q_{min}\) only rises to about 1.69, probably not enough to prevent instability during the initial cooling phase. The difference in \(Q_{min}\) values between the initial disk at 50K and the asymptotic disk at 50K is caused by the decrease in surface density due to radial mass transport in the evolved disk (see §3.2.3). A similar test in which we reset the initial \(T_{mid}\) to 70K yields a \(Q_{min}\) of 2.0. Indeed, in a low azimuthal resolution test run with \(T_{irr}=70\)K that covers 2.8 ORP, \(Q_{min}\) saturates at about 2.2, well above the marginal stability limit for nonaxisymmetric GIs of about 1.5 to 1.7 (see Durisen et al. 2007). This demonstrates that, for our disk, there is a critical value of \(T_{irr}\) between 50 and 70K, call it \(T_{crit}\), where a simulation started at \(t=0\) with \(T_{irr}>T_{crit}\) would never become unstable.
#### 3.2.2 Energetics
Figure 6 shows total internal energy and total energy losses due to cooling as functions of time. The \"optically thick\" cooling curves are total energy losses in the interior of the disk due to diffusion as determined from the divergence of the flux; the \"optically thin\" cooling curves are the net energy losses based on integrating equation (1). As in Figure 2 of Cai et al. (2006), due to our limited vertical resolution and the Eddington fit near the photosphere (see SS2.2), the thick curves effectively include energy losses in most of the photospheric layers for most columns through the disk. The thin curves tally additional cooling from extended layers above the photospheric cells, plus the parts of the outer disk that are optically thin all the way to the midplane (hereafter, the \"outer thin disk\"). Figure 6 shows that, while the total energy loss in Irr 15K is dominated
by cooling in the optically thick region, the optically thick cooling in Irr 25K is essentially zero. The net cooling that occurs is primarily due to optically thin emission, especially after about 8 ORP, even though a greater radial range of the disk midplane is optically thick for higher \\(T_{irr}\\).
Figure 6: Cumulative total energy loss as a function of time due to radiative cooling in optically thick (green curves) and optically thin (blue curves) regions. The two black curves that start at 3.6\(\times\)10\({}^{41}\) ergs and lie in the middle of the diagram, and the dotted tail rising upward at \(t=11.5\) ORP for 50K, show how the total disk internal energies (labelled \(E_{int}\)) change with time for the three calculations. The line type denotes the run, as shown in the figure legend.
When \\(T_{irr}\\) is increased to 50K at \\(t=11.5\\) ORP, the optically thick disk heats rapidly until about \\(t=13\\) ORP, after which it cools, as revealed by the positive slope of the \\(E_{int}\\) curve and the negative value of the optically thick \"cooling\" curve for Irr 50K when the curves begin at \\(t=11.5\\) ORP. The Irr 15K and Irr 25K internal energy curves indicate relaxation to a state of quasi-steady balance of heating and cooling, while Irr 50K is still undergoing transient adjustments.
As in Cai et al. (2006), the final global net cooling time (\\(t_{cool}\\) in Table 1) is obtained by dividing the final total internal energy by the final total net radiative energy loss rate using the slopes of the thin and thick curves in Figure 6. Table 1 shows that \\(t_{cool}\\) is longer for a disk with higher \\(T_{irr}\\). When \\(T_{irr}=50\\)K, it reaches \\(\\sim\\)10 ORP. However, it should be noted that, especially for Irr 25K and Irr 50K, instantaneous cooling times computed for individual vertical columns of the disk fluctuate spatially and temporally by factors of several and can even change sign. As reported in Paper III for the No-Irr case, the overall and local cooling times for irradiated disks also tend to increase as they evolve. Gammie (2001) showed that disks fragment only when \\(t_{cool}<\\) about a rotation period. The \\(t_{cool}\\) values in Table 1 suggest that none of our disks should fragment, consistent with what we observe.
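The \(t_{cool}\) bookkeeping and the Gammie fragmentation check amount to the following sketch (inputs would be read off the slopes of the curves in Figure 6; the function name is ours):

```python
def global_cooling_time(E_int, dEdt_thick, dEdt_thin, P_rot):
    """Global net cooling time: total internal energy over the total net
    radiative loss rate (optically thick plus optically thin), together
    with the Gammie (2001) fragmentation check t_cool < ~ one rotation
    period.  All quantities share one system of units."""
    t_cool = E_int / (dEdt_thick + dEdt_thin)
    return t_cool, t_cool < P_rot
```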
We can estimate the actual envelope irradiation being put into the disk from summing \(\sigma T_{irr}^{4}\)(1 - exp[-\(\tau_{mid}\)]) times the top area of each vertical column over the entire disk surface, where \(\tau_{mid}\) is based on the Planck mean opacity (see §2.2). For simplicity, the additional radial irradiation in the Irr 15K case is not counted. The result is tabulated in Table 1 as \(L_{env}\)(code). In all three cases, the numbers are close to the analytic estimations of \(L_{env}\) (Table 1, row 12) with a disk radius R\({}_{\rm d}=50\) AU (see §3.2). For comparison, the total _net_ cooling "luminosity" \(L_{disk}\) (Table 1, row 14) is computed by adding the total _net_ energy loss rates at the end of each simulation. As \(T_{irr}\) increases, \(L_{env}\) becomes \(>>L_{disk}\). So, the absorption and re-emission of the envelope energy input for \(T_{irr}\) = 25 and 50K far outweighs the net cooling of the disk that counterbalances heating from GI activity.
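A sketch of this surface tally follows; the column arrays over \((r,\varphi)\) are assumed, precomputed inputs:

```python
import numpy as np

SIGMA_SB = 5.6704e-5  # erg cm^-2 s^-1 K^-4

def envelope_input(tau_mid, dA, T_irr):
    """Actual envelope energy input: sum of
    sigma*T_irr^4*(1 - exp(-tau_mid)) times the top area dA of each
    vertical column, over the disk surface.  tau_mid[r, phi] is the
    Planck mean optical depth to the midplane and dA[r, phi] the top
    area of each column."""
    return np.sum(SIGMA_SB * T_irr**4 * (1.0 - np.exp(-tau_mid)) * dA)
```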
Figure 7 compares the vertical density and temperature structures in the asymptotic phase for Irr 25K (top) and No-Irr (bottom). Envelope irradiation makes the disk physically thicker and generates a nearly isothermal structure with temperature \(T\) close to \(T_{irr}\). The temperature increase over No-Irr in the outer disk is dramatic. Superficially, this appears to justify the Boss thermal bath approach (see §2.3), but it is still not clear that Boss obtains the correct BC for the optically thick regions of his disks.
Figure 7: Meridional cross-sections of density (left) and temperature (right) at \(t\) = 12.0 ORP (3000 yr). The Irr 25K case is at the top and No-Irr case at the bottom. The density and temperature scales are displayed on the right of each panel, ranging from log(\(\rho\)/g cm\({}^{-3}\)) = -8 to -3 and T = 3 to 50 K, respectively. Note that the vertical scale is exaggerated for clarity; the boxes span 60 AU radially and 5 AU vertically.
Our result also agrees qualitatively with D'Alessio et al. (1997).
The low temperature pocket near the midplane in the optically thin outer region of the Irr 25K disk is probably an artifact of using mean opacities evaluated at different temperatures in equation (1) and of incomplete radiative coupling of cells in the optically thin regions (see §2.2). For \(a_{max}=1\,\mu\)m and \(T<100\)K, the Planck mean is greater than the Rosseland mean (Appendix A of Paper III). As a result, the envelope irradiation gets completely absorbed before reaching the midplane in some columns for the outer disk even though the Rosseland \(\tau_{mid}<2/3\). A related problem is that, in columns where the envelope irradiation is not drained by the midplane, the surviving photons should continue through and be at least partially absorbed on the other side of the disk. These upward moving photons are not included in our scheme. The existence of the overcooled region near the midplane suggests that not many columns are affected by this omission. These two problems do not change our main conclusions, because they both tend to bias our calculations toward _underestimating_ the effect of irradiation.
#### 3.2.3 Mass Transport and Redistribution
Strong mass transport occurs during the burst phase, though it becomes weaker as \(T_{irr}\) increases. Here we limit our discussion to the asymptotic phase, because the initial bursts may be an artifact of our initial conditions (see Paper III). Mass transport rates for this phase averaged over the last 3 ORPs are plotted in Figure 8, and Table 1 (row 15) gives a spatial average of the inward mass transport rates computed between 9 AU and the radius at which the radial mass flow shows a major sign change in Figure 8, which we call the "mass inflow/outflow boundary". For the irradiated disks in Figure 8, both the peak and average \(\dot{M}\)'s decrease as \(T_{irr}\) increases from 15K to 25K. However, the \(\dot{M}\)'s in both these irradiated disks appear to be substantially _higher_ than for No-Irr. This is perhaps the most surprising result of our simulations. As will be discussed at greater length in §3.2.4, it appears that _mild_ irradiation _enhances_ the gravitational torques and hence the mass transport by selectively suppressing higher-order spiral structure while leaving the dominant global two-armed (\(m=2\)) modes relatively intact (see also Figure 3). The mass transport for the Irr 50K run is not shown in Figure 8, because the GIs are damping and so the mass transport observed is a dying transient that should not be compared to asymptotic phase results.
Figure 8: The mass inflow rate calculated from differences in the total mass fraction as a function of radius. The time interval is taken to be the last 3 ORP for each calculation. The mass inflow/outflow boundary (MIOB) is labeled for each curve. In the No-Irr case, we adopt 26 AU as the major MIOB because, over a long period of time, the designated MIOB is the typical sign change boundary.
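Schematically, the Figure 8 diagnostic can be written as below; the function names and sign convention note are ours, for illustration:

```python
import numpy as np

def mass_inflow_rate(m_enc_early, m_enc_late, dt):
    """Mdot(r) from the change in the mass enclosed inside each radius
    between two snapshots separated by dt; positive values mean net
    inward transport across that radius (enclosed mass growing)."""
    return (m_enc_late - m_enc_early) / dt

def inflow_outflow_boundaries(r, mdot):
    """Radii where Mdot(r) changes sign: candidate mass inflow/outflow
    boundaries (MIOBs)."""
    idx = np.where(np.diff(np.sign(mdot)) != 0)[0]
    return r[idx]
```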
During the burst phase, the initial surface density profile \\(\\Sigma(r)\\sim r\\ ^{-0.5}\\) is destroyed. As in Paper III, during the asymptotic phase, the inner disks show substantial concentrations of mass in annular regions, while the azimuthally averaged \\(\\Sigma(r)\\) in the outer disks can be reasonably well fit by a power law. For Irr 15K, a fit of \\(\\Sigma\\sim r\\ ^{-p}\\) gives \\(p\\sim 2.85\\) between 22 and 40 AU. The Irr 25K disk is similar but with \\(p\\sim 3.5\\) between 25 and 40 AU. Compared with No-Irr (see Paper III for details), the irradiated disks fall off more steeply outside 40 AU, the initial outer radius of the disk. Because it is GI torques during the burst that expand the disk, the steeper fall off probably results from weaker bursts for irradiated disks. In Paper III, we found that \\(\\Sigma(r)\\) for the No-Irr in the asymptotic phase could be fit over a very wide range of radii by a Gaussian. This is not the case for the irradiated disks, and so we do not present Gaussian fits.
Like the simulations presented in Papers II and III and in Cai et al. (2006), dense gas rings are produced in the inner portions of the irradiated disks and appear to still be growing when the calculations end (see Cai 2006 for more detail). Nearly axisymmetric rings form at about 6.5 and 8 AU for Irr 15K and are obvious in azimuthally-averaged surface density plots of inner disks, as shown in Figure 9. The surface density maxima at 11, 15, and 20 AU represent strong radial concentrations of mass, which exhibit strong nonaxisymmetric behavior and are not rings. Durisen et al. (2005) suggested that rings and radial concentrations may act as regions of accelerated planetary core formation (see also Rice et al. 2004, 2006). For \(r>10\) AU, the physics in our simulations is well resolved. There, the radial concentrations are real and associated with mass transport by discrete global two-armed spirals. We have begun higher resolution simulations, to be reported elsewhere, to test whether or not the innermost axisymmetric rings are physically real or are artifacts of poor vertical resolution in the inner disk. Regardless, the inner regions of real disks should be kept hot and GI-stable by physical processes not included in our code (see, e.g., Najita et al. 2006), and rings are expected to form at boundaries between GI active and inactive regions, as shown already in Paper I, Paper II, and Durisen et al. (2005).
Figure 9: Azimuthally-averaged surface density distribution for the inner disk of Irr 15K. Shown are the averaged surface density profiles at 0, 9.9, 12.5, and 15.7 ORP.
#### 3.2.4 Torques, Modes, and Effective \(\alpha\)
In Paper III, the gravitational and Reynolds (or hydrodynamic) torques associated with mass and angular momentum transport were calculated for the No-Irr disk. Similar calculations for the irradiated disks are shown in Figure 10. We only plot the gravitational torques, because the Reynolds torques are poorly determined and tend to be considerably smaller. In both the Irr 15K and Irr 25K cases, the mass inflow/outflow
Figure 10: _Upper left_. The gravitational torque for each simulation time-averaged over the last three ORPs. The torque becomes smoother and larger for moderate irradiation, while strong irradiation suppresses the torque. _Upper right_. A Fourier deconstruction of the gravitational torque for the No-Irr simulation over the same time period. The curve labeled M = 1 is the torque due only to the Fourier component \(m=1\), the curve labeled M = 1,2 is the torque due to \(m=1\) and the torque due to \(m=2\) added together, and so on. _Lower left_. The same for the Irr 15K simulation. _Lower right_. The same for the Irr 25K simulation. As \(T_{irr}\) increases, the \(m=2\) component becomes more dominant in the torque. In all of the irradiated cases, almost all the torque can be accounted for by the four lowest-order modes.
boundaries shown in Figure 8 roughly align with the global maxima of the gravitational torques (Figure 10, upper left panel) when averaged over the same interval as the mass transport rates, consistent with what is observed in the No-Irr disk (Paper III). A slight degree of _mis_alignment is probably due to the different sampling rate used for the various time-averaged values and the difficulty of calculating accurate Reynolds stresses in our code (see Paper III).
The \\(m\\)-wise reconstructions in the upper right and the two lower panels of Figure 10 demonstrate that, as \\(T_{irr}\\) increases, the \\(m=2\\) or two-armed component of the nonaxisymmetric structure becomes more dominant. This is consistent with the GI spectra of Figure 4 and suggests that \\(m=2\\) spirals are the principal mode for mass and angular momentum transport in the irradiated disks. A periodogram analysis of \\(m=2\\) for Irr 15K (as in Figure 11 of Paper III for No-Irr) confirms that several discrete coherent global pattern periods have corotation radii that line up with the peaks in the torque profile in the lower left panel of Figure 10. The upper left panel of Figure 10 also shows that the magnitudes for the torques in Irr 15K and Irr 25K are comparable and, at their peaks, are roughly 1.4 times the maximum torque of No-Irr. This is consistent with the high average mass transport rates in these runs reported in SS3.2.3 and Table 1. We speculate that the \\(m=2\\) spirals, being more global, are more efficient at producing net torques, while high-order modes tend to produce more fluctuations and cancellation on smaller scales, as evidenced by the multi-peaked character of the No-Irr curves in Figure 8 and Figure 10. So, the torque increases when mild envelope irradiation preferentially suppresses modes with \\(m>2\\). On the other hand, as shown for the Irr 50K case in Figures5 and 10 (upper left), too much irradiation suppresses GIs and hence mass transport altogether by heating the disk to stability.
Also, as in Paper III, we calculate an effective Shakura & Sunyaev (1973) \(\alpha\) from the torque profile, as shown in Figure 11. Despite the larger torques and greater domination by \(m=2\) for \(T_{irr}=15\) and 25K, the effective \(\alpha\) is lowered as the irradiation is increased. This may at first seem paradoxical. However, the effective \(\alpha\) due to gravitational torques is proportional to the vertically integrated gravitational stress tensor divided by the square of the sound speed times the disk surface density (see equation [20] of Paper III). Effectively, \(\alpha\) is a dimensionless measure of the relative strength of gravitational stresses and gas pressure. In the asymptotic phase of the Irr 15K and Irr 25K simulations, the sound speed is higher due to the increase in the midplane temperature caused by irradiation, and \(\Sigma\) is also higher in the 15 to 35 AU region because the initial GI burst is weaker. We have verified numerically that the increase in \(\Sigma{c_{s}}^{2}\) does outweigh the higher torques as \(T_{irr}\) increases to give lower \(\alpha\) values. It is not clear from these simulations what would happen to \(\alpha\) if we increased \(T_{irr}\) for an asymptotic disk with the same initial \(\Sigma\). This will be explored in future papers. Figure 11 also shows that the uniformity of the irradiation tends to drive \(\alpha\) toward a constant value over a large fraction of the GI-active region. The GIs in Irr 50K are damping, and so \(\alpha\) should approach zero in this case on the same time scale (\(\sim\) 5 ORP). The \(\alpha\) values at 30 AU for No-Irr, Irr 15K, and Irr 25K are 2.5\(\times\)10\({}^{-2}\), 1.3\(\times\)10\({}^{-2}\), and 8\(\times\)10\({}^{-3}\), respectively, comparable to values reported in other GI studies for global simulations with similar cooling times (Lodato & Rice 2004, Paper III, Boley et al. 2007c).
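A schematic version of this diagnostic is given below; the normalization follows the standard Gammie (2001) form and is not meant to reproduce eq. [20] of Paper III exactly:

```python
import numpy as np

G = 6.674e-8  # cgs

def effective_alpha(g_r, g_phi, dz, sigma, cs_mid, dlnO_dlnr=-1.5):
    """Effective Shakura-Sunyaev alpha from gravitational torques: the
    vertically integrated gravitational stress
    T_grav = integral of g_r*g_phi/(4*pi*G) dz, normalized by
    |dlnOmega/dlnr| * Sigma * c_s^2.  g_r and g_phi are gravitational
    acceleration components on columns with z along the last axis;
    dlnO_dlnr = -3/2 for a Keplerian disk."""
    t_grav = np.sum(g_r * g_phi, axis=-1) * dz / (4.0 * np.pi * G)
    return np.abs(t_grav) / (abs(dlnO_dlnr) * sigma * cs_mid**2)
```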
### 3.3 Massive Embedded Disk
Recently, a few embedded disks have been resolved and studied in detail by millimeter-wave interferometry. Examples include L1551 IRS 5 (Osorio et al. 2003), SVS 13 (Anglada et al. 2004) and IRAS 16293-2422B (Rodriguez et al. 2005). These YSOs are all very young (Class 0 or I), with the stars and massive circumstellar disks embedded in huge envelopes. Although massive, these disks are small in size. Except for SVS 13, the Toomre \\(Q\\)-values are estimated to be small in the outer part of the disks. For example, Osorio et al. (2003) fit the SEDs with a composite model that includes almost
Figure 11: Effective \\(\\alpha\\) averaged over the last 3 ORP of each simulation as a function of radius. Despite the increase in total torque for moderate irradiation, the effective \\(\\alpha\\) is suppressed as \\(T_{irr}\\) increases due to an increase of the midplane temperature.
all components of the L1551 IRS5 binary system, including the envelope, the circumbinary disk, and both circumstellar disks. The disk parameters were derived (see Table 2 of Osorio et al. 2003) based on fitting the observed fluxes at (mostly) millimeter (mm) wavelengths with irradiated steady-state \\(\\alpha\\)-disk models. They find that the disk masses in L1551 IRS 5 are comparable to those of the accreting stars, with a Toomre \\(Q<1\\) in the outer disks. Using the radiative scheme described in SS2.2, we have performed a disk simulation tuned to their derived parameters. In fact, our group has modeled such massive, young, stubby disks in earlier papers (e.g., Pickett et al. 1997, Pickett et al. 1998).
#### 3.3.1 Initial Model
Following the procedures outlined in Paper I, an initial axisymmetric equilibrium model was generated with a specified \(M_{d}\)/\(M_{tot}\), \(R_{d}\)/\(R_{p}\), and power-law index \(p\) of the surface density distribution. We choose \(M_{d}\)/\(M_{tot}\) = 0.4 to resemble the Northern disk of L1551 IRS 5, where \(M_{star}\) = 0.3 \(M_{\odot}\) and \(M_{d}\) = 0.2 \(M_{\odot}\) (see Table 2 of Osorio et al. 2003). The outer radius of the disk \(R_{d}\) is set to 15 AU, where 1 AU = 16 cells, and \(p\) = 1.0. After creating an equilibrium star/disk model, a modified 2D code was used to remove the star and replace it with a point mass potential. The disk model is then evolved in 2D for a few thousand steps to quench the waves generated by removing the star. The ORP is now defined as the orbital period at 14.5 AU or about 76.6 years. The initial grid has (256, 128, 32) cells in (\(r\), \(\varphi\), \(z\)) above the midplane. The initial \(Q\) profile shown in Figure 12 has a \(Q_{min}\) of about 0.9, a little higher than Osorio et al. (2003)'s prediction for the Northern disk but still below 1. This difference owes partly to the use of the isothermal sound speed in \(Q\) by Osorio et al., in contrast to the adiabatic sound speed (\(\gamma=5/3\)) we use. Following Osorio et al., an envelope irradiation temperature \(T_{irr}=120\)K is applied. The maximum grain size \(a_{max}\) in the D'Alessio et al. (2001) opacity is set to be 200 \(\mu\)m to be similar to the Northern disk, and the mean molecular weight is 2.34 proton masses.
#### 3.3.2 The Evolution
As in the simulations of §3.2, the disk remains fairly axisymmetric for the first 3 ORPs of evolution, which is surprisingly long for a disk with initial \(Q_{min}<1\). Toomre's (1964) stability analysis is based on a thin disk approximation, and so the finite thickness of this disk introduces a correction factor to \(Q\) when used as a stability criterion (Romeo 1992, Mayer et al. 2004). According to Mayer et al. (2004), the corrected \(Q^{*}\) is related to the conventional \(Q\) by \(Q^{*}=(1+2\pi h_{d}/\lambda_{mu})Q\). The quantity \(\lambda_{mu}\) is the "most unstable wavelength" and equals 0.55\(\lambda_{crit}\) (Binney & Tremaine 1987), where the "Toomre wavelength" \(\lambda_{crit}=4\pi^{2}G\Sigma/\kappa^{2}\). The quantity \(h_{d}\) is the disk scale height, which we compute as \(h_{d}=0.5\Sigma/\rho_{mid}\), following Romeo (1992), where \(\rho_{mid}\) is the midplane density. The correction factor is about 1.5 to 2 in the outer disk, and the uncorrected initial \(Q_{min}\) of 0.9 becomes \(Q^{*}_{min}\approx 1.4\), as shown in Figure 12. This value indicates that, instead of being violently unstable to axisymmetric disturbances, this disk is only marginally unstable to nonaxisymmetric modes, similar to the initial model presented in §3.1.
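Applied to azimuthally averaged profiles, the correction reads as follows (a sketch; cgs units and the function name are our assumptions):

```python
import numpy as np

G = 6.674e-8  # cgs

def thickness_corrected_q(q, sigma, kappa, rho_mid):
    """Finite-thickness correction of Mayer et al. (2004):
    Q* = (1 + 2*pi*h_d/lambda_mu)*Q, with lambda_mu = 0.55*lambda_crit,
    lambda_crit = 4*pi^2*G*Sigma/kappa^2 (the Toomre wavelength), and
    scale height h_d = 0.5*Sigma/rho_mid (Romeo 1992)."""
    lam_mu = 0.55 * 4.0 * np.pi**2 * G * sigma / kappa**2
    h_d = 0.5 * sigma / rho_mid
    return (1.0 + 2.0 * np.pi * h_d / lam_mu) * q
```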
By \\(t=3.5\\) ORP, spiral structure is apparent in the disk. Then the arms expand outward until 4.6 ORP (see Figure 13). During this burst phase, the disk is dominated by global \\(m=2\\) and 3 spiral modes and expands greatly in both the radial and vertical directions. Not only was the number of radial cells doubled to 512, but the number of vertical cells had to be quadrupled to 128 in order to avoid losing too much mass off the grid. Unfortunately, the calculation had to be stopped at \\(t=4.6\\) ORP due to a numerical difficulty that arises when the dense spiral arms become separated by very low-density inter-arm regions. The flux-limited diffusion calculation in the \\(r\\) and \\(\\varphi\\)-directions apparently breaks down due to the high resulting density and temperature contrasts and produces unphysical results. Figure 12 shows the \\(Q\\)(\\(r\\)) and \\(Q\\)*(\\(r\\)) profiles at \\(t=0\\) and 3.6 ORP. By \\(t=3.6\\) ORP, both \\(Q\\)s increased, primarily in the outer disk. Shock heating is an important heating source, as it is in the burst phase of other simulations (Papers II and III). The intense envelope irradiation does not seem to affect the disk evolution, at least at this stage.
The actual envelope irradiation shining onto the disk, calculated as in SS3.2, is about 7.7\\(\\times\\)10\\({}^{33}\\) erg/s at \\(t=4\\) ORP, whereas the net \\(L_{disk}\\) is almost 5 orders of magnitude lower, which indicates a very long overall net cooling time. In fact the optical depth to the midplane at \\(r\\sim\\)10 AU is \\(\\sim\\) 4\\(\\times\\)10\\({}^{3}\\). A large optical depth and slow radiative cooling are expected for the high surface density and large grain size.
Figure 12: Q(r) profile at t = 0 and 3.6 ORP. The black curves represent the initial Q profile, with dotted curves denoting the uncorrected and solid curves being corrected Q. The red curves are Q(r) at 3.6 ORP, where the dashed curve shows the uncorrected Q, and the solid one is the corrected Q. The values Q = 1 and 1.5 are plotted in black dashed lines for reference.
Figure 13: Selected midplane density contours for the L1551 IRS 5-like disk. Each square enclosing the disk is 58 AU on a side. The number at the upper right corner of each panel denotes the time elapsed in ORP units. Densities are displayed on a logarithmic color scale from dark blue to red, with densities
ranging from about 4.6\\(\\times\\)10\\({}^{-14}\\) to 2.3\\(\\times\\)10\\({}^{-9}\\) g cm\\({}^{-3}\\), except that the scale saturates to white at even higher densities.
### 3.4 Simulations with a Boss-like Boundary Condition
To investigate whether the apparent disagreement between our results and those of Boss is primarily due to radiative boundary conditions, two "Boss-like" calculations were carried out with the Boss-like radiative BC (see §2.3). To compare with the irradiated disk simulations presented in §3.2, the background temperature \(T_{B}\) is set to 15K or 25K, with the same initial axisymmetric model (§3.1). We refer to the resulting simulations as Boss 15K and Boss 25K, respectively. Following Boss (2001, 2002), the artificial bulk viscosity (AV) is turned off.
Both Boss-like disks develop \"bursts\" after only a few ORPs, then gradually adjust and appear to approach a quasi-steady state, similar to the irradiated disks. The disk structures do not resemble those of Boss disks, and no clumps are produced. A comparison of the bursts between Irr 25K and Boss 25K indicates that the presence of AV heating tends to maintain higher temperatures and lower the GI amplitudes. In the Boss 25K case, most of the disk cools down to 25K by 3 ORPs.
In both Boss-like calculations, by \\(t=8\\) ORP, just before the disks enter the asymptotic phase, the inner disk becomes very thin and dense and the inner edge fragments, but with an unphysical-looking pattern. We suspect the fragmentation is artificial in nature due to the high density and low resolution there (Truelove et al. 1997, Nelson 2006). A calculation of the local Jeans length and the local grid resolution revealsthat the Jeans number is about 1/8 to 1/10, smaller than the upper limit (1/4) that Truelove et al. (1997) suggested for stability. However, a proper Truelove et al. criterion may be code specific, because it is likely to depend on grid geometry, advection scheme, and differencing scheme. Since we use a cylindrical grid while Truelove et al.'s work was based on Cartesian grid simulations, we suspect that the fragmentation is numerical and that the critical Jeans number for artificial fragmentation is code dependent. Nelson (2006) pointed out that, for a disk, a Toomre criterion is more appropriate than the Jeans analysis. Nevertheless, according to Nelson's equation (16), the Toomre wavelength is comparable or even larger than the local Jeans wavelength in our case (\\(Q>1\\)). Another possible contributing factor to the numerical instability is the breakdown of our radiative routine in the inner disk due to insufficient vertical resolution (Nelson 2006). To investigate all this, a stretch of the Boss 15K was re-run, with doubled resolution in all three directions. The result is that the inner edge did _not_ fragment, at least for the 0.7 ORP of the rerun, which confirms the artificial nature of the fragmentation. In fact, in Boss's own disk simulations with spherical coordinates, his Jeans number is not far from 1/4, especially at the loci of dense clumps (Boss 2002).
Our Boss-like disks seem to reach higher nonaxisymmetric amplitudes. However, without data in the asymptotic phase, there is not much that can be concluded from the comparisons, except that the Boss-like BCs seem to allow more rapid cooling and contraction of the inner disk. As mentioned in SS2.3, there are some BCs that can only be implemented with a similar starting model. The use of an initial disk similar to the one used in the Boss simulations seems necessary. We constructed such an initial model according to the formulations in Boss (1993, 2003) and carried out two disk simulations using both a Boss-like BC and our radiative scheme. Drastic differences were observed. We are trying to understand the results in a collaborative effort with Boss. More details will be included in a forthcoming paper.
## 4 Discussion
### The Effect of Irradiation on Gravitational Instabilities
In this paper, we have reported a series of simulations for a disk with a radius of 40 AU and a mass of 0.07 M\\({}_{\\oplus}\\) around a young star of 0.5 M\\({}_{\\oplus}\\) to study how variations in the amount of IR envelope irradiation onto the disk affects gravitational instabilities. As the irradiation temperature \\(T_{irr}\\) is increased, the nonaxisymmetric amplitudes of GIs decrease, the Toomre \\(Q\\)-values increase, the energy input from irradiation becomes more dominant in the disk thermal equilibrium, the effective \\(\\alpha\\) derived from GI torques decreases, and the overall net cooling times become longer. In particular, in the Irr 50K simulation, when \\(T_{irr}\\) is increased from 25 to 50K during the asymptotic phase, the GIs damp out on a time scale comparable to the cooling time. The increase in \\(Q\\)-values when \\(T_{irr}\\) is increased indicates that the disk becomes gravitationally stable for this \\(T_{irr}\\) = 50K case (see Fig. 5). Thus, we conclude that mild envelope IR irradiation weakens GIs, and strong irradiation suppresses them altogether. In all cases, envelope irradiation raises the outer disk temperature significantly above its initial values, as found by D'Alessio et al. (1997).
Perhaps our most interesting and unexpected result is how the radial mass inflow rate \\(\\stackrel{{\\mathbf{\\frown}}}{{\\mathbf{ \\frown}}}\\)varies with \\(T_{irr}\\). In simulations with idealized cooling, where the cooling time is held fixed everywhere, Paper II found that the asymptotic mass transport rate is inversely proportional to cooling time. Here we find that, with mild irradiation (\\(T_{irr}=15\\)K), the \\(\\mathbf{\\overline{\\Lambda}_{\\mbox{becomes}}}\\) significantly _larger_ than for no irradiation at all, even though the overall cooling time is longer (see Table 1). As discussed in SSSS3.2.1 and 3.2.4, mild irradiation (\\(T_{irr}=15\\) and 25K) selectively suppresses high-order spiral structure without affecting the strength of the two-armed (\\(m=2\\)) modes that dominate the global torques and drive the mass transport. We think the reason behind this selective suppression of the high-order modes is that irradiation increases the sound speed (c\\({}_{s}\\)), which tends to stabilize the instabilities with short wavelengths (Toomre 1964). This increase in \\(c_{s}\\) is reflected by the increase of the asymptotic \\(Q\\)-values in the irradiated disks as \\(T_{irr}\\) is increased (cf. Table 1 and Fig.5). Mild irradiation thus increases the total torque, but, as explained in SS3.2.4, the effective \\(\\alpha\\) decreases somewhat as \\(T_{irr}\\) increases due to the larger values of \\(\\Sigma{c_{s}}^{2}\\) in the asymptotic states of irradiated disks. The torque increase associated with isolation of the \\(m=2\\) waves reinforces arguments in Papers II and III that GIs are an intrinsically global phenomenon (see also Balbus & Papaloizou 1999). In fact, the same mode-dependent suppression is observed in simulations with varied metallicity and grain size presented in Cai et al. (2006), though no discernable difference in mass transport rates was noticed in that paper, probably due to the low accuracy used in the analysis. As irradiation is strengthened slightly from \\(T_{irr}=15\\) to 25K, the mass inflow rate decreases somewhat but is still higher than for no irradiation at all. At \\(T_{irr}=50\\)K, irradiation is sufficient to suppress GIs entirely in the asymptotic phase and mass inflow shuts off.
While \\(T_{irr}=50\\)K is enough to damp GIs in the asymptotic phase, we found in SS3.2.1 that this is not a high enough \\(T_{irr}\\) to prevent GIs from growing in the initial disk.
So, there seem to be three regimes of behaviors with increasing \\(T_{irr}\\): 1) For \\(T_{irr}<T_{crit1}\\), the disk is able to become gravitationally unstable and remains unstable thereafter. 2) For \\(T_{crit2}>T_{irr}>T_{crit1}\\), the disk becomes unstable, but the GIs are eventually damped. 3) For \\(T_{irr}>T_{crit2}\\), the disk never becomes unstable. For our disk, we find that \\(T_{crit1}\\) is between 25 and 50K and \\(T_{crit2}\\) between 50 and 70K. The difference between \\(T_{crit1}\\) and \\(T_{crit2}\\) is due to the drastic redistribution of mass during the burst.
We suspect that our main conclusion - that irradiation tends to weaken and even suppress GIs - probably generalizes qualitatively to other types of irradiation, such as stellar irradiation and external irradiation from nearby stars. On the other hand, our preliminary calculation of a highly embedded massive disk (SS3.3) demonstrates that a disk can become unstable even with \\(T_{irr}\\) as high as 120K. Weakening and stabilization of GIs by irradiation makes direct planet formation by GIs in irradiated disks less likely, and realistic modeling of the GIs in such embedded disks must take envelope irradiation into account.
**4.2 Comparison with Analytic Arguments**
Rafikov (2005) has argued analytically that, for realistic opacities, an unstable disk and fast enough cooling for fragmentation are incompatible constraints in the planet-forming regions of radiatively cooled disks. More recently, in agreement with our own results in Paper III, he has furthered argued (Rafikov 2006) that, contrary to Boss (2004), convection cannot substantially change this conclusion. The simulations in this paper show that irradiation makes this problem worse. Modifying Rafikov's arguments by using \\(Q_{0}=1.5\\) instead of 1 as the stability threshold for GIs (as also critiqued by Boss 2005), we can evaluate \\(\\Sigma_{\\rm inf}\\) and T\\({}_{\\rm inf}\\) in Rafikov's (2005) equations (7) and (10)) based on our disk parameters at \\(r=20\\) AU, where \\(Q\\) is low (see Fig. 5). With his f(\\(\\tau\\)) factors, we find that Rafikov's (2005) fragmentation conditions (6) and (9) predict no fragmentation, consistent with what we see in the asymptotic phase of the simulations. In the asymptotic phase, we expect our disks to hover near marginal stability. With \\(Q_{0}=1.5\\), Rafikov's (2005) criterion (2) for GIs predicts midplane densities that are within a factor of two of what we see at 20 AU for the Irr 15K and Irr 25K simulations, which is a good agreement for such an approximate analysis.
By examining the stability of steady-state accretion disk models, Matzner & Levin (2005) also argue analytically that gravitational fragmentation will be suppressed under realistic conditions and is not a viable planet formation mechanism. Their analysis considered both the local instability and a global one, where the latter applies to very early phases when disks are massive and might be subject to global SLING instability (see Shu et al. 1990). Their treatment of disk irradiation is qualitatively similar to what our irradiated disks achieve in the asymptotic phase, and their estimate for the fraction of stellar surface flux tapped (\\(f_{F}\\), equivalent to \\(L_{env}\\)/\\(L_{star}\\) in SS3.2) puts their typical disk irradiation intensity between our Irr 25K and Irr 50K. Not surprisingly, we reached similar general conclusions.
However, both Rafikov and Matzner & Levin's local GI analysis are based on the Gammie fragmentation criterion (2001), which may not hold precisely for a disk with realistic radiative cooling (Johnson & Gammie 2003). Moreover, the critical value in Rafikov's (2005) criterion (3) changes with the first adiabatic index (Rice et al. 2005), which will vary in real disks (see, e.g., Boley et al. 2007a).
### Comparisons with Other Numerical Work
Boss (2002) explored effects on GIs of varying some of his thermodynamic assumptions, including variations in the \"outer disk temperature\" \\(T_{0}\\), which controlled the initial \\(Q_{min}\\). Because Boss, in these tests, resets \\(T_{0}\\) every time the disk cools below this temperature, \\(T_{0}\\) is analogous to our \\(T_{ir}\\) in terms of its effect (see Fig. 7). In one model with \\(T_{0}=150\\)K, the initial \\(Q_{min}\\) reached 2.6, and \"only weak spiral arms\" developed. This high \\(T_{0}\\) model resembles our Irr 50K disk, where intense envelope irradiation damps GIs gradually. Boss (2002) also noted that clump formation is slightly delayed in time in models with moderate \\(T_{0}\\) when compared to those with lower \\(T_{0}\\), similar to the delayed burst we report in SS3.2.1. So, indirectly, our main finding that irradiation tends to weaken GIs was anticipated in the Boss (2002). A critical difference, however, is that our disks do not fragment into dense clumps, regardless of \\(T_{ir}\\). Boss (2004) attributes fragmentation in his disks with radiative cooling to rapid cooling by convection, whereas our cooling times in Table 1 are too long for fragmentation to occur, and, as we will now explain, we do not see and do not expect convection in these irradiated simulations.
The superadiabatic regions seen during the axisymmetric phase of No-Irr (Paper III) are, for the most part, absent in Irr 15K and Irr 25K. There is some convection during roughly the beginning of each irradiation simulation, probably due to our isentropic starting condition, but it disappears after about \\(t=2\\) ORP, well before the GIburst. Because No-Irr convects until the onset of GIs, we surmise that the irradiation makes the temperature profile too shallow for superadiabatic regions to form even without GI-activity. There are some superadiabatic regions at the interface between the interior and the atmosphere of the disk due to the sudden temperature drop shown in Figure 1 (see Paper III for more details), but because the optical depths are low at the drop, convection is not present at these altitudes. During the asymptotic phase, we find no sign of convection even though a large volume is still optically thick in both irradiated disks. So, even the mild irradiation of the \\(T_{irr}\\) = 15K case is sufficient to suppress convection. For the irradiated simulations, there are strong vertical motions, but they are associated with shock bores, as in No-Irr (Boley & Durisen 2006, Paper III). We strongly suspect that the vertical motions seen by Boss (2004) are not convection. Even if convection does occur in a disk, we would not expect it to produce substantially more rapid cooling, because all disk energy transported vertically must still ultimately be radiated away (Paper III, Ravikov 2006).
Another question of interest to GI researchers is whether there is time in a disk's evolution during which GIs can be approximated by a local description (Laughlin & Rozyczka 1996, Balbus & Papaloizou 1999, Gammie 2001, Lodato & Rice 2004, Paper II, Paper III). As summarized in SS4.1, all the calculations presented in this paper strengthen the case made by Paper III that GIs are dominated by _global_ low-order modes. Here, we find that mild envelope irradiation enhances global behavior by selectively suppressing high-order modes, while not reducing the amplitudes of the lowest-order modes, especially \\(m=2\\). On the other hand, the effective \\(\\alpha\\) due to the GIs becomes more uniform in \\(r\\) as \\(T_{irr}\\) increases, and it may be possible to crudely approximate the mass transport by GIs with an \\(\\alpha\\) prescription. However, the picture of mass slowly diffusing inward is misleading, because global low-order modes induce large fluctuations in the radial motions of fluid elements and lead to shock bores (Boley & Durisen 2006). Moreover, substructure caused by resonances with the global modes will likely be missed in a local description. Such substructures may be important to planet formation (Durisen et al. 2005) because solids are marshalled by gas drag into regions of pressure maxima (Weidenschilling 1977; Haghighipour & Boss 2003; Rice et al. 2004).
### 4.4 Implications for Real Protoplanetary Disks and Planet Formation
Real disks are probably irradiated on the surface not only by the envelope but also by a variety of other sources, such as the central star and nearby stars, as evidenced by recent molecular line observations (e.g., Najita et al. 2003, Qi et al. 2005). Although different forms of irradiation require different treatments, our simple blackbody approximation gives us some general insight. All forms of surface irradiation will cause heat to flow into the deeper layers of the disk. Judging by the luminosities in Table 1, we surmise that, if we start with a marginally unstable disk, GIs become completely suppressed once the irradiation energy input becomes more than one or two orders of magnitude greater than the energy dissipation rate in the disk interior that can be sustained by GIs. Such energy input rates by external radiation are not unusual (D'Alessio et al. 2001). At levels below this, irradiation significantly alters the structure of GIs, tending to make them more ordered and global, with notable enhancement of gravitational torques and mass inflow rates. The selective damping of higher order modes means that significant irradiation makes fragmentation of a disk much less likely.
Our treatment of envelope irradiation assumes that \\(T_{irr}\\) does not vary with radius or time. Irradiation of real disks, especially by their central stars, is probably spatially inhomogeneous and time variable. Variability raises the interesting possibility that the nature of GIs could vary significantly over GI-active regions of the disk in strength, mass transport rate, and structure. In a disk simulation where we adopt a variable \\(T_{irr}\\) profile \\(T_{irr}\\)(\\(r\\)) \\(\\sim\\)\\(r^{-0.5}\\), with a total \\(L_{env}\\) matching that of Irr 25K run, preliminary results, to be reported elsewhere, suggest that stellar irradiation may be somewhat more effective in suppressing GIs than envelope irradiation. Protracted shadowing or strong illumination of disk regions could have a profound impact on GI behaviour as well. The complete suppression of GIs in the Irr 50K simulation requires about one global cooling time, or about two thousand years. This is probably the characteristic response time to temporal variations in irradiation for the 10s AU regions of young disks for the opacities we assume. In the future, we need to model envelope irradiation in more detail, by calculating the reprocessing of the stellar irradiation, as in Matzner & Levin (2005). We also need to attempt realistic treatment of other forms of irradiation.
For an embedded disk, the envelope not only acts as a hot blanket, but it also replenishes mass onto the disk continuously through infall, which may make the disk more unstable to GIs (Mayer et al. 2004, Vorobyov & Basu 2005, 2006). On the other hand, this process will also result in an accretion shock (Banerjee, Pudritz & Holmes 2004, Banerjee & Pudritz 2006) that might tend to suppress GIs. As shown by Chick &Cassen (1997), efficient accretion could significantly increase both the disk and envelope temperatures. Our Irr 50K case shows that such heating could damp GIs completely. Taken together, the fate of an embedded disk may depend on which of the physical processes (envelope irradiation, accretion shock heating, and mass accumulation) is most efficient. There are many factors that could affect disk fragmentation in the embedded phase, so it may still be a bit too early to say that GIs cannot produce protoplanetary clumps in an embedded disk under realistic conditions.
While tending to inhibit fragmentation and hence the direct formation of planets by disk instability, irradiation tends to enhance the global character of GIs. As a result, irradiation still allows the production of rings and radial concentrations of gas, and so a hybrid theory remains viable, where GIs accelerate core accretion by providing dense structures into which solids can become concentrated (Rice et al. 2004, 2006, Durisen et al. 2005).
Some attempts were made in Paper III to generate spectral energy distributions (SEDs) from the simulations. It is premature, however, to do so in our case, because we do not include a detailed model for the envelope, which would contribute significantly to the SEDs (e.g., Calvet et al. 1994, Miroshnichenko et al. 1999). Another limitation is our use of a grain size distribution similar to that of the interstellar grains, which has proven to be inappropriate for protoplanetary disks (D'Alessio et al. 2001), even for those in the embedded phase (Osorio et al. 2003). We use small grains here in order to make direct comparisons with the No-Irr simulation from Paper III.
When we simulate a small, massive disk with radiative cooling (SS3.3), the disk goes unstable even with an envelope irradiation of 120K and larger grain sizes. Unlessthe disk parameters for L1551 estimated by Osorio et al. (2003) do have large systematic errors (especially the disk masses), this implies that the development of GIs and strong spiral structure is inevitable in heavily embedded phases, when the disks are comparable in mass to their central stars. Powerful millimeter instruments are being built which hold promise for the detection of such spiral structures (see, e.g.,Wolf & D'Angelo 2005). Unfortunately, our L1551-like disk simulation could not be integrated more than a few outer orbit periods, and so we cannot say with any confidence how this disk will behave over longer periods of time. It is not even clear that such a massive disk will settle into an asymptotic phase. It may instead be subject to episodic bursts (Lodato & Rice 2005, Vorobyov & Basu 2005, 2006). What we can say with some confidence is that the cooling times are very long, and the disk is unlikely to fragment.
During our short integration of the L1551-like disk, the radial mass distribution undergoes considerable modification. The disk expands to a size approaching the Roche lobe radius of the system in a few orbit periods. In this sense, the Osorio et al. disk parameters are not physically self-consistent, i.e., the low-\\(Q\\) disk they describe is strongly and globally unstable. Because the binary separation is only \\(\\sim\\) 40AU, about three times the radius of either disk (see Table 2 of Osorio et al. 2003), any further modeling of GIs in an L1551-like disk would have to take the gravitational effect of the binary companion into account. In fact, tidal truncation has been suggested as an explanation for the small disk radii (Artymowicz & Lubow 1994, Rodriguez et al. 1998, Lim & Takakuwa 2005).
Inasmuch as the L1551-like disk is an extremely massive, optically thick disk, it is worthwhile to consider what other accretion mechanisms may be viable. Energetic particles have a typical attenuation of 100 g cm\\({}^{\\cdot 2}\\) (Stepinski 1992), and any nonthermallyionized layer would likely be fairly shallow, but a reasonable estimate for its depth is beyond the scope of this paper. As argued by Hartmann et al. (2006), a thin MRI-active layer likely will be unable to create Reynolds stresses that affect the midplane of the disk as seen in layered accretion simulations with thick (\\(>\\)10%) MRI-active layers (Fleming & Stone 2003). Moreover, the temperatures only become high enough to ionize thermally alkalis (T\\(>\\)1000K; e.g., Gammie 1996) in approximately the first AU of the simulated disk. However, it should also be noted that T\\(>\\)1000K may be insufficient to trigger a thermal MRI because dust will deplete ionized species. It may be necessary for temperatures to approach dust sublimation before a thermal MRI becomes effective (e.g, Desch 2004). It seems that the primary heating mechanisms for the interior of this disk are related to gravity, e.g., gravitational contraction in the vertical direction, GIs, and tidal forcing by a binary companion. Because GIs are dominated by global modes (Pickett et al. 2003; Mejia et al. 2005; Boley et al. 2006), treating the disk as an alpha disk (e.g., Osorio et al. 2003) is likely to misrepresent the evolution of the system.
Even though our models are directly applicable only to low mass stars, our main conclusions may well apply to Herbig Ae disks, given their similarity to T Tauri disks, as found by FUSE (e.g., Grady et al. 2006). However, it is probably inappropriate to extend the scaling to more massive star/disk systems like Herbig Be stars. Recent observations (e.g., Monnier at al. 2005, Mottram at al. 2007) suggest distinctly different accretion scenarios between the two groups.
Finally, our results indicate that, regardless of whether net cooling is dominated by optically thick or thin regions, cooling times are too long to permit fragmentation. Convection is suppressed during all phases of evolution of the irradiated disks (except the L1551-like disk), and, even if convection did occur in the irradiated cases as it did in the axisymmetric phase of No-Irr, it is not expected to lower the cooling times enough for fragmentation. All energy must ultimately be radiated away (Paper III, Rafikov 2006). Moreover, we are confident that our code allows for convection under the appropriate conditions, inasmuch as we have tested it against analytic problems (see Boley et al. 2007c).
## 5 Conclusion
We have presented 3D radiative hydrodynamics simulations of disks around young low-mass stars demonstrating that infrared irradiation by a surrounding envelope significantly affects the occurrence and behavior of gravitational instabilities. By selectively weakening high-order structure, mild irradiation enhances the global character of the resulting spiral waves and increases mass inflow rate. On the other hand, strong irradiation suppresses gravitational instabilities entirely. In all cases, irradiation tends to inhibit disk fragmentation and protoplanetary clump formation. Future simulations require more detailed treatment of the envelope structure, reprocessing of starlight, and the effects of accretion onto the disk from the envelope. Improvements in the gas equation of state (Boley et al. 2007a), better algorithms for radiative transfer (Boley et al. 2007c), consideration of other forms of irradiation, and inclusion of the effects of binary companions (e.g., Nelson 2000, Mayer et al. 2005, Boss 2006) are also needed. Work by our group along these various lines is planned or underway.
_Acknowledgments._ This work was supported in part by NASA Origins of Solar Systems grants NAG5-11964 and NNG05GN11G and Planetary Geology and Geophysics grant NAG5-10262. We thank S. Basu, A.P. Boss, N. Calvet, C.F. Gammie, L. Hartmann, R.E. Pudritz, and R. Rafikov for useful comments and discussions related to this work, and P. D'Alessio for providing mean molecular weight and dust opacity tables. We especially would like to thank the anonymous referee whose comments led to substantial improvements in the presentation of our results. K.C. and A.C.B. acknowledge the generous support of a CITA National Postdoctoral Fellowship and a NASA Graduate Student Researchers Fellowship, respectively. This work was supported in part by systems obtained by Indiana University by Shared University Research grants through IBM, Inc. to Indiana University.
## References
* (1) Adams, F. C., Lada, C. J., & Shu, F. H. (1987) ApJ 312, 788
* (2) (1988) ApJ 326, 865
* (3) Anglada, G., Rodriguez, L. F., Osorio, M., Torrelles, J. M., Estalella, R., Beltran, M. T & Ho, P. T. P. (2004) ApJ 605, L137
* (4) Balbus, S. A. & Papaloizou, J. C. B. (1999) ApJ 521, 650
* (5) Banerjee, R., Pudritz, R. E., & Holmes, L. (2004) MNRAS 355, 248
* (6) Banerjee, R., & Pudritz, R. E. (2006) ApJ 641, 949
* (7) Bodenheimer, P., Yorke, H., Rozyczka, M., & Tohline, J.E. (1990) ApJ 355, 651
* (8) Boley, A. C., & Durisen, R. H. (2006) ApJ 641, 534
* (9)* (1999) Boley, A. C., Mejia, A. C., Durisen, R. H., Cai, K., Pickett, M. K., & D'Alessio, P. (2006) ApJ 651, 517 (Paper III)
* (2007a) Boley A. C., Hartquist, T. W., Durisen, R. H., & Michael, S. (2007a) ApJ 656, L89
* (2007b) Boley A. C., Hartquist, T. W., Durisen, R. H., & Michael, S. (2007b) ApJ 660, 175
* (2007c) Boley, A.C., Durisen, R.H., Nordlund, A, & Lord, J. (2007c) ApJ, submitted (astph 0704.2532).
* (2001) Boss, A. P. (2001) ApJ 563, 367
* (2002) --. (2002) ApJ 576, 462
* (2003) --. (2003) ApJ 599, 577
* (2004) --. (2004) ApJ 610, 456
* (2005) --. (2005) ApJ 629, 535
* (2006a) --. (2006a) ApJ 641, 1148
* (2006b) ApJ 643, 501
* (2006) Cai, K., (2006), Ph.D. Dissertation, Indiana University
* (2006) Cai, K., Durisen, R. H., Michael, S., Boley, A. C., Mejia, A. C., Pickett, M. K., & D'Alessio, P. (2006) ApJ 636, L149
* (1994) Calvet, N., Hartmann, L., Kenyon, S.J., & Whitney, B.A. (1994) ApJ 434, 330
* (2001) Chiang, E. I., Joung, M. K., Creech-Eakman, M. J., Qi, C., Kessler, J. E., Blake, G. A. van Dishoeck, E. F. (2001) ApJ 547, 1077
* (1997) Chick, K.M., & Cassen, P. (1997) ApJ 477, 398
* (1997) D'Alessio, P., Calvet, N., & Hartmann, L. (1997) ApJ 474, 397
* (2001) --. (2001) ApJ 553, 321
* (1998) D'Alessio, P., Canto, J., Calvet, N., & Lizano, S. (1998) ApJ 500, 411
* (2004) Desch, S. J. (2004) ApJ, 608, 509
* (1989) Durisen, R. H., Yang, S., Cassen, P., Stahler, S. W. (1989) ApJ 345, 959Durisen, R. H., Cai, K., Mejia, A. C., & Pickett, M. K. (2005) Icarus 173, 417
* (2007) Durisen, R. H., Boss, A. P., Mayer, L., Nelson, A. F., Quinn, T., & Rice, W. K. M., (2007) in Protostars and Planets V, B. Reipurth, D. Jewitt, and K. Keil (eds.), University of Arizona Press, Tucson, p. 607.
* (2003) Eisner, J. A., & Carpenter J. M. (2003) ApJ 598, 1341
* (2006) -----. (2006) ApJ 641, 1162
* (2003) Fleming, T., & Stone, J. M. (2003) ApJ, 585, 908
* (2004) Fromang, S., Balbus, S. A., Terquem, C., & De Villiers, J.-P. (2004) ApJ 616, 364
* (1996) Gammie, C.F. (1996) ApJ 457, 355
* (2001) Gammie, C.F. (2001) ApJ 553, 174
* (2006) Grady, C. A., Williger, G. M., Bouret, J.-C., Roberge, A., Sahu, M., Woodgate, B. E., (2006) in Astrophysics in the Far Ultraviolet: Five Years of Discovery with FUSE ASP Conference Series, Vol. 348, eds. G. Sonneborn, H. Moos, & B.-G. Andersson, APS, San Francisco, p. 281
* (2003) Haghighipour N., & Boss A. P. (2003) ApJ 583, 996
* (2006) Hartmann, L., D'Alessio, P., Calvet, N., & Muzerolle, J. (2006) ApJ 648, 484
* (2000) Imamura, J. N., Durisen, R.H., & Pickett, B.K. (2000) ApJ 528, 946
* (2003) Johnson, B. M., & Gammie, C. F. (2003) ApJ 597, 131
* (1993) Kenyon, S. J., Calvet, N., & Hartmann, L. (1993a) ApJ 414, 676
* (1987) Kenyon, S. J. & Hartmann, L. (1987) ApJ 323, 714
* (1993) Kenyon, S. J., Whitney, B.A., Gomez, M., & Hartmann, L. (1993b) ApJ 414, 773
* (1996) Laughlin, G., & Rozyczka, M. (1996) ApJ 456, 279
* (2005) Lim, J., & Takakuwa, S. (2005) JKAS 38, 237
* (2004) Lodato, G. & Rice, W. K. M. (2004) MNRAS 351, 630
* (2005) Lodato, G., & Rice, W. K. M. (2005) MNRAS 358, 1489
* (2000) Looney, L. W., Mundy, L.G., & Welch, W.J., (2000) ApJ 529, 477
* (2003)* (2003) 255
* (2005) Matzner, C. D. & Levin, Y. (2005) ApJ 628, 817
* (2002) Mayer, L., Quinn, T., Wadsley, J., Stadel, J. (2002) Science 298, 1756
* (2004) ----. (2004) ApJ 609, 1045
* (2005) Mayer, L., Wadsley, J., Quinn, T., Stadel, J. (2005) MNRAS 363, 641
* (2005) Monnier, J. D. et al. (2005) ApJ 624, 832
* (2007) Mottram, J. C., Vink, J. S., Oudmaijer, R. D., Patel, M. (2007) MNRAS 377, 1363
* (2004) Mejia, A.C. (2004), Ph.D. Dissertation, Indiana University
* (2005) Mejia, A. C., Durisen, R. H., Pickett, M. K., & Cai, K. (2005) ApJ 619, 1098 (Paper II)
* (1999) Miroshnichenko, A., Iverzic, Z., Vinkovic, D., & Elitzur, M. (1999) ApJ 520, L115
* (2001) Motte, F., & Andre, P. (2001) A&A 365, 440
* (2007) Najita, J. R., Carr, J. S., Glassgold, A. E., & Valenti, J. A. (2007), in Protostars and Planets V, B. Reipurth, D. Jewitt, and K. Keil (eds.), University of Arizona Press, Tucson, 507.
* (1993) Natta, A. (1993) ApJ 412, 761
* (2000) Nelson, A. F. (2000) ApJ 537, L65
* (2006) Nelson, A. F. (2006) MNRAS 373, 1039
* (2003) Osorio, M., D'Alessio, P., Muzerolle, J., Calvet, N., & Hartmann, L. (2003) ApJ 586, 1148
* (1997) Pickett, B. K., Durisen, R. H., & Link, R. (1997) Icarus 126, 243
* (1998) Pickett, B.K., Cassen, P.M., Durisen, R.H., & Link, R. (1998) ApJ 504, 468
* (2000) ----. (2000) ApJ 529, 1034
* (2003) Pickett, B.K., Mejia, A.C., Durisen, R.H., Cassen, P.M., Berry, D.K., & Link, R. (2003) ApJ 590, 1060 (Paper I)
* (2005) Rafikov, R. R. (2005) ApJ 621, L69
* (2006)* (2007) 262, 642 (astro-ph/0609549)
* (2005) Rice, W. K. M. Lodato, G., & Armitage, P. J. (2005) MNRAS 364, L56
* (2004) Rice, W. K. M., Lodato, G., Pringle, J. E., Armitage, P. J., & Bonnell, I. A. (2004) MNRAS 355, 543
* (2006) ----. (2006) MNRAS 372, L9
* (1998) Rodriguez, L. F. et al. (1998) Nature 395, 355
* (2005) Rodriguez, L. F., Loinard, L., & D'Alessio, P. (2005) ApJ 621, L133
* (1992) Romeo, A. B. (1992) MNRAS 256, 307
* (1973) Shakura, N. I. & Sunyaev, R. A. (1973) A&A 24, 337
* (1994) Stahler S. W., Korycansky, D. G., Brothers, M. J., & Touma, J. (1994) ApJ 431, 341
* (1992) Stepinski, T. F. (1992) Icarus, 97, 130
* (1964) Toomre, A. (1964) ApJ 139, 1217
* (1997) Truelove, J.K., Klein, R.I., McKee, C.F., Holliman, J.H., II, Howell, L.H., & Greenough, J.A. (1997) ApJ 489, L179
* (2005) Vorobyov, E. I, & Basu, S. (2005) ApJ 633, L137
* (1977) Weidenschilling, S. J. (1977) MNRAS 180, 57
* (2005) Wolf, S., & D'Angelo, G. (2005) ApJ 619, 1114 | It is generally thought that protoplanetary disks embedded in envelopes are more massive and thus more susceptible to gravitational instabilities (GIs) than exposed disks. We present three-dimensional radiative hydrodynamics simulations of protoplanetary disks with the presence of envelope irradiation. For a disk with a radius of 40 AU and a mass of 0.07 M\\({}_{\\oplus}\\) around a young star of 0.5 M\\({}_{\\oplus}\\), envelope irradiation tends to weaken and even suppress GIs as the irradiating flux is increased. The global mass transport induced by GIs is dominated by lower-order modes, and irradiation preferentially suppresses higher-order modes. As a result, gravitational torques and mass inflow rates are actually increased by mild irradiation. None of the simulations produce dense clumps or rapid cooling by convection, arguing against direct formation of giant planets by disk instability, at least in irradiated disks. However, dense gas rings and radial mass concentrations are produced, and these might be conducive to accelerated planetary core formation. Preliminary results from a simulation of a massive embedded disk with physical characteristics similar to one of the disks in the embedded source L1551 IRS5 indicate a long radiative cooling time and no fragmentation. The GIs in this disk are dominated by global two and three-armed modes. | Write a summary of the passage below. |
arxiv-format/0706_4128v1.md | # A review of wildland fire spread modelling,
1990-present
2: Empirical and quasi-empirical models
A.L. Sullivan
## Introduction
### History
An empirical model is one that is based upon observation and experiment and not on theory. Empiricism has formed the basis for many of the scientific and technological advances in recent centuries and generally provides the benchmark against which theory is tested. The study of fire and combustion in general was mainly an empirical endeavour, directed primarily toward application of combustion to industrial processes (for example, the industrial revolution of the late 1700s-1800s), until early in the previous century when the physical or theoretical approach had matured to the point of providing significant advances in understanding and prediction. The development of physical understanding of other forms of combustion (i.e. unintentional or uncontrolled fire) in general, and wildland fires in particular, did not occur, however, until only very recently (in the last few decades) (Sullivan, 2007).
While there had always been great general interest-prevention, control, prediction-in unintentional fire in urban settings (Williams, 1982), such as the Great Fire of London in 1666 or the Chicago Fire of October 1871, unintentional fire in wildlands received much less attention, mainly due to the relatively little impact such fires have on the general populace. The study of the behaviour of fires in wildland regions has traditionally been driven by the needs of those practitioners involved in wildland resource management-foresters for the most part-for whom understanding this natural phenomenon was critical to the success of their work.
Despite the fact that practically no region of the world (except for Antarctica) is free from such fires, much of the work in this field was galvanised in the United States following the devastating 1910 fires in the north-west (Pyne, 2001), where workers such as Hawley (1926) and Gisborne (1927, 1929) pioneered the notion that understanding of the phenomenon of wildland fire, and prediction of the danger posed by a fire, could be gained through measurement and observation and theoretical consideration of the factors that might influence such fires. Curry and Fons (1938, 1940), and Fons (1946) brought a rigorous physical approach to the measurement and modelling of the behaviour of wildland fires that set the benchmark for wildland fire research for decades following.
In addition to the work conducted in the US through the Federal US Forest Service and State agencies, other countries became increasingly involved in wildland fire research, primarily through their forest services-the Canadian Forest Service, and the Commonwealth Forestry and Timber Bureau (later absorbed into the Commonwealth Scientific and Industrial Research Organisation (CSIRO)) in conjunction with various state authorities in Australia-although many other countries, such as South Africa, Spain, Russia, France and Portugal to name a few, have also had significant impact on wildland fire research.
Since the early 1990s, European Union countries have committed significant funds towards wildland fire research, resulting in a boom period for this research, mainly in Mediterranean countries, and a major shift in focus away from the pioneering three (US, Canada and Australia).
During the past two decades, the direction of much of the wildland fire research has been toward the use of fire as a resource management tool in the form of hazard reduction burning or the study of ecological effects of fire (e.g. Gill et al. (1981); Goldammer and Jenkins (1990); Abbott and Burrows (2003)).
## Empirical modelling
The focus of empirical modelling of wildland fire in the past has been on the determination of the key characteristics used to describe the behaviour of the fire. These generally have been the rate of forward spread (ROS) of the head fire (that portion of the fire perimeter being blown downwind and normally of much greater intensity than the rest of the fire perimeter), the height of the flames, the angle of the flames, and the depth of flames at the head, although other characteristics such as rate of perimeter or area increase may also be of some interest.
While observations of wildfires or fires lit intentionally for other purposes (such as hazard reduction or prescribed fires) have been used in the development of empirical models of fire behaviour, the predominant method has been the lighting of 'experimental' fires-fires whose only purpose is that of an experimental nature. This method can be divided into four parts. Firstly, the characterisation and quantification of the fuel and terrain in which the fire will be lit (the slowly varying variables, which have included fuel load, fuel height, moisture content, bulk density, combustion characteristics, slope, etc.). Secondly, the observation and measurement of the atmospheric environment (the quickly varying variables: wind speed and direction, air temperature, relative humidity, etc.). Thirdly, the lighting, observation and measurement of the fire itself (its speed, spread, flame geometry, combustion rate, combustion residues, smoke, etc.). Fourthly, the statistical correlation between any and all of the measured quantities in order to produce the model of fire behaviour. Many workers have chosen to limit or control the possible natural variation in many quantities by conducting experimental fires in laboratory conditions, which aids in the analysis of such fires.
The primary use of such models has been to estimate the likely spread in the direction of the wind (and potential for danger to firefighter safety) for suppression planning purposes, much of which has traditionally been conducted in the form of simple 'back of the envelope' calculations for plotting on a wall map. Due to this simple need, empirical fire spread models have traditionally been one-dimensional models in which the variable predicted is the rate of forward spread of the head of the fire in the direction of the wind. The rather pragmatic nature of these models, their relatively straightforward implementation, their direct relation to the behaviour of real fires, and, perhaps most importantly, their development, for the most part, by forestry agencies for their own immediate use, have meant that empirical fire spread models have gained acceptance with wildland fire authorities around the world and to varying degrees form the basis for all operational fire behaviour models in use today.
## Operational models
In the United States, the quasi-empirical model of Rothermel (1972) forms the basis of the National Fire Danger Rating System (Deeming et al., 1977; Burgan, 1988) and the fire behaviour prediction tool BEHAVE (Andrews, 1986). This model is based on a heat balance model first proposed by Frandsen (1971) and utilised data obtained from wind tunnel experiments in artificial fuel beds of varying characteristics and from Australian field experiments of grassfires in a range of wind speed conditions to correlate fire behaviour with measured input variables. The model of Rothermel (1972) and associated systems have been introduced to a number of countries, particularly in Mediterranean Europe.
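For reference, the heat balance at the core of Rothermel's quasi-empirical formulation can be written in a single expression (in the notation of Rothermel (1972)):

\\[R=\\frac{I_{R}\\,\\xi\\,(1+\\phi_{w}+\\phi_{s})}{\\rho_{b}\\,\\varepsilon\\,Q_{ig}},\\]

where \\(R\\) is the ROS, \\(I_{R}\\) the reaction intensity, \\(\\xi\\) the propagating flux ratio, \\(\\phi_{w}\\) and \\(\\phi_{s}\\) the dimensionless wind and slope coefficients, \\(\\rho_{b}\\) the fuel bed bulk density, \\(\\varepsilon\\) the effective heating number and \\(Q_{ig}\\) the heat of pre-ignition. The numerator represents the heat flux propagated to unburnt fuel ahead of the front; the denominator, the heat required to bring that fuel to ignition.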
In Australia, the predominant operational fire spread prediction systems have been the McArthur Grassland (McArthur, 1965, 1966) and Forest (McArthur, 1967) Fire Danger Rating Systems (FDRS), and the Forest Fire Behaviour Tables for Western Australia (commonly called the Red Book) (Sneeuwjagt and Peet, 1985), based on the work of Peet (1965). Both McArthur's systems and the Red Book are purely empirical correlations of observed fire behaviour and measured fuel and environmental variables from mainly field experimental fires augmented by well-documented wildfires. More recently, the CSIRO Grassland Fire Spread Meter (GSFM) (CSIRO, 1997; Cheney and Sullivan, 1997) based on the empirical modelling of Cheney et al. (1998) has replaced the McArthur Grassland FDRS as the preferred tool for predicting fire behaviour in grasslands. This, too, is based on field experimentation and documented wildfire observations.
In Canada, the quasi-empirical Fire Behaviour Prediction (FBP) System (Forestry Canada Fire Danger Group, 1992) forms part of the Canadian Forest Fire Danger Rating System (CFFDRS) and is the culmination of 60 years of research effort in fuel moisture and fire behaviour (Van Wagner, 1998; Taylor and Alexander, 2006). Almost 500 fires were used in the construction of the FBP system, of which approximately 400 were field experiments, the remainder well-documented observations of prescribed and wild fires. The CFFDRS has been introduced and implemented in a number of countries, including New Zealand, Mexico and several countries of south-east Asia.
The main characteristic of all but the CSIRO GSFM is that these systems were based primarily on small (\\(<\\)1 ha) experimental or laboratory fires and augmented with wildfire observations3. The series of experiments upon which the CSIRO GSFM was based (Cheney et al., 1993) was the first to use experimental burning plots of which the smallest was 1 ha (See review of this model below).
Footnote 3: Another interesting characteristic is that the CSIRO GSFM model was the only one published in a peer-reviewed journal; all the others were published as technical reports by the associated organisations.
## Background
This series of review papers endeavours to comprehensively and critically review the extensive range of modelling work that has been conducted in recent years. The range of methods that have been undertaken over the years represents a continuous spectrum of possible modelling (Karplus, 1977), ranging from the purely physical (those that are based on fundamental understanding of the physics and chemistry involved in the behaviour of a wildland fire) through to the purely empirical (those that have been based on phenomenological description or statistical regression of fire behaviour). In between is a continuous meld of approaches from one end of the spectrum or the other. Weber (1991), in his comprehensive review of physical wildland fire modelling, proposed a system by which models were described as physical, empirical or statistical, depending on whether they accounted for different modes of heat transfer, made no distinction between different heat transfer modes, or involved no physics at all. Pastor et al. (2003) proposed model descriptions of theoretical, empirical and semi-empirical, again depending on whether the model was based on purely physical understanding, of a statistical nature with no physical understanding, or a combination. Grishin (1997) divided models into two classes, deterministic or stochastic-statistical. However, these schemes are rather limited given the combination of possible approaches, and, given that describing a model as semi-empirical or semi-physical is a 'glass half-full or half-empty' subjective issue, a more comprehensive and complete convention was required.
Thus, this review series is divided into three broad categories: Physical and quasi-physical models; Empirical and quasi-empirical models; and Simulation and Mathematical analogous models. In this context, a physical model is one that attempts to represent both the physics and chemistry of fire spread; a quasi-physical model attempts to represent only the physics. An empirical model is one that contains no physical basis at all (generally only statistical in nature), a quasi-empirical model is one that uses some form of physical framework upon which to base the statistical modelling chosen. Empirical models are further subdivided into field-based and laboratory-based. Simulation models are those that implement the preceding types of models in a simulation rather than modelling context. Mathematical analogous models are those that utilise a mathematical precept rather than a physical one for the modelling of the spread of wildland fire.
Since 1990 there has been rapid development in the field of spatial data analysis, e.g. geographic information systems and remote sensing. Following this, and the fact that there has not been a comprehensive critical review of fire behaviour modelling since Weber (1991), I have limited this review to works published since 1990. However, as much of the work that will be discussed derives or continues from work carried out prior to 1990, such work will be included much less comprehensively in order to provide context.
## Previous reviews
Many of the reviews that have been published in recent years have been for audiences other than wildland fire researchers and conducted by people without an established background in the field. Indeed, many of the reviews read like purchase notes by people shopping around for the best fire spread model to implement in their part of the world for their particular purpose. Recent reviews (e.g. Perry (1998); Pastor et al. (2003)), while endeavouring to be comprehensive, have offered only superficial and cursory inspections of the models presented. Morvan et al. (2004) take a different line by analysing a much broader spectrum of models in some detail and conclude that no single approach is going to be suitable for all purposes.
While the recent reviews provide an overview of the models and approaches that have been undertaken around the world, mention must be made of significant reviews published much earlier that discussed the processes in wildland fire propagation themselves. Foremost is the work of Williams (1982) which comprehensively covers the phenomenology of both wildland and urban fire, the physics and chemistry of combustion, and is recommended reading for the beginner. The earlier work of Emmons (1963, 1966) and Lee (1972) provides a sound background on the advances made during the post-war era. Grishin (1997) provides an extensive review of the work conducted in Russia in the 1970s, 80s and 90s. Chandler et al. (1983) and Pyne et al. (1996) provide a useful review of the forestry approach to wildland fire research, understanding and practice.
The first paper in this series discussed those models based upon the fundamental principles of the physics and chemistry of wildland fire behaviour. This particular paper will discuss those models based directly upon only statistical analysis of fire behaviour observations or models that utilise some form of physical framework upon which the statistical analysis of observations have been based. In this paper, particular distinction is made between observations of the behaviour of fires in the strictly controlled and artificial conditions of the laboratory and those observed in the field under more naturally occurring conditions.
The last paper in the series will focus upon models concerned only with the simulation of fire spread over the landscape and models that utilise mathematical conceits analogous to fire spread but which have no real-world connection to fire.
## Empirical models
The following sections identify and discuss those empirical and quasi-empirical surface-only fire spread models that appeared in the literature since 1990. It is interesting to note the observation of Catchpole (2000) that the majority of new models that have been developed in recent years have been the result of efforts to initially develop and validate local fuel models required for the implementation of the BEHAVE (based on Rothermel) fire behaviour prediction system. Many researchers obviously felt that it was far easier to start from scratch with a purpose built model than to try to retrofit their local conditions into an existing model. Table 1 summarises the empirical models discussed in this review.
Due to the varied nature of the empirical models presented here, including the fuels and weather conditions under which the data for the construction of the models were collected, the size and number of experimental fires and the purposes for which the models were developed, it is difficult to compare them side by side. One possible method is the relationship between rate of forward spread (ROS) and wind speed. Wind speed is widely accepted as being the dominant variable determining the forward speed of a fire front. The reasons for this are cause for significant debate, ranging from the reduction in angle of separation of flame to unburnt fuel to increased turbulent mixing of combustants. Regardless of the mechanics of the process, the empirical approach to modelling fire spread must cater for it, and this is manifested in the functional form chosen to represent the wind effect. Fuel moisture content (FMC) is also a key variable in determining rate of spread and is also discussed. Fire spread models for fuel layers other than surface fuels, such as crown fires or ground fires, are not covered.
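As an aid to such comparison, it is worth noting that the wind function in most of the empirical models reviewed below reduces to one of two generic forms,

\\[R=a\\,U^{b}\\qquad\\mathrm{(power\\ law)}\\qquad\\mathrm{or}\\qquad R=a\\,e^{b\\,U}\\qquad\\mathrm{(exponential)},\\]

where \\(U\\) is the wind speed and \\(a\\) and \\(b\\) are coefficients fitted to the particular data set. These are schematic forms only; the fitted exponents, and any threshold, moisture or fuel terms, differ between the models discussed in the following sections.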
### Canadian Forest Service (CFS) - Acceleration (1991)
While the Canadian Forest Service (CFS-accel) work (McAlpine and Wakimoto, 1991) is not a model of fire spread as such, it does address a major concern of fire spread, namely the acceleration in rate of fire spread from initiation. The assumption is that a fire will attain an equilibrium rate of spread for the prevailing conditions (the prediction of which is the primary aim of all fire spread prediction systems discussed here). The form of the function for the time to reach this equilibrium ROS is assumed to be exponential based on models proposed by Cheney (1981) and Van Wagner (1985) (as cited by McAlpine and Wakimoto (1991)).
29 experimental fires were conducted in a wind tunnel with a fuel bed 6.15 m long by 0.915 m wide consisting of _Pinus ponderosa_ needles or excelsior of varying fuel load and bulk density. Four wind speeds (0, 0.44, 1.33 and 2.22 m s\\({}^{-1}\\)) measured at mid-flame height were used. Temperature and relative humidity were held constant at 26.7\\({}^{\\circ}\\)C and 80%.
Equilibrium ROS was assumed to occur after 2.0 m of forward spread and determined using linear regression of averaged fire location and time measurements.
Acceleration was modelled as an allometric (power law) function asymptoting to the equilibrium ROS with two coefficients, one based on the equilibrium ROS (which eliminated differences in fuel properties and integrated all other burning condition variables) and the other on the wind speed. The model was found to well represent the laboratory data but observations of elapsed time to equilibrium ROS did not coincide with point source field observations of other authors which were much greater and also dependent on wind speed.
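The assumed exponential growth toward equilibrium, following the forms attributed to Cheney (1981) and Van Wagner (1985), can be sketched as

\\[R(t)=R_{eq}\\left(1-e^{-\\alpha t}\\right),\\]

where \\(R_{eq}\\) is the equilibrium ROS and \\(\\alpha\\) a rate coefficient. This is a schematic only; the published model recast the approach to \\(R_{eq}\\) as the allometric function described above, with its two coefficients expressed in terms of \\(R_{eq}\\) and wind speed.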
### CALM Spinifex (1991)
The Western Australia Department of Conservation and Land Management (CALM) Spinifex model (Burrows et al., 1991) was developed from 41 experimental fires conducted in predominantly spinifex (_Triodia basedowii_ and _Plectrachne schinzii_) fuels on relatively flat sand plains. These fires were lit using drip torches to create lines of fire up to 200 m long perpendicular to the wind direction. Fuel particle dimension and arrangement were measured for individual clumps; fuel distribution, quantity and moisture content were measured using line transect methods. Bare ground between clumps was also measured. Wind speed and direction, air temperature and relative humidity were measured at 10-min intervals. Wind speed ranged over 1.11 - 10 m s\\({}^{-1}\\) and FMC over 12-31%. Fire spread was measured using metal markers placed near the flame front at intervals of 1-4 mins and later surveyed. Fires were allowed to spread until they self-extinguished. The range of ROS was 0-1.53 m s\\({}^{-1}\\). Data gathered were analysed using multiple linear regression techniques.
Burrows et al. (1991) found that above a threshold wind speed zone (3.33-4.72 m s\\({}^{-1}\\)), in which flames are tilted sufficiently to bridge the gap between hummocks (Bradstock and Gill, 1993; Gill et al., 1995), the ROS varied with the square of the wind speed (R\\({}^{2}\\) = 0.85). Below the threshold wind speed zone, which depends on the percent cover of fuel (ratio of percentage of area covered by hummocks to bare ground), the fire does not spread. The higher the percent cover, the lower the threshold wind speed required. A lesser, negative, linear correlation was determined with FMC. Percent cover and air temperature were also found to influence the ROS but much less than either wind or FMC. Fuel load and other fuel characteristics were found not to be important.
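These correlations imply a piecewise model of the general form

\\[R=0\\quad\\mathrm{for}\\ U<U_{t},\\qquad R=a\\,U^{2}-b\\,m_{d}\\quad\\mathrm{for}\\ U\\geq U_{t},\\]

where \\(U\\) is the wind speed, \\(U_{t}\\) the threshold wind speed (which decreases as percent cover increases), \\(m_{d}\\) the dead FMC, and \\(a\\) and \\(b\\) placeholder coefficients rather than the values published by Burrows et al. (1991). The sketch is intended only to make the structure of the reported correlations explicit.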
### Canadian Forest Fire Behaviour Prediction (CFBP) System (1992)
The Canadian Forest Fire Behaviour Prediction (CFBP) System is a component of the Canadian Forest Fire Danger Rating System (CFFDRS) (Stocks et al., 1991), which also incorporates the Canadian Forest Fire Weather Index (CFWI) System. The CFFDRS is the result of continuing research into forest fire behaviour since the mid-1920s and has undergone several incarnations in that time. The current CFFDRS system came into being in the late 1960s in the form of a modular structure. The first major component to be completed was the CFWI in 1971, which provided a relative measure of fuel moisture and fire behaviour potential for a standard fuel type, and has been revised several times since its introduction (Van Wagner, 1987). While there have been several interim editions of the CFBP, the first of which appeared in 1984 (Lawson et al., 1985), it was not until 1992 that a final version of the prediction system was released (Forestry Canada Fire Danger Group, 1992; Taylor and Alexander, 2006) and thus is covered in this review.
The CFBP system, following on from the long-established Canadian approach to studying wildland fire, is based on the combined observations of nearly 500 experimental, prescribed and wild fires in 16 discrete fuel types covering 5 major groups: coniferous, deciduous, mixed wood, slash and grass fuels. The experimental work on which the system is based was conducted by individual researchers working in specific fuel types and locales across the country using a variety of methods and published in a variety of places (initially including the 1966 work of McArthur in Australian grasslands, later replaced by the data of Cheney et al. (1993) (Forestry Canada Fire Danger Group, 1992)). Alexander et al. (1991) provides an overview of the methods used since the 1960s to obtain the dataset from which the CFBP was derived, methods which, due to a number of factors (including technological improvements), evolved over the years. The result is a system constructed by a small group of dedicated researchers over a period of 20 years that has broad applicability to a wide range of fuels and climates.
Experimental burn plots varied in size from 0.1 ha up to 3 ha (Alexander et al., 1991), with the majority being less than 1 ha. Ignition methods included both point ignitions as well as line ignitions. Wind speed unaffected by the fire was measured at 10 m in the open (or converted to a 10-m in-the-open equivalent). Experiments were usually conducted in the late afternoon in order to attain maximum burning conditions for the day. ROS was normally measured by visual observations of fire passage over predetermined distances. For point ignition experiments, metal tags were placed at the head and flanks of the fire and surveyed afterwards.
The final version of the CFBP system works in conjunction with the CFWI system to determine an Initial Spread Index (ISI) for the standard fuel type (pine forest) and based solely on fine FMC and wind speed. The functions chosen for the effect of wind speed and fine FMC on the ISI are exponential (exponent 0.05039) for wind, and a complicated mix of exponential and power law (exponents -0.1386 and 5.31 respectively) for FMC (Van Wagner, 1987). No quantification of performance of these functions is given.
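Written out, the ISI of Van Wagner (1987) combines a wind function and a fine fuel moisture function multiplicatively:

\\[f(U)=e^{0.05039\\,U},\\qquad f(m)=91.9\\,e^{-0.1386\\,m}\\left(1+\\frac{m^{5.31}}{4.93\\times 10^{7}}\\right),\\qquad\\mathrm{ISI}=0.208\\,f(U)\\,f(m),\\]

where \\(U\\) is the 10-m open wind speed (km h\\({}^{-1}\\)) and \\(m\\) the fine fuel moisture content (%). The exponents are those quoted above; the remaining constants are reproduced here as commonly cited from Van Wagner (1987) and should be read as a sketch of the published formulation rather than a substitute for it.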
To predict ROS, the ISI is modified by a Build-up Index (BUI), which is a fuel-specific fuel consumption factor that includes fuel moisture. Predicted ROS is the headfire ROS on level terrain under equilibrium conditions, thereby implicitly including effects of acceleration and crowning (Forestry Canada Fire Danger Group, 1992). The effect of slope (Van Wagner, 1977b) and crown fire transition effects (Van Wagner, 1977a) then modify the basic ROS. Recent work of the International Crown Fire Modelling Experiment (Stocks et al., 2004) has investigated the behaviour of fully-developed crown fires (which is not covered in this review as it is outside the scope of surface fire spread).
### Button (1995)
Marsden-Smedley and Catchpole (1995b) presented a model for the prediction of ROS and flame height of fires in Tasmanian buttongrass moorlands, described as largely tree-less communities dominated by sedges and low heaths (Marsden-Smedley and Catchpole, 1995a). The behaviour of 64 fires (of which 44 were experimental fires, 4 test fires, 11 fuel reduction fires and 5 wildfires) at 12 sites was measured. Experimental burns were conducted on blocks of either 0.25 or 1.0 ha with ignition line lengths of 50 or 100 m respectively under a limited range of weather conditions. ROS was measured either by using metal tags thrown at different times or by timing the passage of flames past pre-measured locations. For experimental fires, wind speed and direction, temperature and relative humidity were measured at 10 m, and wind speed only at 1.7 m above ground level, all averaged over 1-3 min periods. Meteorological data for non-experimental fires were collected using handheld sensors at 1.7 m. Data ranged from 0.19-10 m s\({}^{-1}\) for wind speed, 8.2-96% for FMC, and 0-0.92 m s\({}^{-1}\) for ROS.
Marsden-Smedley and Catchpole (1995b) found surface wind speed, dead FMC and fuel age (time since last fire) to be the key variables affecting ROS, with wind being the dominant factor. Age and FMC each accounted for 15 to 20% of the observed variation in ROS. A power law with an exponent of 1.312 was used to describe the effect of wind, whereas both the FMC and fuel age were modelled as exponential functions (FMC decreasing, age increasing to a maximum at about 40 years). Rates of spread of the back and flank of the fires were found to be approximately 10% and 40% of the head ROS, respectively.
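The general shape of this model can be sketched as follows. Only the wind exponent (1.312) is taken from the text; the multiplier `a` and the coefficients `b` and `c` are hypothetical placeholders, not the published constants.

```python
import math

def button_ros(wind_ms, dead_fmc_pct, age_yr, a=0.1, b=0.03, c=0.1):
    """Illustrative buttongrass form (Marsden-Smedley and Catchpole,
    1995b): power-law wind, exponentially decreasing FMC effect and an
    age effect that saturates with time since fire. a, b and c are
    hypothetical coefficients."""
    wind_term = wind_ms ** 1.312            # exponent reported in the text
    fmc_term = math.exp(-b * dead_fmc_pct)  # ROS decreases with dead FMC
    age_term = 1.0 - math.exp(-c * age_yr)  # levels off at around 40 years
    return a * wind_term * fmc_term * age_term
```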
### CALM Mallee (1997)
McCaw (1997) conducted a large-scale field experiment in a _Eucalyptus tetragona_ mallee-heath community in south-west Western Australia. Shrubs \(<\) 1.0 m tall comprised more than half the plant species present. Burn plots 200 m \(\times\) 200 m were established in 20-year-old fuel in flat terrain. A semi-permanent meteorological site was set up 500 m from the experimental plots recording 30 min averages of temperature and relative humidity at 1.5 m and wind speed and direction at 2 m. During each experiment, mean wind speed and direction at a location up to 250 m upwind of the plot were measured at heights of 2 m and 10 m at 30 s intervals. FMC was measured using 5 samples of four fuel components (3 dead and 1 live) collected post-fire within 30 min of ignition. Wind speed at 10 m in the open ranged from 1.5-6.9 m s\({}^{-1}\), FMC 4-32%. Experimental fires were ignited using a vehicle-mounted flame thrower to establish a line perpendicular to the prevailing wind up to 200 m long. Fire spread was measured using buried electronic timers (placed on a 24-point grid) equipped with a fusible link that melted on exposure to flames. ROS ranged 0.13-0.68 m s\({}^{-1}\).
Isopleths representing the position of the fire front at successive time intervals were fitted to the grid of timer data for each plot using a contouring routine based on a distance-weighted least squares algorithm. ROS up to 0.67 m s\({}^{-1}\) and fireline intensities up to 14 MW m\({}^{-1}\) were recorded. Fires were found to spread freely when the FMC of the dead shallow litter layer beneath the low shrubs was \(<\) 8%. Forward ROS was modelled as a function of the wind speed in the open at 2 m and the FMC of the deep litter layer. These accounted for 84% of the variation in ROS. A power function (exponent 1.05) and an exponential (coefficient -0.11) were chosen to describe the wind (measured at 2 m) and FMC influences respectively (McCaw, 1997, page 142). Good agreement was found between the model and observed rates of spread of a limited number of prescribed and wild fires (up to ROS = 1.1 m s\({}^{-1}\)), although the observed ROS of a wildfire burning under extreme fire danger conditions was over-predicted by 30%.
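The reported functional form can be sketched as below; the exponent (1.05) and the FMC coefficient (-0.11) are from the text, while the multiplier `a` is a hypothetical placeholder.

```python
import math

def mallee_ros(wind2_ms, dead_fmc_pct, a=0.1):
    """Illustrative mallee-heath form (McCaw, 1997): near-linear power
    of the 2-m wind speed with exponential moisture damping."""
    return a * wind2_ms ** 1.05 * math.exp(-0.11 * dead_fmc_pct)
```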
### CSIRO Grass (1997)
The CSIRO Grassland Fire Spread Meter (CSIRO, 1997) is a cardboard circular slide rule that encapsulates the algorithms developed by Cheney et al. (1998) for fire spread in natural, grazed and eaten-out grassland pastures. These algorithms are based primarily on the results of experiments conducted in annual grasses of the Northern Territory with the aim of determining the relative importance of fuel characteristics on rate of forward spread of large unconstrained fires, particularly fuel load (Cheney et al., 1993), augmented by large experimental fires conducted in open woodland (Cheney and Gould, 1995) and detailed observations of 20 wildfires. 121 experimental fires were carried out on a flood plain in a range of fuel treatments under a variety of weather conditions (Cheney et al., 1993; Cheney and Gould, 1995) in prepared blocks ranging in size from 100 \(\times\) 100 m to 200 \(\times\) 300 m. These fires were predominantly lit from lines ranging in length from 30 to 175 m, although there were also a number of point ignitions, and allowed to burn freely. The range of fuel treatments included mowing and removing the cuttings, mowing and retaining the cuttings, or leaving the grass in its natural state. Two distinct grass species (_Eriachne burkittii_ and _Themeda australis_), of different height, bulk density and fineness, were present.
Fuel characteristics (height, load, bulk density, etc.) were measured on four transects through each plot approximately every 25 m. In addition to remote standard 10 m and 2 m meteorological stations, the wind speed at 2 m was measured at the corner of each plot and averaged for each ROS interval. FMC samples were taken before and after each fire. ROS and flame depth were measured from a series of rectified time-stamped oblique aerial photographs of each fire. Wind speed ranged from 2.9 - 7.1 m s\\({}^{-1}\\), FMC 2.7-12.1%, and ROS 0.29-2.07 m s\\({}^{-1}\\).
Cheney and Gould (1995) found the growth of the fires to be related to wind speed and the width of the head fire normal to the wind direction. They found that the width of fire required to achieve the potential quasi-steady ROS for the prevailing conditions increased with increasing wind speed, and that the time to reach this quasi-steady ROS was highly variable. ROS was found to depend on the initial growth of the fire, the pasture type (natural, grazed or eaten-out), wind speed and live and dead FMC. Utilising the notion of a potential quasi-steady ROS and a minimum threshold wind speed for continuous forward spread, Cheney et al. (1998) developed a model of fire spread assuming a width necessary to reach the potential ROS. This model uses wind speed, dead FMC and degree of curing to predict the potential (i.e. unrestricted) ROS for the prevailing conditions. Above a threshold of 5 km h\({}^{-1}\), the ROS is assumed to follow a power function of wind speed with an exponent less than 1 (0.844). This wind speed function is similar to that proposed by Thomas and Pickard (1961), in which a power function with an exponent of just less than 1 was found. Below the threshold, the ROS is linear with wind speed and dominated by dead FMC.
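A sketch of this piecewise structure is shown below. The 5 km h\({}^{-1}\) threshold and the 0.844 exponent are from the text; the other coefficients follow the natural-pasture fit as commonly quoted from Cheney et al. (1998) and should be checked against that source, and the moisture and curing effects are reduced here to simple multiplicative factors.

```python
def grass_ros_kmh(wind10_kmh, moisture_factor=1.0, curing_factor=1.0):
    """Piecewise wind response of the CSIRO grassland model (sketch).

    Below the 5 km/h threshold ROS is linear in wind and dominated by
    dead FMC; above it a power law with exponent 0.844 applies.
    """
    if wind10_kmh < 5.0:
        base = 0.054 + 0.269 * wind10_kmh   # linear, FMC-dominated regime
    else:
        base = 1.4 + 0.838 * (wind10_kmh - 5.0) ** 0.844
    return base * moisture_factor * curing_factor  # potential ROS (km/h)
```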
### Heath (1998)
A cooperative research effort from a number of Australasian organisations, Heath (Catchpole et al., 1998a) utilises observations of 133 fires (comprising a mix of experimental (95), prescribed (22) and wild (16) fires) conducted in mixed heathland (heath and shrub) fuels. This includes 48 experiments conducted by Marsden-Smedley and Catchpole (1995b) in buttongrass. Only experimental and prescribed fires were used in model development; wildfire observations were used for validation.
In mixed heathland (comprising heath, scrub and gorse in New Zealand and mixed species including Banksia, Hakea and Allocasuarina in Australia), fuel age ranged from 5-25 years. Fires were lit as lines of unstated length on slopes \\(<\\) 5\\({}^{\\circ}\\). Due to the disparate nature of the researchers involved, methods for measuring variables varied from experiment to experiment. Wind speed was generally measured by handheld anemometry at 2 m at 20 s intervals and averaged over the life of the fire. Wind speed ranged from 0.11-10.1 m s\\({}^{-1}\\) and ROS 0.01-1.00 m s\\({}^{-1}\\). Fuel load does not appear to have been measured but fuel height was. FMC was measured in some cases and modelled in others using pre-established functions based on air temperature and relative humidity.
Wind speed was found to account for 53% of the variation in ROS. Aerial dead fuel (i.e. those fuels not in contact with the ground) FMC was found not to be significant. Fuel height was highly significant and with wind accounted for 70% of the variation in ROS. A power function of wind speed (exponent 1.21) was used to describe this variation. A power function was also used for fuel height (exponent 0.54).
The model was found to perform reasonably well for the selection of wildfires, considering the paucity of available data and the necessary assumptions about the fuel characteristics involved (fuel height, moisture, etc.), but could be improved with more variables. The wind power function fails for zero wind but was found to fit the data better than an exponential growth function.
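The Heath functional form can be sketched as follows; the exponents reported above (1.21 for wind, 0.54 for fuel height) are retained, while the multiplier `a` is hypothetical. As noted, the power form necessarily returns zero ROS at zero wind.

```python
def heath_ros(wind2_ms, fuel_height_m, a=0.1):
    """Illustrative heathland form (Catchpole et al., 1998a):
    ROS ~ U^1.21 * H^0.54, with a hypothetical multiplier a."""
    return a * wind2_ms ** 1.21 * fuel_height_m ** 0.54
```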
### PortShrub (2001)
Fernandes (2001, 1998) presented a model developed from field experiments and observations of prescribed burns conducted in four different types of shrubland in flat terrain of Portugal. He found that Rothermel (1972) did not predict observed ROS well. 29 fires were conducted on flat terrain (\(<\) 3\({}^{\circ}\) slope) in gorse, low heath, tall heath and tall heath/tree mix. Fine aerial live and dead FMC was sampled prior to each burn. Meteorological variables (wind speed, air temperature, relative humidity) were measured at 2 m in the open using either a fixed weather station placed near the burn plot or upwind with handheld instruments. Fires were lit as lines of length 10 m in experimental fires and 100 m in prescribed burns. ROS was measured by recording the time of arrival of the head fire at reference locations. Wind speed ranged 0.28-7.5 m s\({}^{-1}\), FMC 10-40% and ROS 0.01-0.33 m s\({}^{-1}\).
ROS was significantly correlated with wind speed (1% level) and less so with RH, temperature, and aerial dead FMC (5%). Other fuel characteristics were also found to affect ROS but were strongly intercorrelated and thus could not be separated, however, preference was given to fuel height. The initial model found a power law (exponent 1.034) for wind speed. However, as the model predicted no ROS in zero wind, an exponential function (coefficient 0.092) was subsequently incorporated. The final model, with an exponential decay function for dead FMC (coefficient -0.067) and power function (exponent 0.932) for fuel height, improved the overall performance of the model (R\\({}^{2}\\) = 0.91). The model was also found to predict well the data sets of other authors and be in close agreement with other field studies (e.g. Marsden-Smedley and Catchpole (1995b); Cheney et al. (1998); Catchpole et al. (1998a)).
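The final PortShrub form can be sketched as below; the wind coefficient (0.092), FMC coefficient (-0.067) and height exponent (0.932) are from the text, and the multiplier `a` is hypothetical. Note that the exponential wind term gives a non-zero ROS at zero wind, which is what motivated its adoption.

```python
import math

def portshrub_ros(wind2_ms, dead_fmc_pct, fuel_height_m, a=0.05):
    """Illustrative final shrubland form (Fernandes, 2001)."""
    return (a * math.exp(0.092 * wind2_ms)       # exponential wind effect
              * math.exp(-0.067 * dead_fmc_pct)  # exponential FMC decay
              * fuel_height_m ** 0.932)          # near-linear height effect
```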
### CALM Jarrah I (1999)
Burrows (1999a, 1994) conducted a series of 144 laboratory experiments (54 wind-driven, 6 no wind, 13 backing, 34 with slope, 15 point ignition) using fallen leaves and twigs (\(<\) 6 mm) placed on a 4 m long by 2 m wide table set in a large shed. Wind was supplied by four domestic fans calibrated to give a desired wind speed over the fuel bed. FMC was varied by uncontrolled ambient conditions and wetting prior to burning. It ranged from 3 to 14%. Fires were lit along the 2 m upwind edge using a cotton wick soaked in methylated spirits and allowed to burn for 50 cm before measurements commenced. ROS was measured by recording the time taken to reach the end of the fuel bed. Wind was varied from 0.0 to 2.1 m s\({}^{-1}\) with mean 1.06 m s\({}^{-1}\). ROS ranged 0.002-0.075 m s\({}^{-1}\).
In wind-driven fires, no relationship between fuel load and forward ROS was found. Most variation in ROS was due to wind speed (correlation coefficient 0.94). ROS was negatively related to FMC (correlation coefficient -0.31). Backing ROS was found to be directly related to fuel load.
At wind speeds \(<\) 0.83 m s\({}^{-1}\), ROS was relatively insensitive to wind. Above this value, ROS was found to vary linearly with wind speed. However, a power function (exponent 2.22) was used to model the wind speed effect on ROS. An inverse linear function was used for FMC. This model was found to underpredict ROS \(>\) 3.33 m s\({}^{-1}\), with an error variance that increased with ROS.
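As a sketch of this fitted laboratory form: the wind exponent (2.22) is from the text, the 'inverse linear' FMC term is interpreted here as proportional to 1/FMC (one possible reading), and the multiplier `a` is hypothetical.

```python
def jarrah1_ros(wind_ms, fmc_pct, a=0.01):
    """Illustrative Jarrah I laboratory form (Burrows, 1999a):
    power-law wind response with an inverse FMC term (one reading
    of the 'inverse linear function' described in the text)."""
    return a * wind_ms ** 2.22 / fmc_pct
```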
### CALM Jarrah II (1999)
Burrows (1999b, 1994) studied four series of fire behaviour data obtained from field experiments and fuel reduction burns on flat to gently sloping terrain in Jarrah (_Eucalyptus marginata_) forest in south-west Western Australia to test Jarrah I and other models for forest fire spread. Fuel was characterised by a layer of dead leaves, twigs, bark and floral parts on the forest floor with low (\\(<\\)0.5 m, 30% cover) understorey of live and suspended vegetation. Plots were 100 m wide \\(\\times\\) 200 m long. 56 of 66 total plots were lit from lines of 50-100 m length, the remainder being point ignitions.
Historical (pre-fire) weather data (including rainfall, temperature, relative humidity, wind speed and direction at 2-hourly intervals) were obtained from nearby permanent weather stations. During each experiment a portable weather station approximately 50 m from the fire recorded wind speed at 1.5 m and 10 m, and temperature and relative humidity at 1.5 m, as 5-minute averages. FMC was measured at the time of ignition. Wind speed at 10 m in the open ranged 0.72-3.33 m s\({}^{-1}\) and FMC 3-18.6%.
ROS was measured by recording the time of arrival at a grid of predetermined locations, along with other fire characteristics, after first allowing the fire to spread 20 - 40 m (\\(\\simeq\\) 15 min) in order for it to attain a quasi-steady ROS for the prevailing conditions. The position of the flames in relation to the grid was mapped at 5 min intervals. ROS ranged 0.003-0.28 m s\\({}^{-1}\\).
Unlike the laboratory findings (Burrows, 1999a), Burrows here found a non-linear relation between wind speed and ROS. A power function (exponent 2.674) was selected. FMC was determined to also be a power function (exponent -1.495). Like the laboratory findings, fuel load was not found to correlate with ROS. The model was found to underpredict ROS of large wildfires burning under severe conditions.
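The field form can be sketched as below; both exponents are from the text and the multiplier `a` is hypothetical.

```python
def jarrah2_ros(wind_ms, fmc_pct, a=0.01):
    """Illustrative Jarrah II field form (Burrows, 1999b): strong
    power-law wind response (exponent 2.674) and power-law FMC
    damping (exponent -1.495)."""
    return a * wind_ms ** 2.674 * fmc_pct ** -1.495
```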
### Gorse (2002)
Baeza et al. (2002) conducted field experiments during spring and autumn in gorse shrublands of eastern Spain with the aim of developing a prescribed burning guide. Fuels were 3, 9 and 12 years old and were replicated 3 times resulting in a total of 9 fires. Plots were 33 m \\(\\times\\) 33 m and were burnt under low (\\(<\\) 1.39 m s\\({}^{-1}\\) at 2 m) wind, utilising headfire spread for the 3-year-old fuel and backing fires for the other two age classes. Meteorological data was recorded at 2 m at 15 min intervals. Fuel characteristics were recorded along 5 parallel transects 5 m in length. FMC was measured from 10 samples of the most abundant species collected prior to ignition and ranged from 22-85%, presumably including both live and dead fuels5. Ignition technique is not specified. ROS was measured by recording the time to travel a fixed distance within the plot and ranged 0.004-0.039 m s\\({}^{-1}\\).
Footnote 5: It should be noted that the authors dried their fuels at 80\({}^{\circ}\)C for 24 hours, a lower temperature than the generally accepted 104\({}^{\circ}\)C for 24 hours (e.g. Cheney et al. (1993)), perhaps resulting in lower than actual FMC values (see the discussion of measurement issues below).
It was found that FMC was the dominant factor affecting ROS in a linear manner (coefficient 0.487). The combination of heading and backing propagation negated any consistent effect of wind speed on ROS.
### PortPinas (2002)
Fernandes et al. (2002) developed a model for the behaviour of fires in maritime pine (_Pinus pinaster_) stands in northern Portugal under a range of fire weather conditions that occur outside the wildfire season, for the purpose of improving the understanding of prescribed fire for hazard reduction. Six study sites in mountainous terrain, with forests established by plantation or by regeneration following fire events and aged 14 to 41 years, were established. Fuel complexes were dominated by litter, shrubs or non-woody understorey (e.g. grass) types. Extensive destructive and non-destructive sampling to quantify the fuels was undertaken along transects in each experimental plot. Four strata of fine fuel layers were defined: shrubs, herbs and ferns, surface litter and upper duff. Experimental plots were square, 10-15 m wide, and defined by 0.3 to 1.2 m wide control strips assisted by a hose line during burning.
Wind speed was measured continuously at 1.7 m above ground approximately 10 m from each experimental plot. Three composite fuel moisture samples (one litter, one duff and one live) were collected at random locations prior to ignition.
94 experimental fires for fire behaviour studies were conducted when slope and wind direction were aligned within 20\({}^{\circ}\). Line ignition occurred 2 m from the windward edge to allow both forward and backing spread observations. Fire behaviour measurements used 1.5-m-high poles located at regular distances along the plot axis as reference points. ROS was determined by recording the time at which the base of the fire front reached each pole. Flame height and flame angle were estimated visually and used to calculate flame length. Wind speed ranged from 0.3-6.4 m s\({}^{-1}\), surface dead FMC ranged 8-56%, air temperature 2-22\({}^{\circ}\)C and relative humidity 26-96%. ROS ranged 0.004-0.231 m s\({}^{-1}\).
Fernandes et al. (2002) found that three existing models underestimated ROS with significant differences between predicted and observed values, as much as 8-fold in one case. Undertaking non-linear least-squares analysis, they found that slope and wind speed were the most significant variables, with dead FMC in a less significant role. A power law function with wind speed only (exponent 0.803) explained 45% of the variation in ROS. If wind speeds less than 0.83 m s\({}^{-1}\) were excluded, the correlation coefficient increased to 0.996. Slope alone explained 30% of the variation. The final model selected for litter-shrub fuels (the general case for maritime pine stands) involved wind speed (power law, exponent 0.868), dead surface FMC (exponential, coefficient -0.035), slope and understorey fuel height. Fuel height was selected as it could be considered a surrogate for the effect of the overall fuel complex structure on ROS. The model was then adapted through changes in constants to predict ROS in litter and non-woody understorey complexes. No assessment of the performance of this model was reported.
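The litter-shrub form can be sketched as below. The wind exponent (0.868) and FMC coefficient (-0.035) are from the text; the multiplier `a`, the linear treatment of fuel height and the reduction of slope to a simple multiplier are assumptions for illustration only.

```python
import math

def portpinas_ros(wind_ms, dead_fmc_pct, fuel_height_m,
                  slope_factor=1.0, a=0.1):
    """Illustrative litter-shrub maritime pine form (Fernandes et al.,
    2002): power-law wind, exponential surface FMC decay, a fuel
    height term and a slope term."""
    return (a * wind_ms ** 0.868
              * math.exp(-0.035 * dead_fmc_pct)
              * fuel_height_m
              * slope_factor)
```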
### Maquis (2003)
Bilgili and Saglam (2003) conducted a series of 25 field experiments in open, level shrubland of maquis fuel in southwestern Turkey. Average height of the fuel was 0.53 m and fires were conducted under a range of wind and fuel conditions. Each fire plot was 20 m wide by 30 m long. A meteorological station recorded air temperature, relative humidity, wind speed and precipitation at 1.8 m daily. Fuel characteristics were measured from random destructive sampling prior to the experiment series. Live and dead FMC was sampled immediately prior to ignition. During each fire, wind speed, temperature and relative humidity at 1.8 m were recorded at 1 min intervals using the automatic meteorological station. These were averaged over the period of fire spread. Wind speed ranged 0.02-0.25 m s\\({}^{-1}\\), FMC 15.3-27.7%.
Fires were lit with a drip torch along the upwind (20 m long) edge to quickly establish a line fire and were allowed to propagate with the wind across the plot. ROS was measured by recording time of arrival at a series of predetermined locations and ranged 0.01-0.15 m s\\({}^{-1}\\). ROS was strongly correlated with wind speed; a linear function explained 71% of observed variation. FMC was found not to have any significant effect on ROS, attributed to the narrow range studied. The final model used a linear function of wind speed (coefficient 0.495) and total fuel load, with an R\\({}^{2}\\) = 0.845.
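The fitted form can be sketched as below; only the wind coefficient (0.495) is from the text, while the fuel load coefficient `b` and intercept `c` are hypothetical placeholders.

```python
def maquis_ros(wind_ms, fuel_load, b=0.0, c=0.0):
    """Illustrative Maquis form (Bilgili and Saglam, 2003): ROS linear
    in wind speed with a total fuel load term. b and c are
    hypothetical coefficients."""
    return 0.495 * wind_ms + b * fuel_load + c
```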
## Quasi-empirical models
Where the data gathered from experimental observation is analysed using a physical framework for the functional relationships between dependent and independent variables, a quasi-empirical model results. The degree to which the physical framework controls the structure of the model can vary but the nature of the model is essentially based upon the observed data (which differentiates it from quasi-physical models which use data solely for parameterisation). Table 2 summarises the quasi-empirical models discussed below.
### TRW (1991)
Wolff et al. (1991) presented the results of laboratory experiments conducted in a purpose-built wind tunnel 1.1 m wide by 7 m long with a moveable ceiling. The fuel layer was vertical match splints (1.3-4.4 mm in diameter) set in a ceramic substrate. Wind speed varied from 0-4.7 m s\\({}^{-1}\\), ROS ranged from 0-0.007 m s\\({}^{-1}\\). The results confirmed the theoretical treatment conducted by Carrier et al. (1991), in which it was hypothesised that the dominant heat transfer mechanism in such a set-up would be a mix of convection and diffusion (i.e. 'confusion') heating that would result in a relationship in which the ROS would vary as the square root of the wind speed normalised by the fuel load. If radiation was the predominant preheating mechanism, it was hypothesised that the variation would be as the power of 1.5 rather than 0.5.
Wolff _et al._ found that not only did the width of the fuel bed play an important part in determining the ROS but so did the total width of the wind tunnel itself. The narrower the fuel bed, and the facility, the slower the ROS. It was suggested that a narrower fuel bed forced air away from the fuel bed due to drag considerations in the fuel. A series of experiments with tapering fuel beds and working sections confirmed this. If the fuel bed and working section were too narrow, the fire ceased to spread.
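The confirmed scaling relation can be sketched as below; the proportionality constant `k` is hypothetical.

```python
import math

def trw_ros(wind_ms, fuel_load, k=1.0):
    """Sketch of the 'confusion'-heating scaling of Carrier et al.
    (1991): ROS varying as the square root of wind speed normalised
    by fuel load. k is a hypothetical proportionality constant."""
    return k * math.sqrt(wind_ms / fuel_load)
```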
### NBRU (1993)
Beer (1993b, 1991) investigated the interaction of wind and fire spread utilising a series of 18 small-scale wind tunnel (length 40 cm, height 16 cm) experiments using a single row of match splints in wind ranging from 0.0 to 9 m s\\({}^{-1}\\). ROS ranged from 0.004-0.38 m s\\({}^{-1}\\). Rather than a single continuous function to describe the relationship between wind speed and ROS, Beer put forward the hypothesis that there exists a critical characteristic (threshold) wind speed that affects ROS with different wind speed functions above and below this value. Below the threshold Beer found a normalised (by the threshold wind speed) power function (exponent 0.5). Above the threshold, Beer found a normalised power function (exponent 3.0). Beer postulates that the choice of the value is related to the wind speed at which the wind shear is strong enough to generate flame billows and that this value corresponds to a mid-flame wind speed of 2.5 m s\\({}^{-1}\\). Above this value it is thought that the flames remain within the fuel bed rather than above it. Beer attempted to fit this model to observations of grassfire behaviour but could not.
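Beer's two-regime hypothesis can be sketched as a normalised wind factor, as below; applying this factor as a multiplier on a base no-wind ROS is an assumption for illustration.

```python
def nbru_wind_factor(wind_ms, critical_ms=2.5):
    """Beer's (1993b) two-regime, threshold-normalised wind response:
    exponent 0.5 below the critical wind speed, 3.0 above it. The
    2.5 m/s default is the mid-flame value Beer postulates."""
    u = wind_ms / critical_ms
    return u ** 0.5 if u < 1.0 else u ** 3.0
```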
Beer (1993a) further explored the effects of wind on fire spread through simplified (match splint) fuel. His extension of a simple geometric model of fire spread in no wind to include wind (based on the geometry of the wind-tilted flame and the distance between fuel elements), in which the ROS-wind function is a complicated solution to a set of equations determining the critical time for flame immersion of adjacent fuel elements, did not perform well. This was attributed to assumptions about the characteristics of the flame and a constant ignition temperature. Beer concludes that a single simple power law or exponential is unlikely to be a correct mathematical description of the ROS-wind speed relation.
### USFS (1998)
Catchpole et al. (1998b) conducted an extensive series of environmentally-controlled wind tunnel experiments and used the results, in conjunction with energy transfer considerations, to develop a spread model, USFS (United States Forest Service). 357 experimental fires were carried out on a fuel bed 8 m long by 1 m wide in a 12-m long wind tunnel of 3 m square cross section. Four fuels with different surface-area-to-volume ratios (two sizes of poplar excelsior, ponderosa pine needles and ponderosa pine sticks) were chosen to be reasonable approximations to natural fuel layers. Temperature and relative humidity were controlled to produce a range of FMC, 2% to 33% (although the majority of fires were carried out at ambient values of 27\\({}^{\\circ}\\)C and 20% RH giving an FMC range of 5-9%). Wind speed above the tunnel's boundary layer ranged from 0.0 to 3.1 m s\\({}^{-1}\\). Rate of spread was measured at 0.5 m intervals using photovoltaic diodes placed 25 mm above the fuel bed to record the time of arrival of the flame front.
Utilising the conservation of energy model of Frandsen (1971), Frandsen's (1973) effective heating number, a propagating flux model that is linear in packing ratio, and an exponential decay function for FMC, the authors built a model of fire spread very similar in its construction to that of Rothermel (1972), except that they used the heat of ignition of a unit mass of fuel (which comprises the heat of pyrolysis and heat of desiccation) rather than the heat of pre-ignition as used by Rothermel. A power function for wind was then fitted to the data and an exponent of 0.91 determined. Although a cubic polynomial function was found to better fit the data, the authors chose the power function as it was more consistent with data from wildfire observations.
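As a rough sketch of how the fitted wind function might sit inside such a structure (the full energy-balance construction is not reproduced here), with hypothetical coefficients `c_w` and `k_m`:

```python
import math

def usfs_ros(r0, wind_ms, fmc_frac, c_w=1.0, k_m=5.0):
    """Illustrative USFS-style structure: a no-wind spread rate r0
    scaled by a power-law wind factor (exponent 0.91 from Catchpole
    et al., 1998b) and an exponential FMC damping. c_w and k_m are
    hypothetical coefficients, not the published values."""
    return r0 * (1.0 + c_w * wind_ms ** 0.91) * math.exp(-k_m * fmc_frac)
```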
### Coimbra (2002)
Viegas (2002) presents a quasi-empirical model of fire spread that utilises the geometry of the fire perimeter to determine the forward spread rate. The central idea, previously proposed in Viegas (1998) and Viegas et al. (1998), is that a line fire lit at an angle to a slope or wind gradient undergoes a translation and rotation of the fireline in order to spread with the maximum rate in the direction of the gradient. Extensive laboratory experimentation utilising a double-axis tiltable fuel bed (1.6 m \(\times\) 1.6 m) for a range of forward/back and left/right slopes was used to develop the model. The fuel was _Pinus pinaster_ needles with an FMC determined by ambient conditions (ranging 10%-15%). 23 experimental fires were conducted, with 10 fires of varying slope and 13 fires of varying inclination. Viegas (2002) found a maximal rotation velocity at an inclination angle of 60\({}^{\circ}\) but was unable to convert this to a forward ROS. However, Viegas does develop a fire perimeter propagation algorithm in which the perimeter is treated as a continuous entity that will endeavour to align itself with the gradient, through this proposed rotation mechanism, to an angle of approximately 60\({}^{\circ}\). The translation and rotation hypothesis, however, ignores a basic observation of the evolution of flanking spread and instead assumes that spread at non-parallel angles to the slope or wind gradient must be driven by a headfire.
Viegas (2005) attempts to extend these ideas to describe the phenomenon of 'fire blow-up' based on the concepts of fire 'feedback effects'. Viegas proposes the existence of a positive dynamic feedback between the ROS of a fire and the flow velocity driving the fire such that the fire accelerates exponentially. He uses some of the results of experimental fires burnt in a "canyon", a doubly-sloped tray, in no wind and a range of canyon slopes and inclinations to parameterise his model, and the remainder to test it. Viegas treats all data for all slope and inclination combinations as independent and continuous. As a result, his model increases ROS exponentially, resulting in extremely rapid acceleration, which he describes as blow-up. However, when categorised by slope rather than treated continuously, the ROS data actually asymptote to a reasonable value in each case, which in most cases confirms the long-held rule of thumb of doubling the flat-ground ROS for every 10 degrees increase in slope (McArthur, 1967; Van Wagner, 1988). Viegas (2006) conducts a parametric study of this model and determines that fires in light and porous fuels are more likely to exhibit 'eruptive' behaviour than fires in heavy and compacted fuels.
Viegas's extrapolation of this model to fatal wildfire incidents is tenuous at best and really only demonstrates the widely accepted acceleration of fire up a slope. Other, more robust, theories of unexpected fire behaviour resulting in fatalities are probably more applicable (e.g. Cheney et al. (2001)).
### Nelson (2002)
Nelson (2002) extended the quasi-empirical work of Nelson and Adkins (1988) utilising the laboratory data of Weise and Biging (1997) to build a trigonometric model of fire spread that combines wind and slope effects into a single combined 'effective' wind speed.
The Nelson and Adkins (1988) model utilised the dimensional analysis of fire behaviour of Byram (1966), where three dimensionally homogeneous (i.e. dimensionless) relations were derived: 1) the square root of the Froude number, 2) a buoyancy number relating convective heat output to the rate of buoyancy production, and 3) the ratio of combustion time to a time characteristic of flame dynamics. Nelson and Adkins (1988) then used spread observations from 59 experimental fires (from a total of 44 laboratory and 21 field fires, with some observations excluded) and mixed and matched the dimensionless relations until they found a combination that gave a reasonable correlation. They derived a dimensionless form of ROS and wind speed, which, when fitted to the data and converted back to dimensional form, gave a power law relation between wind speed and ROS (exponent 1.51). As the maximum wind speed used to obtain the data was 3.66 m s\({}^{-1}\) and the maximum observed ROS was 0.271 m s\({}^{-1}\), Nelson and Adkins (1988) acknowledged the need for higher wind speed experiments. ROS was also found to be a function of fuel load (power function, exponent 0.25) and residence time (inversely proportional). FMC was considered to be accounted for in the estimates of fuel load and residence time.
Rather than the traditional approach used by McAlpine et al. (1991), where the equivalent wind speed for a slope-only ROS was determined, Nelson (2002) used the concept of vertical buoyant velocity and the slope angle component of this wind to construct a slope-induced wind which was then added vectorially to the ambient wind across the slope. Nelson extended the dimensional analysis of Nelson and Adkins (1988) to then determine ROS. The resultant equations, which do not apply to flanking or backing fires due to the assumption about convective heating through the Froude number, were then compared against the data of Weise and Biging (1997), gathered from 65 experiments in a portable tilting wind tunnel using vertical paper birch sticks as the fuel bed in a variety of wind and slope configurations, ranging from 0.0 to 1.1 m s\({}^{-1}\) and -30\({}^{\circ}\) to +30\({}^{\circ}\). The effective wind speed was found to correlate linearly with ROS.
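The vector-combination step can be sketched generically as below; this two-vector form is a simplification for illustration, not Nelson's full derivation.

```python
import math

def effective_wind(ambient_ms, slope_wind_ms, angle_deg=0.0):
    """Generic vector addition of the ambient wind and a slope-induced
    wind, illustrating the combination step in Nelson (2002).
    angle_deg is the angle between the two components."""
    phi = math.radians(angle_deg)
    return math.sqrt(ambient_ms ** 2 + slope_wind_ms ** 2
                     + 2.0 * ambient_ms * slope_wind_ms * math.cos(phi))
```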
## Discussion
### Wind speed function
As stated earlier, one method of comparing the structure of each of the above models is to examine the form of the functional relationship between ROS and wind speed chosen by the authors. Table 3 summarises the models discussed and the form of the wind speed function chosen. Also listed are the experimental bounds of the wind speed and ROS.
Only three of the models for which the wind function is given are not power functions: PortShrub and CFBP are exponential and Maquis is linear. Of the remaining models, three have exponents less than one: TRW, CSIRO Grass and USFS. The remaining wind functions all result in non-linear increases (Figure 1) in the ROS that will result in a spread rate greater than the wind speed driving it, which is unphysical (Beer, 1991). (While this is also the case for CFBP as illustrated here for wind speeds \(>\) 15 m s\({}^{-1}\), the ROS function is further modified in the CFFDRS system by fuel-specific functions which can reduce the predicted ROS below the wind speed.) The reason for this choice of function appears to be the desire by the modellers to fit data at low wind speeds (including zero). Many of the models had ranges of wind speed that were fairly low (\(<\) 3 m s\({}^{-1}\)).
The few models that were based on large ranges of wind speed in field experiments (with the exception of CALM Jarrah II) tended to result in power functions with exponents less than one. CSIRO Grass has the highest such exponent and is very similar in form to the linear function of Maquis over the given range. Fendell and Wolff (2001), in their brief review of the topic including a number of older models, found that wind power function exponents ranged from 0.42 (Thomas, 1967) to 2.67 (Burrows, 1999b).
There seem to be two key factors in the choice of functional relationship used to describe fire spread and wind speed. The first is the need to fit the function through the origin. In many cases, particularly laboratory experiments, zero wind speed is taken as the default state and thus any continuous function must not only fit the data of non-zero wind, but also of zero wind. This is discussed in greater detail below. The second is that for the most part, the range of wind speeds studied (again particularly in the laboratory) is very small. As can be seen in Figure 1, any function, be it cubic or very shallowly linear, performs rather similarly at low wind speeds (\\(<\\)1.5 m s\\({}^{-1}\\)).
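This convergence at low wind speeds is easy to demonstrate numerically. The short script below evaluates several of the exponents discussed; the scaling of each function to a common value at 1 m s\({}^{-1}\) is purely for comparison and does not correspond to any model's fitted coefficients.

```python
# Illustrative comparison of wind-response shapes, each scaled to pass
# through 1.0 at 1 m/s; only the exponents correspond to models in the
# text, the scaling is for display.
forms = {
    "linear":      lambda u: u,
    "power 0.844": lambda u: u ** 0.844,
    "power 1.21":  lambda u: u ** 1.21,
    "power 2.674": lambda u: u ** 2.674,
}
for u in (0.5, 1.0, 1.5, 3.0):
    row = ", ".join(f"{name}: {f(u):.2f}" for name, f in forms.items())
    print(f"u = {u} m/s -> {row}")
```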
It is interesting to note that in the full range of functions presented here, the nearly median wind function in Figure 1 (i.e. Heath) is the result of the combination of multiple datasets, experimental methods and authors, perhaps resulting in a middle ground of approaches. Many physical models of fire spread (e.g. Grishin (1984); Linn (1997)) have observed linear functional relationships between wind speed and ROS, suggesting that power law functions with exponents close to unity may have a more fundamental basis.
In their validation of the performance in Mediterranean shrub fuels of seven wildland fire spread models, including the CFBP and Rothermel, Sauvagnargues-Lesage et al. (2001) found that a model's performance is not related to the model's complexity and that even the most simple model (in this particular case a local fire officer's rule of thumb based on a linear discount of the wind speed) performed as well as more complicated models such as CFBP.
### Threshold wind speed
One important aspect differentiating the various choices of wind function, is the ideal of a continuous function that includes zero wind speed. It has been noted previously (Burrows et al., 1991) that fires in discontinuous fuels such as spinifex have a minimum wind speed required before forward spread is achieved. This notion was extended further by Cheney et al. (1998) to define a threshold wind speed at which fires spread forward _consistently_. The argument is that fires burning in low winds in the open respond to eddy fluctuations in the wind flow (resulting in near circular perimeter spread after a long period) and do not spread in a continuous consistent manner until the wind speed exceeds a certain threshold. Above this threshold, the fire spreads forward in a manner directly related to the wind speed.
The choice of threshold value then depends upon the method of measuring the wind speed (location, height, period, etc.) and the fuel type in which the fire is burning (taller fuels reduce the wind speed reaching the fire). Cheney et al. (1998) chose a 1.39 m s\({}^{-1}\) open wind speed threshold for open grasslands. Fernandes et al. (2002) found that wind speed explained more variation in ROS when wind speeds below 0.83 m s\({}^{-1}\) (at 1.7 m) were excluded, suggesting that at low wind speeds factors other than wind play a more significant role in determining ROS.
### Fuel moisture content function
Another method of comparing the various empirical and quasi-empirical models is to compare their fuel moisture content functions. Not all the models discussed here addressed the relation between fuel moisture content and rate of spread, and so the discussion here is not comprehensive. Figure 2 shows the various functions for those models that include fuel moisture content.
As with the wind function, there is a wide spread of functional forms used to describe the influence of FMC on ROS, perhaps reflecting modelling approaches, methods or personal choice. There appear to be three types of function representing the FMC/ROS relation: weakly linear (e.g. Gorse (normalised) or CALM Spinifex (normalised)), in which FMC plays a minor role in determining the ROS; strongly exponential (e.g. CALM Jarrah II and USFS), in which FMC plays a strong role until very low FMC values; and strongly linear or weakly exponential (which might be approximated to linear), in which the role of FMC is spread over a large range of values. The majority of models discussed here fall into the latter group.
The weakest of the linear models (Gorse (normalised)) is characterised by few experiments with a limited range of FMC values, which raises the issue of how many sample points are needed to properly inform functional choice and model validation; one could argue that 9 fires is simply not enough given the range of the uncontrolled variables.
The weakly exponential group of models, which includes strictly linear models (e.g. CALM Spinifex), appears to be the most robust in terms of range of FMC values and experimentation. It is interesting to note the similarity between the CALM Mallee and the CSIRO Grass models. The large difference in functionality between the strongly exponential and weakly exponential groups is interesting and may reflect differences introduced by the wind function modelling, as all models identified wind speed as the primary variable and FMC as the secondary.
### Measurement issues
All empirical science is limited by the ability to measure necessary quantities, to quantify the errors in those measurements, and then to relate those measurements to the phenomenon under investigation; wildland fire science is no different. Sullivan and Knight (2001) discussed the determination of the errors in measuring wind speed under a forest canopy some distance from an experimental fire and relating that measurement to measurements of fire spread. The issues of where to measure (location, height, in the open, under the canopy, etc.), how long to measure (instantaneous, period sampling, average, period of average, etc.), and how to correlate measurements with observations are complex and necessarily require approximations and simplification in order to be undertaken.
Similarly, destructive sampling of FMC has issues that complicate a seemingly simple quantity: the time of sampling (morning, afternoon, wetting period, drying period), the general location (in the open or under the canopy, in the sun, in the shade, in between, etc.), the specific location (surface litter fuels, profile litter fuels, mid-layer fuels), and the species of fuel (predominant fuels, non-predominant fuels, live, dead, etc.). Once the samples have been taken, there is then the issue of the best drying method for the particular samples to ensure a water-free weight, the best method of determining an average value for a plot, variance, error, etc.
Quantifying other factors such as fuel, again a seemingly simple task, relies upon knowledge of the mode of combustion of the fire and of which aspects of the fuel most influence that combustion and therefore the behaviour of the fire. These include the definition of fuel strata (which itself depends on the intensity of the fire and which parts of the fuel complex will be burning during the fire and thus contribute to the energy released by the fire), the structure of the fuel and the size of fuel particles important to fire behaviour at the front, flanks and behind the flame zone, the amount of fuel available, the amount consumed, the chronology of the consumption of the fuel, the mode of consumption, transport of burning fuel (i.e. firebrands), spatial and temporal variation of these fuel and fire characteristics, determination of averages and methods of averaging, determination of errors, etc. The list could continue. Other factors, such as air temperature and relative humidity, insolation, atmospheric stability, slope, soil type and moisture, have their own range of measurement difficulties, and are by no means the only quantities involved in quantifying the behaviour of wildland fires.
Laboratory-based experiments may aim to reduce the variation and control the errors in measurement of many of these quantities but are not immune to the difficulties of measurement.
### Field versus laboratory experimentation
Empirical or quasi-empirical modelling of fire behaviour has resulted in significant advances in the state of wildfire science and produced effective operational guides for determining the likely behaviour of wildfires for suppression planning purposes. Unlike physical or quasi-physical models of fire behaviour, these systems are simple, utilise readily available fuel and weather input data, and can be calculated rapidly. However, there is a significant difference between those models developed from field experimentation and those developed from laboratory experimentation.
Large-scale field experiments are costly, difficult to organise, and inherently have many of the difficulties associated with wildfire observations (e.g. spatial and temporal variation of environmental variables, uncontrolled variations, changing frames of reference and boundary conditions, etc.). Laboratory experiments can be cheap and safe, provide relatively repeatable conditions, and can limit the type and range of variations within variables and thus simplify analysis. Van Wagner (1971) raises the issue of laboratory versus field experimentation but avoids any categorical conclusions (perhaps because there are none), simply stating that some features of wildland fire behaviour are better suited to study in the controlled environment of the laboratory or could not be attempted in the field, while other features cannot be suitably replicated anywhere but in large-scale outdoor experiments.
Correct scaling of laboratory experiments (and field experiments for that matter) is vital to replicating the conditions expected during a wildfire. Byram (1966) and Williams (1969) conducted dimensional analysis of (stationary) mass fires in order to develop scaling laws to conduct scaled model experiments. Both found that scaling across all variables presents considerable difficulty and necessitates approximations, particularly in regard to atmospheric variables, which result in impractical lower limits (e.g. model forest fires \\(\\simeq\\) 6-16 m across (Byram, 1966), or gravitational acceleration \\(\\simeq\\) 10 g (Williams, 1969)). As a result, it is clear that any scaled experiments must take great care in drawing conclusions that are expected to be applicable at scales different from that of the experiments; not only may physical and chemical processes behave differently at different scales but the phenomena as a whole may behave differently.
The key difference between field-based and laboratory-based experimentation, in this author's opinion, is the assumptions about the nature of combustion (including heat transfer) that are required in order to design a useful small-scale laboratory experiment. That is, there is the presumption implicit in any laboratory experiment that there is sufficient understanding about the nature of fire such that key variables can be isolated and measured without regard to the fire itself.
One such aspect identified by Cheney et al. (1993) is the importance of the size and shape of the fire in determining resultant fire behaviour. Prior to this work, it was thought that the size of the fire played little part in determining the behaviour of a fire and thus the results of small experimental fires could be extrapolated to larger fires burning under less mild conditions. Other factors such as the physical structure of the fuel or moisture content of live fuels, or other hitherto unconsidered factors, may play less significant but important roles in explaining the unaccounted variation in ROS.
Field experiments on the other hand, by their very nature, are real fires and thus incorporate all the interactions that define wildland fire. This aspect holds considerable weight with end users, who endow such systems with a confidence that purely theoretical or laboratory-only-based models do not receive. As Morvan et al. (2004) concluded, no single approach to studying the behaviour of wildland fire will provide a complete solution and thus it is important that researchers maintain an open and broad paradigm.
## Acknowledgements
I would like to acknowledge Ensis Bushfire Research and the CSIRO Centre for Complex Systems Science for supporting this project; Jim Gould and Rowena Ball for comments on the draft manuscript; and the members of Ensis Bushfire Research who ably assisted in the refereeing process, namely Stuart Anderson, Miguel Cruz, and Juanita Myers.
## References
* Abbot and Burrows (2003) Abbot, I. and Burrows, N., editors (2003). _Fire in Ecosystems of South-West Western Australia: Impacts and Management_. Backhuys, Leiden, The Netherlands.
* Alexander et al. (1991) Alexander, M., Stocks, B., and Lawson, B. (1991). Fire behaviour in Black Spruce-lichen woodland: The Porter Lake project. Information Report NOR-X-310, Forestry Canada, Northwest Region, Northern Forestry Centre, Edmonton, Alberta.
* Andrews (1986) Andrews, P. L. (1986). BEHAVE: fire behavior prediction and fuel modeling system - BURN subsystem, part 1. General Technical Report INT-194, USDA Forest Service, Intermountain Forest and Range Experiment Station, Ogden, UT. 130 pp.
* Baeza et al. (2002) Baeza, M., De Luis, M., Raventos, J., and Escarre, A. (2002). Factors influencing fire behaviour in shrublands of different stand ages and the implications for using prescribed burning to reduce wildfire risk. _Journal of Environmental Management_, 65(2):199-208.
* Beer (1991) Beer, T. (1991). The interaction of wind and fire. _Boundary-Layer Meteorology_, 54(2):287-308.
* Beer (1993a) Beer, T. (1993a). Fire propagation in vertical stick arrays: The effects of wind. _International Journal of Wildland Fire_, 5(1):43-49.
* Beer (1993b) Beer, T. (1993b). The speed of a fire front and its dependence on wind speed. _International Journal of Wildland Fire_, 3(4):193-202.
* Bilgili and Saglam (2003) Bilgili, E. and Saglam, B. (2003). Fire behavior in maquis fuels in Turkey. _Forest Ecology and Management_, 184(1-3):201-207.
* Bradstock and Gill (1993) Bradstock, R. and Gill, A. (1993). Fire in semiarid, mallee shrublands - size of flames from discrete fuel arrays and their role in the spread of fire. _International Journal of Wildland Fire_, 3(1):3-12.
* Burgan (1988) Burgan, R. (1988). 1988 revisions to the 1978 National Fire-Danger Rating System. Research Paper SE-273, USDA Forest Service, Southeastern Forest Experiment Station, Asheville, North Carolina.
* Burrows (1994) Burrows, N. (1994). _Experimental development of a fire management model for jarrah (Eucalyptus marginata Donn ex Sm) forest_. PhD thesis, Dept of Forestry, Australian National University, Canberra.
* Burrows (1999a) Burrows, N. (1999a). Fire behaviour in jarrah forest fuels: 1. Laboratory experiments. _CALMScience_, 3(1):31-56.
* Burrows (1999b) Burrows, N. (1999b). Fire behaviour in jarrah forest fuels: 2. Field experiments. _CALM-Science_, 3(1):57-84.
* Burrows et al. (1991) Burrows, N., Ward, B., and Robinson, A. (1991). Fire behaviour in spinifex fuels on the Gibson Desert Nature Reserve, Western Australia. _Journal of Arid Environments_, 20:189-204.
* Byram (1966) Byram, G. (1966). Scaling laws for modeling mass fires. _Pyrodynamics_, 4:271-284.
* Carrier et al. (1991) Carrier, G., Fendell, F., and Wolff, M. (1991). Wind-aided firespread across arrays of discrete fuel elements. I. Theory. _Combustion Science and Technology_, 75:31-51.
* Catchpole (2000) Catchpole, W. (2000). _FIRE! The Australian Experience. Proceedings of the 1999 Seminar_, chapter The International Scene and Its Impact on Australia, pages 137-148. National Academies Forum.
* Catchpole et al. (1998a) Catchpole, W., Bradstock, R., Choate, J., Fogarty, L., Gellie, N., McArthy, G., McCaw, L., Marsden-Smedley, J., and Pearce, G. (1998a). Co-operative development of equations for heathland fire behaviour. In _Proceedings of III International Conference on Forest Fire Research, 14th Conference on Fire and Forest Meteorology, Luso, Portugal, 16-20 November 1998, Vol 1_, pages 631-645.
* Catchpole et al. (1998b) Catchpole, W., Catchpole, E., Butler, B., Rothermel, R., Morris, G., and Latham, D. (1998b). Rate of spread of free-burning fires in woody fuels in a wind tunnel. _Combustion Science and Technology_, 131:1-37.
* Chandler et al. (1983) Chandler, C., Cheney, P., Thomas, P., Trabaud, L., and Williams, D. (1983). _Fire in Forestry 1: Forest Fire Behaviour and Effects_. John Wiley & Sons, New York.
* Cheney (1981) Cheney, N. (1981). Fire behaviour. In Gill, A., Groves, R., and Noble, I., editors, _Fire and the Australian Biota_, chapter 5, pages 151-175. Australian Academy of Science, Canberra.
* Cheney and Gould (1995) Cheney, N. and Gould, J. (1995). Fire growth in grassland fuels. _International Journal of Wildland Fire_, 5:237-247.
* Cheney et al. (1993) Cheney, N., Gould, J., and Catchpole, W. (1993). The influence of fuel, weather and fire shape variables on fire-spread in grasslands. _International Journal of Wildland Fire_, 3(1):31-44.
* Cheney et al. (1998) Cheney, N., Gould, J., and Catchpole, W. (1998). Prediction of fire spread in grasslands. _International Journal of Wildland Fire_, 8(1):1-13.
* Cheney et al. (2001) Cheney, P., Gould, J., and McCaw, L. (2001). The dead-man zone-a neglected area of firefighter safety. _Australian Forestry_, 64(1):45-50.
* Cheney and Sullivan (1997) Cheney, P. and Sullivan, A. (1997). _Grassfires: Fuel, Weather and Fire Behaviour_. CSIRO Publishing, Collingwood, Australia.
* CSIRO (1997) CSIRO (1997). CSIRO Grassland Fire Spread Meter. Cardboard meter.
* Curry and Fons (1940) Curry, J. and Fons, W. (1940). Forest-fire behaviour studies. _Mechanical Engineering_, 62:219-225.
* Curry and Fons (1938) Curry, J. R. and Fons, W. L. (1938). Rate of spread of surface fires in the ponderosa pine type of California. _Journal of Agricultural Research_, 57(4):239-267.
* Deeming et al. (1977) Deeming, J. E., Burgan, R. E., and Cohen, J. D. (1977). The National Fire-Danger Rating System - 1978. General Technical Report INT-39, USDA Forest Service, Intermountain Forest and Range Experiment Station, Ogden, UT.
* Emmons (1963) Emmons, H. (1963). Fire in the forest. _Fire Research Abstracts and Reviews_, 5(3):163-178.
* Emmons (1966) Emmons, H. (1966). Fundamental problems of the free burning fire. _Fire Research Abstracts and Reviews_, 8(1):1-17.
* Fendell and Wolff (2001) Fendell, F. and Wolff, M. (2001). _Wildland Fire Spread Models_, chapter 6: Wind-Aided Fire Spread, pages 171-223. Academic Press, San Diego, CA, 1st edition.
* Fernandes (1998) Fernandes, P. (1998). Fire spread modelling in Portuguese shrubland. In _Proceedings of III International Conference on Forest Fire Research, 14th Conference on Fire and Forest Meteorology, Luso, Portugal, 16-20 November 1998, Vol 1_, volume 1, pages 611-628.
* Fernandes (2001) Fernandes, P. (2001). Fire spread prediction in shrub fuels in Portugal. _Forest Ecology and Management_, 144(1-3):67-74.
* Fernandes et al. (2002) Fernandes, P., Botelho, H., and Loureiro, C. (2002). Models for the sustained ignition and behaviour of low-to-moderately intense fires in maritime pine stands. In _Proceedings of the IV International Conference on Forest Fire Research_, Luso, Coimbra, Portugal, 18-23 November 2002, page 98. Millpress, Rotterdam, Netherlands.
* Fons (1946) Fons, W. L. (1946). Analysis of fire spread in light forest fuels. _Journal of Agricultural Research_, 72(3):93-121.
* Forestry Canada Fire Danger Group (1992) Forestry Canada Fire Danger Group (1992). Development and structure of the Canadian Forest Fire Behavior Prediction System. Information Report ST-X-3, Forestry Canada Science and Sustainable Development Directorate, Ottawa, ON.
* Frandsen (1971) Frandsen, W. (1971). Fire spread through porous fuels from the conservation of energy. _Combustion and Flame_, 16:9-16.
* Frandsen (1973) Frandsen, W. H. (1973). Using the effective heating number as a weighting factor in Rothermel's fire spread model. General Technical Report INT-10, USDA Forest Service, Intermountain Forest and Range Experiment Station, Ogden, UT.
* Gill et al. (1995) Gill, A., Burrows, N., and Bradstock, R. (1995). Fire modelling and fire weather in an Australian desert. _CALMScience Supplement_, 4:29-34.
* Gill et al. (1981) Gill, A., Groves, R., and Noble, I., editors (1981). _Fire and the Australian biota_. Australian Academy of Science, Canberra.
* Gisborne (1927) Gisborne, H. (1927). The objectives of forest fire-weather research. _Journal of Forestry_, 25(4):452-456.
* Gisborne (1929) Gisborne, H. (1929). The complicated controls of fire behaviour. _Journal of Forestry_, 27(3):311-312.
* Goldammer and Jenkins (1990) Goldammer, J. and Jenkins, M., editors (1990). _Fire in Ecosystem Dynamics_. SPB Academic Publishing bv, The Hague, The Netherlands.
* Grishin (1984) Grishin, A. (1984). Steady-state propagation of the front of a high-level forest fire. _Soviet Physics Doklady_, 29(11):917-919.
* Grishin (1997) Grishin, A. (1997). _Mathematical modeling of forest fires and new methods of fighting them_. Publishing House of Tomsk State University, Tomsk, Russia, english translation edition. Translated from Russian by Marek Czuma, L Chikina and L Smokotina.
* Hawley (1926) Hawley, L. (1926). Theoretical considerations regarding factors which influence forest fires. _Journal of Forestry_, 24(7):7.
* Karplus (1977) Karplus, W. J. (1977). The spectrum of mathematical modeling and systems simulation. _Mathematics and Computers in Simulation_, 19(1):3-10.
* Lawson et al. (1985) Lawson, B., Stocks, B., Alexander, M., and Van Wagner, C. (1985). A system for predicting fire behaviour in Canadian forests. In _Eighth Conference on Fire and Forest Meteorrology_, pages 6-16.
* Lee (1972) Lee, S. (1972). Fire research. _Applied Mechanical Reviews_, 25(3):503-509.
* Linn (1997) Linn, R. R. (1997). A transport model for prediction of wildfire behaviour. PhD Thesis LA-13334-T, Los Alamos National Laboratory. Reissue of PhD Thesis accepted by Department of Mechanical Engineering, New Mexico State University.
* Marsden-Smedley and Catchpole (1995a) Marsden-Smedley, J. and Catchpole, W. (1995a). Fire behaviour modelling in Tasmanian buttongrass moorlands I. Fuel characteristics. _International Journal of Wildland Fire_, 5(4):202-214.
* Marsden-Smedley and Catchpole (1995b) Marsden-Smedley, J. and Catchpole, W. (1995b). Fire behaviour modelling in Tasmanian buttongrass moorlands II. Fire behaviour. _International Journal of Wildland Fire_, 5(4):215-228.
* McAlpine et al. (1991) McAlpine, R., Lawson, B., and Taylor, E. (1991). Fire spread across a slope. In _Proceedings of the 11th Conference on Fire and Forest Meteorology_, pages 218-225, Missoula, MT. Society of American Foresters.
* McAlpine and Wakimoto (1991) McAlpine, R. and Wakimoto, R. (1991). The acceleration of fire from point source to equilibrium spread. _Forest Science_, 37(5):1314-1337.
* McArthur (1965) McArthur, A. (1965). Weather and grassland fire behaviour. Country Fire Authority and Victorian Rural Brigades Association Group Officers Study Period, 13th-15th August 1965.
* McArthur (1966) McArthur, A. (1966). Weather and grassland fire behaviour. Technical Report Leaflet 100, Commonwealth Forestry and Timber Bureau, Canberra.
* McArthur (1967) McArthur, A. (1967). Fire behaviour in eucalypt forests. Technical Report Leaflet 107, Commonwealth Forestry and Timber Bureau, Canberra.
* McCaw (1997) McCaw, L. (1997). _Predicting fire spread in Western Australian mallee-heath shrubland_. PhD thesis, School of Mathematics and Statistics, University of New South Wales, Canberra, ACT, Australia.
* Morvan et al. (2004) Morvan, D., Larini, M., Dupuy, J., Fernandes, P., Miranda, A., Andre, J., Sero-Guillaume, O., Calogine, D., and Cuinas, P. (2004). EUFIRELAB: Behaviour modelling of wildland fires: a state of the art. Deliverable D-03-01, EUFIRELAB. 33 p.
* Nelson (2002) Nelson, Jr., R. (2002). An effective wind speed for models of fire spread. _International Journal of Wildland Fire_, 11(2):153-161.
* Nelson and Adkins (1988) Nelson, Jr., R. M. and Adkins, C. W. (1988). A dimensionless correlation for the spread of wind-driven fires. _Canadian Journal of Forest Research_, 18:391-397.
* Pastor et al. (2003) Pastor, E., Zarate, L., Planas, E., and Arnaldos, J. (2003). Mathematical models and calculation systems for the study of wildland fire behaviour. _Progress in Energy and Combustion Science_, 29(2):139-153.
* Peet (1965) Peet, G. (1965). A fire danger rating and controlled burning guide for the northern jarrah (euc. marginata sm.) forest of western australia. Technical Report Bulletin No 74, Forests Department, Perth, Western Australia.
* Perry (1998) Perry, G. (1998). Current approaches to modelling the spread of wildland fire: a review. _Progress in Physical Geography_, 22(2):222-245.
* Pyne et al. (1996) Pyne, S., Andrews, P., and Laven, R. (1996). _Introduction to Wildland Fire, 2nd Edition_. John Wiley and Sons, New York.
* Pyne (2001) Pyne, S. J. (2001). _Year of the Fires : The Story of the Great Fires of 1910_. Viking, New York.
* Rothermel (1972) Rothermel, R. (1972). A mathematical model for predicting fire spread in wildland fuels. Research Paper INT-115, USDA Forest Service.
* Sauvagnargues-Lesage et al. (2001) Sauvagnargues-Lesage, S., Dusserre, G., Robert, F., Dray, G., and Pearson, D. (2001). Experimental validation in Mediterranean shrub fuels of seven wildland fire rate of spread models. _International Journal of Wildland Fire_, 10(1):15-22.
* Sneeuwjagt and Peet (1985) Sneeuwjagt, R. and Peet, G. (1985). Forest fire behaviour tables for Western Australia (3rd Ed.). Department of Conservation and Land Management, Perth, WA.
* Stocks et al. (1991) Stocks, B., Lawson, B., Alexander, M., Van Wagner, C., McAlpine, R., Lynham, T., and Dube, D. (1991). The Canadian system of forest fire danger rating. In Cheney, N. and Gill, A., editors, _Conference on Bushfire Modelling and Fire Danger Rating Systems_, pages 9-18, Canberra. CSIRO.
* Stocks et al. (2004) Stocks, B. J., Alexander, M. E., and Lanoville, R. A. (2004). Overview of the International Crown Fire Modelling Experiment (ICFME). _Canadian Journal of Forest Research_, 34(8):1543-1547.
* Sullivan (2007) Sullivan, A. (2007). A review of wildland fire spread modelling, 1990-present, 1: Physical and quasi-physical models. arXiv:0706.3074v1[physics.geo-ph], 46 pp.
* Sullivan and Knight (2001) Sullivan, A. and Knight, I. (2001). Estimating error in wind speed measurements for experimental fires. _Canadian Journal of Forest Research_, 31(3):401-409.
* Taylor and Alexander (2006) Taylor, S. and Alexander, M. (2006). Science, technology, and human factors in fire danger rating: the Canadian experience. _International Journal of Wildland Fire_, 15(1):121-135.
* Thomas (1967) Thomas, P. (1967). Some aspects of the growth and spread of fire in the open. _Forestry_, 40:139-164.
* Thomas and Pickard (1961) Thomas, P. and Pickard, R. (1961). Fire spread in forest and heathland materials. Report on forest research, Fire Research Station, Boreham Wood, Hertfordshire.
* Van Wagner (1971) Van Wagner, C. (1971). Two solitudes in forest fire research. Information Report PS-X-29, Canadian Forestry Service, Petawawa Forest Experiment Station, Chalk River, ON.
* Van Wagner (1977a) Van Wagner, C. (1977a). Conditions for the start and spread of crown fire. _Canadian Journal of Forest Research_, 7(1):23-24.
* Van Wagner (1977b) Van Wagner, C. (1977b). Effect of slope on fire spread rate. _Canadian Forestry Service Bi-Monthly Research Notes_, 33:7-8.
* Van Wagner (1985) Van Wagner, C. (1985). Fire spread from a point source. Memo PI-4-20 dated January 14, 1985 to P. Kourtz (unpublished), Canadian Forest Service, Petawawa National Forest Institute, Chalk River, Ontario.
* Van Wagner (1987) Van Wagner, C. (1987). Development and structure of the Canadian Forest Fire Weather Index System. Forestry Technical Report 35, Canadian Forestry Service, Petawawa National Forest Institute, Chalk River, ON.
* Van Wagner (1988) Van Wagner, C. (1988). Effect of slope on fires spreading downhill. _Canadian Journal of Forest Research_, 18:818-820.
* Van Wagner (1998) Van Wagner, C. (1998). Modelling logic and the Canadian Forest Fire Behavior Prediction System. _The Forestry Chronicle_, 74(1):50-52.
* Viegas (2002) Viegas, D. (2002). Fire line rotation as a mechanism for fire spread on a uniform slope. _International Journal of Wildland Fire_, 11(1):11-23.
* Viegas (2006) Viegas, D. (2006). Parametric study of an eruptive fire behaviour model. _International Journal of Wildland Fire_, 15(2):169-177.
* Viegas et al. (1998) Viegas, D., Ribeiro, P., and Maricato, L. (1998). An empirical model for the spread of a fireline inclined in relation to the slope gradient or to wind direction. In _III International Conference on Forest Fire Research. 14th Conference on Fire and Forest Meteorology Luso, Portugal, 16-20 November 1998. Vol 1._, pages 325-342.
* Viegas (1998) Viegas, D. X. (1998). Forest fire propagation. _Philosophical Transactions of the Royal Society of London A_, 356:2907-2928.
* Viegas (2005) Viegas, D. X. (2005). A mathematical model for forest fires blowup. _Combustion Science and Technology_, 177(1):27-51.
* Weber (1991) Weber, R. (1991). Modelling fire spread through fuel beds. _Progress in Energy Combustion Science_, 17(1):67-82.
* Weise and Biging (1997) Weise, D. R. and Biging, G. S. (1997). A qualitative comparison of fire spread models incorporating wind and slope effects. _Forest Science_, 43(2):170-180.
* Williams (1969) Williams, F. (1969). Scaling mass fires. _Fire Research Abstracts and Reviews_, 11(1):1-23.
* Williams (1982) Williams, F. (1982). Urban and wildland fire phenomenology. _Progress in Energy Combustion Science_, 8:317-354.
* Wolff et al. (1991) Wolff, M., Carrier, G., and Fendell, F. (1991). Wind-aided firespread across arrays of discrete fuel elements. II. Experiment. _Combustion Science and Technology_, 77:261-289.
\\begin{table}
\begin{tabular}{c c c c c c c c} \hline Model & Author & Year & Country & Field/Lab & Fuel type & No. fires & Size (w \(\times\)l) \\ & & & & & & & (m) \\ \hline CFS-accel & McAlpine & 1991 & Canada & Lab & needles/Excel. & 29 & \(0.915\times 6.15\) \\ CALM-Spin & Burrows & 1991 & Aust. & Field & Spinifex & 41 & \(200\times 200\) \\ CFBP & FCFDG & 1992 & Canada & Field & Forest & 493 & \(10\)-\(100\times 10\)-\(100\) \\ Button & Marsden-Smedley & 1995 & Aust. & Field & Buttongrass & 64 & \(50\)-\(100\times 50\)-\(100\) \\ CALM Mallee & McCaw & 1997 & Aust. & Field & Mallee/Heath & 18 & \(200\times 200\) \\ CSIRO Grass & Cheney & 1998 & Aust. & Field & Grass & 121 & \(100\)-\(200\times 100\)-\(300\) \\ Heath & Catchpole & 1998 & Aust. & Field & heath/shrub & 133 & \(100\) \\ PortShrub & Fernandes & 2001 & Portugal & Field & Heath/shrub & 29 & \(10\) \\ CALM Jarrah I & Burrows & 1999 & Aust. & Lab & Litter & 144 & \(2.0\times 4.0\) \\ CALM Jarrah II & Burrows & 1999 & Aust. & Field & Forest & 56 & \(100\) \\ PortPinas & Fernandes & 2002 & Portugal & Field & Forest & 94 & \(10\)-\(15\) \\ Gorse & Baeza & 2002 & Spain & Field & gorse & 9 & \(33\) \\ Maquis & Bilgili & 2003 & Turkey & Field & maquis & 25 & \(20\) \\ CSIRO Forest & Gould & 2006 & Aust. & Field & Forest & 99 & \(200\times 200\) \\ \hline \end{tabular}
\\end{table}
Table 1: Summary of empirical models discussed in this paper
\\begin{table}
\\begin{tabular}{c c c c c c c c} \\hline Model & Author & Year & Country & Field/Lab & Fuel type & No. fires & Size (w \\(\\times\\)l) \\\\ & & & & & & & (m) \\\\ \\hline TRW & Wolff & 1991 & USA & Lab & match splints &? & \\(1.1\\times 7\\) \\\\ NBRU & Beer & 1993 & Aust. & Lab & match splints & 18 & \\(0.4\\times 0.16\\) (2D) \\\\ USFS & Catchpole & 1998 & USA & Lab. & Pond./Excel & 357 & \\(1.0\\times 8.0\\) \\\\ Coimbra & Viegas & 2002 & Spain & Lab & Pond. needles & 23 & \\(3.0\\times 3.0\\) \\\\ Nelson & Nelson & 2002 & USA & Lab. & Birch sticks & 65 &? \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: Summary of quasi-empirical models discussed in this paper
\\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Model & Field/Lab & Fuel type & FMC Fn & FMC & Wind Fn & Wind Range & ROS Range \\ & & & & Range (\%) & & (m s\({}^{-1}\)) & (m s\({}^{-1}\)) \\ \hline \multicolumn{6}{l}{_Empirical_} \\ CFS-accel & Lab. & Pond./Excel & - & - & - & 0-2.22* &?.? \\ CALM-Spinifex & Field & Spinifex & \(-82.08M\) & 12-31 & \(U^{2}\) & 1.1-10 & 0-1.5 \\ CFBP & Field & Forest & \(\mathrm{e}^{-0.1386M}(1+M^{5.31})\) &? & \(\mathrm{e}^{0.05930U}\) &? &? \\ PWS-Ts & Field & Buttongrass & \(\mathrm{e}^{-0.0243M}\) & 8.2-96 & \(U^{1.312}\) & 0.2-10 &?.? \\ CALM Mallee & Field & Mallee & \(\mathrm{e}^{-0.113M_{e}}\) & 4-32 & \(U^{1.05}\) & 1.5-6.9 & 0.13-6.8 \\ CSIRO Grass & Field & Grass & \(\mathrm{e}^{-0.108M}\) & 2.7-12.1 & \(U^{0.844}\) & 2.9-7.1 & 0.29-2.07 \\ Heath & Field & heath/shrub & NA & NA & \(U^{1.21}\) & 0.11-10.1 & 0.01-1.00 \\ PortShrub & Field & Heath/shrub & \(\mathrm{e}^{-0.007M}\) & 10-40 & \(\mathrm{e}^{0.002U}\) & 0.28-7.5 & 0.01-0.33 \\ CALM Jarrah I & Lab. & Litter & 1 & 3-14 & \(U^{2.22}\) & 0.2-2.1 & 0.02-0.075 \\ CALM Jarrah II & Field & Forest & \(23.192M\) & 2.1-18.6 & \(U^{2.674}\) & 0.72-33.3 & 0.003-0.28 \\ PortPinas & Field & Forest & \(\mathrm{e}^{-0.053M}\) & 8-56 & \(U^{0.868}\) & 0.3-6.4 & 0.004-0.231 \\ Gorse & Field & gorse & -0.0004M & 22-85 & NA & \(<1.4\) & 0.004-0.039 \\ Maquis & Field & maquis & NA & 15.3-27.7 & 0.495U & 0.02-0.25 & 0.01-0.15 \\ \multicolumn{6}{l}{_Quasi-empirical_} \\ TRW & Lab. & match splints & NA & NA & \(U^{0.5}\) & -4.7 & 0-0.007 \\ NBRU & Lab. & match splints & NA & NA & \(U^{3}\) & 0.9 & 0.004-0.38 \\ USFS & Lab. & Pond./Excel & \(\mathrm{e}^{-0.007M}\) & 2-33 & \(U^{0.91}\) & 0-3.1 & 0-0.23 \\ Coimbra & Lab. & Pond. needles & NA & 10-15 & - &? &? \\ Nelson & Lab./Field & Birch sticks & NA & NA & \(U^{1.51}\) & 0.0-3.66 & \(<0.271\) \\ \hline \end{tabular}
\\end{table}
Table 3: Summary of the fuel moisture content and wind speed functions of the empirical and quasi-empirical models discussed in this paper

Figure 1: Graph of functional relationships between wind speed and rate of forward spread used in various empirical and quasi-empirical fire spread models. The relationship is only indicative of effect on ROS as the full model in each case may also include effects of other variables, such as fuel moisture content, as well as increases or decreases in wind speed due to measurement at different heights.
Figure 2: Graph of functional relationships between fuel moisture content and a rate of forward spread factor used in various empirical and quasi-empirical fire spread models. A number of models have been normalised in order to present them in conjunction with other models (i.e. norm.). The ROS factor shows the effect fuel moisture content has on the final rate of spread value in each model.

Abstract: In recent years, advances in computational power and spatial data analysis (GIS, remote sensing, etc.) have led to an increase in attempts to model the spread and behaviour of wildland fires across the landscape. This series of review papers endeavours to critically and comprehensively review all types of surface fire spread models developed since 1990. This paper reviews models of an empirical or quasi-empirical nature. These models are based solely on the statistical analysis of experimentally obtained data, with or without some physical framework for the basis of the relations. Other papers in the series review models of a physical or quasi-physical nature, and mathematical analogues and simulation models. The main relations of empirical models are those of wind speed and fuel moisture content with rate of forward spread. Comparisons are made of the different functional relationships selected by various authors for these variables.
Ensis1 Bushfire Research 2
Footnote 1: A CSIRO/Scion Joint Venture
PO Box E4008, Kingston, ACT 2604, Australia
email: [email protected] or [email protected]
phone: +61 2 6125 1693, fax: +61 2 6125 4676
version 3.0
arxiv-format/0706_4130v1.md | # A review of wildland fire spread modelling,
1990-present
3: Mathematical analogues and simulation models
A.L. Sullivan
## 1 Introduction
### History
The ultimate aim of any prediction system is to enable an end user to carry out useful predictions. A useful prediction is one that helps the user achieve a particular aim. In the field of wildland fire behaviour, that aim is primarily to stop the spread of the fire or to at least reduce its impact on life and property. The earliest efforts at wildland fire behaviour prediction concentrated on predicting the likely danger posed by a particular fire or set of conditions prior to the outbreak of a fire. These fire danger systems were used to set the level of preparedness of suppression resources or to aid in the identification of the onset of bad fire weather for the purpose of calling total bans on intentionally lit fires.
In addition to a subjective index of fire danger, many of the early fire danger systems also provided a prediction of the likely spread of a fire, expressed as the rate of forward spread of the fire, the rate of perimeter increase or the rate of area increase. In many cases, these predictions were used to plot the likely spread of the fire on a map, thereby putting the prediction in context with geographic features or resource locations, and constituted the first form of fire spread simulation.
Because much of the development of the early wildland fire behaviour models was carried out by those organisations intended to use the systems, the level of sophistication of the model tended to match the level of sophistication of the technology used to implement it. Thus, the early fire spread models provided only a single dimension prediction (generally the forward rate of spread of the headfire) which could be easily plotted on a map and extrapolated over time. While modern wildland fire spread modelling has expanded to include physical approaches (Sullivan, 2007a), all modern operational fire spread models have continued this empirical approach in the form of one-dimensional spread models (Sullivan, 2007b). Much of the development of technology for implementing the models in a simulation environment has concentrated on methods for converting the one-dimensional linear model of fire spread to that of two-dimensional planar models of fire spread.
In parallel with approaches to implement existing empirical models of fire spread have been efforts to approach the simulation of fire spread across the landscape from a holistic perspective. This has resulted in the use of methods other than those directly related to the observation, measurement and modelling of fire behaviour. These methods are mathematical in nature and provide an analogue of fire behaviour. Many of these approaches have also paralleled the development of the computer as a computational device to undertake the calculations required to implement the mathematical concepts.
An increase in the capabilities of remote sensing, geographical information systems and computing power during the 1990s resulted in a revival in the interest of fire behaviour modelling as applied to the prediction of spread across the landscape.
### 1.2 Background
This series of review papers endeavours to comprehensively and critically review the extensive range of modelling work that has been conducted in recent years. The range of methods that have been undertaken over the years represents a continuous spectrum of possible modelling (Karplus, 1977), ranging from the purely physical (those that are based on fundamental understanding of the physics and chemistry involved in the behaviour of a wildland fire) through to the purely empirical (those that have been based on phenomenological description or statistical regression of fire behaviour). In between is a continuous meld of approaches from one end of the spectrum or the other. Weber (1991), in his comprehensive review of physical wildland fire modelling, proposed a system by which models were described as physical, empirical or statistical, depending on whether they account for different modes of heat transfer, make no distinction between different heat transfer modes, or involve no physics at all. Pastor et al. (2003) proposed descriptions of theoretical, empirical and semi-empirical, again depending on whether the model was based on purely physical understanding, of a statistical nature with no physical understanding, or a combination of both. Grishin (1997) divided models into two classes, deterministic or stochastic-statistical. However, these schemes are rather limited given the combination of possible approaches and, given that describing a model as semi-empirical or semi-physical is a 'glass half-full or half-empty' subjective issue, a more comprehensive and complete convention was required.
Thus, this review series is divided into three broad categories: Physical and quasi-physical models; Empirical and quasi-empirical models; and Simulation and mathematical analogue models. In this context, a physical model is one that attempts to represent both the physics and chemistry of fire spread; a quasi-physical model attempts to represent only the physics. An empirical model is one that contains no physical basis at all (generally only statistical in nature); a quasi-empirical model is one that uses some form of physical framework upon which to base the statistical modelling chosen. Empirical models are further subdivided into field-based and laboratory-based. Simulation models are those that implement the preceding types of models in a simulation rather than modelling context. Mathematical analogue models are those that utilise a mathematical precept rather than a physical one for the modelling of the spread of wildland fire.
Since 1990, there has been rapid development in the field of spatial data analysis, e.g. geographic information systems and remote sensing. As a result, I have limited this review to works published since 1990. However, as much of the work that will be discussed derives or continues from work carried out prior to 1990, such work will be included much less comprehensively in order to provide context.
### Previous reviews
Many of the reviews that have been published in recent years have been for audiences other than wildland fire researchers and conducted by people without an established background in the field. Indeed, many of the reviews read like purchase notes by people shopping around for the best fire spread model to implement in their part of the world for their particular purpose. Recent reviews (e.g. Perry (1998); Pastor et al. (2003); etc), while endeavouring to be comprehensive, have offered only superficial and cursory inspections of the models presented. Morvan et al. (2004) takes a different line by analysing a much broader spectrum of models in some detail and concludes that no single approach is going to be suitable for all uses.
While the recent reviews provide an overview of the models and approaches that have been undertaken around the world, mention must be made of significant reviews published much earlier that discussed the processes in wildland fire propagation themselves. Foremost is the work of Williams (1982), which comprehensively covers the phenomenology of both wildland and urban fire, the physics and chemistry of combustion, and is recommended reading for the beginner. The earlier work of Emmons (1963, 1966) and Lee (1972) provides a sound background on the advances made during the post-war boom era. Grishin (1997) provides an extensive review of the work conducted in Russia in the 1970s, 80s and 90s.
The first paper in this series discussed those models based upon the fundamental principles of the physics and chemistry of wildland fire behaviour. The second paper in the series discussed those models based directly upon only statistical analysis of fire behaviour observations or models that utilise some form of physical framework upon which the statistical analysis of observations have been based. Particular distinction was made between observations of the behaviour of fires in the strictly controlled and artificial conditions of the laboratory and those observed in the field under more naturally occurring conditions.
This paper, the final in the series, focuses upon models concerned only with the simulation of fire spread over the landscape and models that utilise mathematical conceits analogous to fire spread but which have no real-world connection to fire. The former generally utilise a pre-existing fire spread model (which can be physical, quasi-physical, quasi-empirical or empirical) and implements it in such a way as to simulate the spread of fire across a landscape. As such, it is generally based upon a geographic information system (GIS) of some description to represent the landscape and uses a propagation algorithm to spread the fire perimeter across it. The latter models are for the most part based upon accepted mathematical functions or concepts that have been applied to wildland fire spread but are not derived from any understanding of wildland fire behaviour. Rather, these models utilise apparent similarities between wildland fire behaviour and the behaviour of these concepts within certain limited contexts. Because of this, these mathematical concepts could equally be applied to other fields of endeavour and, for the most part have been, to greater or lesser success.
Unlike the preceding entries in this series, this paper is segmented by the approaches taken by the various authors, not by the authors or their organisations, given the broad range of authors that in some instances have taken similar approaches.
## 2 Fire Spread Simulations
The ultimate aim of any fire spread simulation development is to produce a product that is practical, easy to implement and provides timely information on the progress of fire spread for wildland fire authorities. With the advent of cheap personal computing and the increased use of geographic information systems, the late 1980s and early 1990s saw a flourishing of methods to predict the spread of fires across the landscape (Beer, 1990a). As the generally accepted methods of predicting the behaviour of wildland fires at that time were (and still are) one-dimensional models derived from empirical studies (Sullivan, 2007a), it was necessary to develop a method of converting the single dimension forward spread model into one that could spread the entire perimeter in two dimensions across a landscape. This involves two distinct processes: firstly, representing the fire in a manner suitable for simulation, and secondly, propagating that perimeter in a manner suitable for the perimeter's representation.
Two approaches for the representation of the fire have been implemented in a number of software packages. The first treats the fire as a group of mainly contiguous independent cells that grows in number, described in the literature as a raster implementation. The second treats the fire perimeter as a closed curve of linked points, described in the literature as a vector implementation.
The propagation of the fire is then carried out using some form of expansion algorithm. There are two main methods used. The first expands the perimeter on a direct-contact or near-neighbour proximity spread basis. The second is based upon Huygens' wavelet principle, in which each point on the perimeter becomes the source of the next interval of fire spread. While the method of propagation and the method of fire representation are often tied (for example, Huygens' wavelet principle is most commonly used in conjunction with a vector representation of the fire perimeter), there is no reason why this should be so, and methods of representation and propagation can be mixed.
### Huygens' wavelet principle
Huygens' wavelet principle, originally formulated for the propagation of light waves, was first applied in the context of fire perimeter propagation by Anderson et al. (1982). In this case, each point on a fire perimeter is considered a theoretical source of a new fire, the characteristics of which are based upon the given fire spread model and the prevailing conditions at the location of the origin of the new fire. The new fires around the perimeter are assumed to ignite simultaneously, to not interact and to spread for a given time, \(\Delta t\). During this period, each new fire attains a certain size and shape, and the outer surface of all the individual fires becomes the new fire perimeter for that time.
Anderson et al. (1982) used an ellipse to define the shape of the new fires with the long axis aligned in the direction of the wind. Ellipse shapes have been used to describe fire spread in a number of fuels (Peet, 1965; McArthur, 1966; Van Wagner, 1969) and, although many alternative and more complex shapes have been proposed (e.g. double ellipse, lemniscate and tear drop; Richards, 1995), the ellipse shape has been found to adequately describe the propagation of wildland fires allowed to burn unhindered for considerable time (Anderson et al., 1982; Alexander, 1985). The geometry of the ellipse template is determined by the rate of forward spread as predicted by the chosen fire spread model and a suitable length-to-breadth ratio (L:B) to give the dimensions of the ellipse (Fig. 1). McArthur (1966) proposed ratios for fires burning in grass fuels in winds up to \(\simeq\) 50 km h\({}^{-1}\). Alexander (1985) did the same for fires in conifer forests also up to a wind speed of \(\simeq\) 50 km h\({}^{-1}\).
Figure 2 illustrates the application of Huygens' principle to the propagation of a fire perimeter utilising the ellipse template. A section of perimeter is defined by a series of linked nodes that act as the sources of a series of new fires. The geometry of the ellipse used for each new fire is determined by the prevailing conditions, the chosen fire spread model and length-breadth ratio model, and the given period of propagation \(\Delta t\). In the simple case of homogeneous conditions, all ellipses are the same and the propagation is uniform in the direction of the wind. The boundary of the new ellipses forms the new perimeter at \(t+\Delta t\).
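To make the template-ellipse construction concrete, the following minimal sketch implements a single Huygens propagation step under assumptions that go beyond the text: homogeneous fuel and wind, a fixed length-to-breadth ratio, and a convex fire front, so that the outer envelope of the wavelets can be taken as a convex hull. All function and parameter names are illustrative; operational codes such as those built on Richards (1990) or Knight and Coleman (1993) solve the envelope analytically and must also handle non-convex fronts, rotations and enclosures.

```python
import numpy as np
from scipy.spatial import ConvexHull

def template_ellipse(head_ros, lb, wind_dir, dt, n=64):
    """Wavelet offsets for one step: an ellipse with the ignition point
    at its rear focus whose head advances head_ros * dt downwind."""
    e = np.sqrt(1.0 - 1.0 / lb**2)       # eccentricity from the L:B ratio
    a = head_ros * dt / (1.0 + e)        # semi-major axis (a + c = R dt)
    b, c = a / lb, a * e                 # semi-minor axis, focal offset
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = c + a * np.cos(t), b * np.sin(t)        # source at the origin
    cw, sw = np.cos(wind_dir), np.sin(wind_dir)    # rotate into the wind
    return np.stack([cw * x - sw * y, sw * x + cw * y], axis=1)

def huygens_step(front, head_ros, lb, wind_dir, dt):
    """Spawn a wavelet at every perimeter vertex; take the outer
    envelope of all wavelet points as the fire front at t + dt."""
    wavelet = template_ellipse(head_ros, lb, wind_dir, dt)
    cloud = (front[:, None, :] + wavelet[None, :, :]).reshape(-1, 2)
    return cloud[ConvexHull(cloud).vertices]

# Grow a point ignition for ten 60-second steps in a steady wind.
ang = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
front = 0.1 * np.stack([np.cos(ang), np.sin(ang)], axis=1)
for _ in range(10):
    front = huygens_step(front, head_ros=0.5, lb=3.0, wind_dir=0.0, dt=60.0)
```

The geometry does the work here: the head of each wavelet advances at the full forward rate of spread, the rear focus reproduces the slower backing spread, and the flank spread falls out of the L:B ratio, which is why a single forward rate of spread suffices to propagate the whole perimeter.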
Richards (1990, 1995) produced analytical solutions for this modelling approach, for a variety of template shapes, in the form of a pair of differential equations. A computer algorithm that utilises these equations (Richards and Bryce, 1996) was developed and subsequently incorporated into fire simulation packages, including FARSITE (USA) (Finney, 1994, 1998) and Prometheus (Canada) (CWFGM Steering Committee, 2004). An alternative method that utilises the elliptical geometry only is that of Knight and Coleman (1993), which is used in SiroFire (Australia) (Beer, 1990a; Coleman and Sullivan, 1996). This method provides solutions to the two main problems with the closed curve expansion approach using Huygens' wavelet principle, namely rotations in the perimeter, in which a section of perimeter turns itself inside-out, and enclosures, in which unburnt fuel is enclosed by two sections of the same perimeter (termed a 'bear hug'). A similar method was proposed by Wallace (1993).
FARSITE is widely used in the US by federal and state land management agencies for predicting fire spread across the landscape. It is based upon the BEHAVE (Andrews, 1986) fire behaviour prediction systems, which itself is based upon the spread model of Rothermel (1972). It includes models for fuel moisture content (Nelson, 2000), spotting (Albini, 1979), post-front fuel consumption (Albini and Reinhardt, 1995; Albini et al., 1995), crown-fire initiation (Van Wagner, 1977) and crown-fire spread (Rothermel, 1991). It is PC-based in MS-Windows and utilises the ARCView GIS system for describing the spatial fuel data and topography.
SiroFire was developed for operational use in Australia and utilises McArthur's fire spread models for grass (McArthur, 1966) and forest (McArthur, 1967), as well as the recommended replacement grassland model (Cheney et al., 1998) and versions of Rothermel's model configured for Australian grass and forest litter fuels. Although it was never used operationally, it did find use as a training tool for volunteer bushfire firefighters. It uses a proprietary geographic format intended to reduce computation time, with data derived from a number of GIS platforms. It was PC-based, using DOS protected mode, although it would run under MS-Windows. It has now been subsumed into a risk management model, Phoenix, being developed by the University of Melbourne.
The Canadian Wildland Fire Growth Model, Prometheus, is based on the Canadian Fire Behaviour Prediction (FBP) System (Forestry Canada Fire Danger Group, 1992) and utilises the wavelet propagation algorithms of Richards (1995); Richards and Bryce (1996) to simulate the spread of wildland fire across landscapes. It was initially developed by the Alberta Sustainable Resource Development, Forest Protection Division for the Canadian Forest Service, but is now a national interagency endeavour across Canada endorsed by the Canadian Interagency Forest Fire Centre. It is Windows-based, utilises maps and geographic data exported from the Esri GIS platform ARC and is intended for use as a realtime operational management tool. As with FARSITE and SiroFire, Prometheus allows the user to enter and edit fuel and meteorological data and carry out simulations of fire spread.
The symmetric nature of the template ellipse in conjunction with the application of Huygens' wavelet principle neatly provides the flank and rear spread of a fire. By relating the flank and rear spread through the ellipse geometry, the single forward rate of spread of the fire is all that is needed to simulate the spread of the entire perimeter. French et al. (1990) found that in homogeneous fuels and weather conditions, the Huygens' wavelet principle with template ellipse shape suitably modelled fire spread, with only small distortion of the fire shape. However, such a method cannot adequately handle heterogeneous conditions and fuels; errors are introduced through changes in the conditions during the period \(\Delta t\), as well as through distortions in the fire perimeter due to artifacts of the Huygens' wavelet method.
Changes in conditions during the propagation period \\(\\Delta t\\) cause the predicted perimeter to over- or under-predict the spread of the perimeter because those changes are not reflected in the predicted perimeter. Reducing \\(\\Delta t\\) can reduce the impact of such changes and a flexible approach to the setting of \\(\\Delta t\\) has been used with great success (Finney, 1998).
Eklund (2001) implemented the method of Knight and Coleman (1993) as a fire propagation engine existing with a geographic database on a distributed network such as the World Wide Web (WWW).
### Raster-based simulation
In a raster-based simulation, the fire is represented by a raster grid of cells whose state is unburnt, burning or burnt. This method is computationally less intensive than that of the closed curve (vector) approach, and is much more suited to heterogeneous fuel and weather conditions. However, because fuel information needs to be stored for each and every cell in the landscape, there is a trade-off between the resolution at which the data is stored and the amount of data that needs to be stored (and thus memory requirements and access times, etc.)3.
Footnote 3: In vector data, fuel is stored as polygons represented by a series of data points representing the vertices of the outline of the fuel and the fuel attributes for the whole polygon. Very large areas can be stored in this fashion but with overhead in processing to determine if a point is inside the polygon.
The method of expanding the fire in this fashion is similar to that of cellular automata, in which the fire propagation is considered to be a simple set of rules defining the interaction of neighbouring cells in a lattice. I will differentiate fire propagation simulations that utilise a pre-existing fire spread model to determine the rate of fire expansion from those that are true cellular automata. The former are described here as raster- or grid-based simulations and the latter are dealt with in the following section on mathematical analogues.
Kourtz and O'Regan (1971) were the first to apply computer techniques to the modelling of fire spread across a landscape. Initially simulating the smouldering spread of small fires (\(<0.02\) ha) using a grid of 50 \(\times\) 50 square cells each of 1 ft\({}^{2}\) in no wind and no slope, this model was extended using a combination of Canadian and US (Rothermel, 1972) fire behaviour models (Kourtz et al., 1977) and the output was in the form of text-based graphical representation of predicted spread. King (1971) developed a model of rate-of-area increase of aerial prescribed burns (intended for use on a hand-held calculator) based on an idealised model of the growth of a single spot fire. Frandsen and Andrews (1979) utilised a hexagonal lattice to represent heterogeneous fuel beds and a least-time-to-ignition heat balance model to simulate fire spread across it. Green (1983) and Green et al. (1983) generalised the approach of Kourtz and O'Regan and investigated the effect of discontinuous, heterogeneous fuel in square lattices on fire shape, utilising both heat balance and flame contact spread models, and found that while fire shapes are less regular than in continuous fuels, the fires tended to become more elliptical in shape as the fire progressed, regardless of the template shape used.
Green et al. (1990) produced a landscape modelling system called IGNITE that utilised the fire spread mechanics of Green (1983). This system is a raster-based fire spread model that uses the fire spread models of McArthur (1966, 1967) as retro-engineered by Noble et al. (1980) and an elliptical ignition template to predict the rate of forward spread in the form of "time to ignition" for each cell around a burning cell. IGNITE very easily deals with heterogeneous fuels and allows the simulation of fire suppression actions through changes in the combustion characteristics of the fuel layers.
Kalabokidis et al. (1991) and Vasconcelos and Guertin (1992) introduced similar methods to spatially resolve Rothermel's spread model in BEHAVE by linking it to raster-based GIS platforms. Kalabokidis et al. (1991) developed a simulation technique that derived a 'friction' layer within the GIS for six base spread rates for which the friction value increased as spread rate decreased. This was combined with six wind speed classes to produce a map of potential fire extent contours and fireline intensity strata across a range of slope and aspect classes. Vasconcelos and Guertin (1992) developed a similar simulation package called FIREMAP that continued the earlier work of Vasconcelos et al. (1990). FIREMAP stored topographic, fuel and weather information as rasterised layers within the GIS. It is assumed that the resolution of the rasters are such that all attributes within each cell are uniform. Fire characteristics such as rate of spread, intensity, direction of maximum spread, flame length, are calculated for each cell and each weather condition to produce a database of output maps of fire behaviour. Simulation is then undertaken by calculating each cell's 'friction' or time taken for a fire front to consume a cell. Ball and Guertin (1992) extended the work of Vasconcelos by improving the method used to implement the cell to cell spread by adjusting the ROS for flank and rear spread based upon BEHAVE's cosine relation with head fire ROS. The authors found the resulting predicted fire shapes to be unnaturally angular and attribute this to the poor relation for flank spread given by BEHAVE, the regular lattice shape of the raster, and the fact that spread angles are limited, concluding that the raster structure cannot properly represent 'the continuous nature of the actual fire'.
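The 'friction' (travel-time) idea common to these raster schemes reduces to a shortest-arrival-time computation. The sketch below is a minimal reading of that idea, not a reimplementation of FIREMAP: each cell's traversal time stands in for the cell size divided by the rate of spread a point model such as Rothermel's would return for that cell, and arrival times are expanded over the eight Moore neighbours with a Dijkstra-style queue. The names and the example fuel grid are invented.

```python
import heapq
import math

def arrival_times(traversal, ignition):
    """Least-time-to-ignition over a raster: traversal[r][c] is the time
    (s) for fire to cross cell (r, c); math.inf marks unburnable cells.
    Diagonal moves cost sqrt(2) times the destination cell's time."""
    rows, cols = len(traversal), len(traversal[0])
    t = [[math.inf] * cols for _ in range(rows)]
    t[ignition[0]][ignition[1]] = 0.0
    heap = [(0.0, ignition[0], ignition[1])]
    while heap:
        t_here, r, c = heapq.heappop(heap)
        if t_here > t[r][c]:
            continue                          # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    cand = t_here + math.hypot(dr, dc) * traversal[rr][cc]
                    if cand < t[rr][cc]:
                        t[rr][cc] = cand
                        heapq.heappush(heap, (cand, rr, cc))
    return t

# Fast fuel on the right, slow fuel on the left, one unburnable cell.
fuel = [[10.0, 10.0, 5.0],
        [10.0, math.inf, 5.0],
        [20.0, 20.0, 5.0]]
print(arrival_times(fuel, ignition=(0, 0)))
```

Contours of equal arrival time are then the successive fire perimeters, which is effectively what the map of potential fire extent contours described above amounts to.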
Karafyllidis and Thanailakis (1997) developed a raster-based simulation, also based on Rothermel (1972), for hypothetical landscapes. The state of each raster cell is the ratio of the area burned of the cell to the total area of the cell. The passage of the fire front is determined by the sum of the states of each cell's neighbours at each time step until the cell is completely burnt. This approach requires, as an input parameter for each cell, the rate of spread of a fire in that cell based on the fuel alone. Berjak and Hearne (2002) improved the model by incorporating the effects of slope and wind on the scalar field of cell rate of spread using the slope effect model of Cheney (1981) and an empirical flame angle/wind speed function. This model was then applied to spatially heterogeneous savanna fuels of South Africa and found to be in good agreement with observed fire spread.
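A minimal version of that fractional-burned-area update might read as follows. The neighbour weighting is an assumption made for the sake of the example; the published rule treats diagonal neighbours separately, and Berjak and Hearne's extension folds wind and slope corrections into the per-cell rates.

```python
import numpy as np

def ca_step(state, ros, dt, cell_size):
    """One synchronous update: 'state' is the burned fraction of each
    cell (0 = intact, 1 = consumed); 'ros' is the fuel-only spread rate
    (m/s) for each cell. A cell gains burned area in proportion to the
    summed states of its von Neumann neighbours, scaled by the distance
    fire travels across it in one step. np.roll wraps the boundaries;
    a real implementation would pad the domain instead."""
    nbrs = (np.roll(state, 1, 0) + np.roll(state, -1, 0) +
            np.roll(state, 1, 1) + np.roll(state, -1, 1))
    return np.clip(state + (ros * dt / cell_size) * nbrs, 0.0, 1.0)

state = np.zeros((50, 50))
state[25, 25] = 1.0                        # point ignition at the centre
ros = np.full((50, 50), 0.05)              # uniform 0.05 m/s fuel layer
for _ in range(100):
    state = ca_step(state, ros, dt=30.0, cell_size=10.0)
```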
FireStation (Lopes et al., 1998, 2002) and PYROCART (Perry et al., 1999) both implement Rothermel's fire spread model in a raster-based GIS platform. FireStation utilises both single and double-ellipse fire shape templates, depending on wind speed, to dictate the spread across cells. The 3-dimensional wind field across the landscape is based on local point observations extrapolated using either a linear conservation of mass approach, or a full 3-dimensional solution of the Navier-Stokes equations. Slope is treated as an 'equivalent' wind. PYROCART utilises the fire shape model of Green et al. (1990) which is a function of wind speed. It was validated against a small wildfire and its predictive accuracy (a measure of performance based on the percentage of cells predicted to be burnt compared to those that were unburnt or not predicted to burn) estimated to be 80%.
Guariso and Baracani (2002) extend the standard 2-dimensional approach to modelling the spread of a surface fire by implementing two levels of raster-based models, one to represent surface fuel and its combustion and another to represent, once a critical threshold value has been reached, the forest canopy and its combustion. They utilise Rothermel's fire spread model with fuel models modified and validated for Mediterranean fuel complexes. To improve its capabilities as an operational tool, fire fighting resources are tracked on screen using the Global Positioning System (GPS). Trunfio (2004) implemented Rothermel's model using a hexagonal cell shape and found that the model did not produce the spurious symmetries found with square-shaped lattices.
### Other propagation methods
There are alternatives to the raster cell or vector ellipse template propagation methods described above, although these are less widespread in their use. Coupled fire-atmosphere models that incorporate a pre-existing fire spread model (such as given by Rothermel or McArthur) are, at their most basic, a form of propagation algorithm. The coupled fire-atmosphere model of Clark et al. (1996a,b, 2004) represents a considerable effort to link a sophisticated 3-dimensional high-resolution, non-hydrostatic mesoscale meteorological model to a fire spread model. In this particular case, the mesoscale meteorological model was originally developed for modelling atmospheric flows over complex terrain, solving the Navier-Stokes and continuity equations and includes terrain following coordinates, variable grid size, two-way interactive nesting, cloud (rain and ice) physics, and solar heating (Coen, 2005). It was originally linked to the empirical model of forest fire spread of McArthur (1967) (Clark et al., 1998) but was later revised to incorporate the spread model of Rothermel instead (Coen and Clark, 2000).
The atmosphere model is coupled to the fire spread model through the sensible (convection and conduction effects) and latent heat fluxes approximated from the fireline intensity (obtained via the ROS) predicted by the model for a given fuel specified in the model. Fuel is modelled on a raster grid of size 30 m (Coen, 2005). Fuel moisture is allowed to vary diurnally following a very simple sinusoid function based around an average daily value with a fixed lag time behind clock time (Coen, 2005). Assumptions are made about the amount of moisture evaporated prior to combustion. Fuel consumption is modelled using the BURNUP algorithm of Albini and Reinhardt (1995). Effects of radiation, convection and turbulent mixing occurring on unresolved scales (i.e. \\(<30\\) m) are 'treated crudely' without any further discussion.
The coarse nature of the rasterised fuel layer meant that a simple cell fire spread propagation technique was too reliant on the cell resolution. A fire perimeter propagation technique that is a unique mix of the raster- and the vector-based techniques was developed. Each cell in the fuel layer is allowed to burn at an independent rate, dependent upon the predicted wind speed at a height of 5 m, the predicted rate of spread and the fuel consumption rate. Four tracers aligned with the coordinate system, each with the appropriate ROS in the appropriate directions (headfire ROS is defined as that parallel to the wind direction) are used to track the spread of fire across a fuel cell. The coordinates of the tracers define a quadrilateral that occupies a fuel cell which is allowed to spread across the fuel cell. The tracers move across a fuel cell until they reach a cell boundary. If the adjacent cell is unburnt, it is ignited and a fresh set of tracers commenced for the boundaries of that cell. Meanwhile, once the tracers reach a cell boundary, they can then only move in the orthogonal direction. In this way, the quadrilateral can progress across the cell. The boundaries of all the quadrilaterals then make up the fire perimeter. The size of the quadrilateral then allows an estimate of the amount of fuel that has been consumed since the cell ignited. The fireline propagation method allows for internal fire perimeters, although it only allows one fireline per fuel cell.
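The tracer scheme can be caricatured in a few lines. The sketch below is a stylised reading rather than the published algorithm: the wind coupling is omitted, the names are invented, and each burning cell simply carries four axis-aligned tracers that move outward at direction-dependent rates, igniting a neighbour when they reach the cell boundary; the quadrilateral the tracers span estimates the fraction of the cell consumed.

```python
def tracer_step(cells, ros4, dt, cell=30.0):
    """Advance the four tracers of every burning cell. 'cells' maps
    (i, j) -> [d+x, d-x, d+y, d-y], the tracer offsets (m) from the
    ignition point; ros4 gives the spread rate in each direction."""
    half = cell / 2.0
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    ignite = []
    for (i, j), tr in cells.items():
        for k, (di, dj) in enumerate(moves):
            tr[k] = min(tr[k] + ros4[k] * dt, half)   # clamp at boundary
            if tr[k] >= half and (i + di, j + dj) not in cells:
                ignite.append((i + di, j + dj))
    for idx in ignite:
        cells[idx] = [0.0, 0.0, 0.0, 0.0]             # fresh tracer set

def burned_fraction(tr, cell=30.0):
    """Area of the quadrilateral spanned by the four tracers, as a
    fraction of the cell, for estimating fuel consumed since ignition."""
    return 0.5 * (tr[0] + tr[1]) * (tr[2] + tr[3]) / cell**2

cells = {(0, 0): [0.0, 0.0, 0.0, 0.0]}     # single ignited cell
for _ in range(60):                        # headfire fastest in +x
    tracer_step(cells, ros4=(0.5, 0.05, 0.1, 0.1), dt=10.0)
print(len(cells), "cells ignited")
```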
The interaction between the heat output of the burning fuel and the 3-dimensional wind field results in complex wind patterns, which can include horizontal and vertical vortices. Clark et al. (1996a,b); Jenkins et al. (2001) explored these in producing fireline phenomena such as parabolic headfire shape, and convective and dynamic fingering. However, as wind speed increases (\\(>\\) 5 m s\\({}^{-1}\\) at 15 m above ground), Clark et al. (1996b) found that the coupling weakens and the wind flows through the fire.
The real utility of the coupled fire-atmosphere model, however, is the prediction of wind direction around the fire perimeter, used to drive the spread of the fire. This, in effect, replaces Huygens' wavelet approach with a much more physically direct method. However, the use of Rothermel's fire spread model for spread in directions other than in the direction of the prevailing wind is questionable and results in odd deformations in the fire perimeter when terrain or fuel are not uniform (Clark et al., 2004).
Several other workers have taken the same approach as Clark in linking a mesoscale meteorological model to a fire model. Gurer and Georgopoulos (1998) coupled an off-the-shelf mesoscale meteorological model (the Regional Atmospheric Modeling System (RAMS); Pielke et al., 1992) with Rothermel's model of fire spread to predict gas and particulate fallout from forest fires for the purpose of safety and health. The Rothermel model is used to obtain burning area and heat for input into RAMS. Submodels are used for prediction of the emission components (CO\({}_{2}\), CH\({}_{4}\), polycyclic aromatic hydrocarbons, etc.). Simulation of the fire perimeter propagation is not undertaken. Speer et al. (2001) used a numerical weather prediction model to predict the speed and direction of the wind for input into a simple empirical model of fire spread through heathland fuel to predict the rate of forward spread (not to simulate the spread) of two wildfires in Sydney in 1994.
Plourde et al. (1997) extend the application of Huygens' wavelet propagation principle as utilised by Knight and Coleman (1993). However, rather than relying on the template ellipse as the template for the next propagation interval, the authors utilise an innovative closed contour method based on a complex Fourier series function. Rather than considering the perimeter as a series of linked points that are individually propagated, the perimeter is considered as a closed continuous curve that is propagated in its entirety. A parametric description of the perimeter is derived in which the x and y coordinates of each point are encoded as a real and imaginary pair. However, as with Knight and Coleman (1993), a sufficiently fine time step is critical to precision, and anomalies such as rotations and overlaps must be identified and removed. Plourde et al.'s propagation model appears to handle heterogeneous fuel, but the timestep is given as 0.05 s, resulting in very fine scale spread but with the trade-off of heavy computational requirements.
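The flavour of that representation, though not Plourde et al.'s propagation scheme, can be illustrated with discrete Fourier descriptors: the perimeter vertices are encoded as complex numbers and the front becomes a continuous closed curve carried by a handful of harmonics. The helper names and the noisy test perimeter below are invented for the example.

```python
import numpy as np

def fourier_descriptors(x, y, keep=8):
    """Encode a closed perimeter as z_k = x_k + i*y_k and truncate the
    spectrum, keeping only the lowest 'keep' harmonics (positive and
    negative) so the front is a smooth continuous closed curve."""
    coeff = np.fft.fft(np.asarray(x) + 1j * np.asarray(y))
    coeff[keep + 1 : len(coeff) - keep] = 0.0
    return coeff

def evaluate(coeff, n=400):
    """Resample the curve at n points by zero-padded inverse DFT
    (trigonometric interpolation of the kept harmonics)."""
    n0, half = len(coeff), len(coeff) // 2
    padded = np.zeros(n, dtype=complex)
    padded[:half] = coeff[:half]                # positive frequencies
    padded[n - (n0 - half):] = coeff[half:]     # negative frequencies
    z = np.fft.ifft(padded) * (n / n0)          # rescale for length n
    return z.real, z.imag

# A noisy, roughly elliptical perimeter of 64 vertices:
t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
x = 3.0 * np.cos(t) + 0.1 * np.random.randn(64)
y = np.sin(t) + 0.1 * np.random.randn(64)
xs, ys = evaluate(fourier_descriptors(x, y, keep=6))
```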
Viegas (2002) proposed an unorthodox propagation mechanism in which the fire perimeter is assumed to be a continuous entity that will endeavour to rotate to align itself with a slope, to an angle of approximately 60 degrees across the slope. Based on observations of laboratory and field experiments of line fires lit at angles to the slope, Viegas constructs a fire perimeter propagation algorithm in which he redefines the flank spread of a fire burning in a cross-slope wind as a rotation of the front.
In perhaps a sign of the times, Seron et al. (2005) take the physical model of Asensio and Ferragut (2002) and simulate it using the techniques and approaches developed for computer-generated imagery (CGI). A strictly non-real-time method is used to solve the fire spread model, utilising satellite imagery and the terrain (using flow analysis techniques), interpolated using kriging, to determine fuel and non-burnables such as water bodies and rivers, etc. All attributes are non-dimensionalised. Wind is calculated as a vector for each cell, derived from the convection form of the physical fire model (Asensio et al., 2005). This vector is then added to the terrain gradient vector. A grid of 256 \(\times\) 256 cells is simulated, resulting in 131,589 equations that need to be solved for each time step, which is 0.0001 s.
## 3 Mathematical Analogues
In the broader non-wildland-fire-specific literature there is a considerable number of works published involving wildland fire spread that are not based on wildland fire behaviour. For the most part, these works implement mathematical functions that appear analogous to the spread of fires and thus are described as wildland fire spread models, while in some cases wildland fire spread is used simply as a metaphor for some behaviour. These mathematical functions include cellular automata and self-organised criticality, reaction-diffusion equations, percolation, neural networks and others. This section briefly discusses some of the fire spread-related applications of these functions.
### Cellular automata and self-organised criticality
Cellular automata (CA) are a formal mathematical idealisation of physical systems in which space and time are discretised, and physical quantities take on a finite set of values (Wolfram, 1983). CA were first introduced by Ulam and von Neumann in the late 1940s and have been known by a range of names, including cellular spaces, finite state machines, tessellation automata, homogeneous structures, and cellular structures. CA can be described as discrete space/time logical universes, each obeying their own local physics (Langton, 1990). Each cell of space is in one of a finite number of states at any one time. Generally, CAs are implemented as a lattice (i.e. in 2D) but can be of any dimension. A CA is specified in terms of the rules that define how the state changes from one time step to the next; the rules are generally specified as functions of the states of neighbours of each cell in the previous time step. Neighbours can be defined as those cells that share boundaries, vertices, or are even further removed4. The key attribute of a CA is that the rules that govern the state of any one cell are simple and based primarily upon the states of its neighbours, which can result in surprisingly complex behaviour, even with a limited number of possible states (Gardner, 1970), and can be capable of Universal Computation (Wolfram, 1986).
Footnote 4: In a 2D lattice, the cells sharing boundaries form the von Neumann neighbourhood (4 neighbours), cells sharing boundaries and vertices form the Moore neighbourhood (8 neighbours) (Albinet et al., 1986)
Due to their inherently spatial structure and the interrelations between neighbouring cells, CA have been used to model a number of natural phenomena, e.g. lattice gases and crystal growth (with Ising models) (Enting, 1977) and ecological systems (Hogeweg, 1988), and have also been applied to the field of wildland fire behaviour. Albinet et al. (1986) first introduced the concept of fire propagation in the context of CA. It is a simple idealised isotropic model of fire spread based on epidemic spread, in which cells (or 'trees') receive heat from burning neighbours until ignition occurs and then proceed to contribute heat to their unburnt neighbours. They showed that the successful spread of the fire front was dependent upon a critical density of distribution of burnable cells (i.e. 'trees') and unburnable (or empty) cells, and that this critical density reduced with the increasing number of neighbours allowed to 'heat' a cell. They also found that the fire front structure was fractal with a dimension \(\simeq 1.8\). The isotropic condition in which spread is purely a result of symmetrical neighbour interactions (i.e. wind or slope are not considered) is classified as percolation (discussed below). von Niessen and Blumen (1988) extended the model of Albinet et al. to include anisotropic conditions such as wind and slope, in which ignition of crown and surface fuel layers was stochastic.
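A few lines of Python reproduce the heart of the Albinet et al. heat-accumulation lattice, under stated assumptions that simplify the original (one-step burning, a heat threshold of two units, and periodic boundaries via np.roll). Whether the front crosses the lattice depends sharply on the tree density, which is the critical-density (percolation) behaviour described above.

```python
import numpy as np

rng = np.random.default_rng(1)
density = 0.6                                # occupied-site probability
trees = rng.random((100, 100)) < density
heat = np.zeros(trees.shape)
burning = np.zeros_like(trees)
burning[0] = trees[0]                        # ignite the first row
trees[0] = False

steps = 0
while burning.any():
    # Each burning cell deposits one unit of heat on its 8 neighbours.
    deposit = np.zeros_like(heat)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                deposit += np.roll(np.roll(burning, dr, 0), dc, 1)
    heat += deposit
    burning = trees & (heat >= 2.0)          # ignition at a heat threshold
    trees &= ~burning                        # ignited trees leave the fuel
    steps += 1
print(steps, "steps before burnout")
```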
The idealised 'forest fire' model CA (more of a metaphor than a model of real fire), along with the sandpile (avalanche) and earthquake models, was used as a primary example of self-organised criticality (Bak et al., 1987; Bak, 1996), in which it is proposed that dynamical systems with spatial degrees of freedom evolve into a critical, self-organised state that is scale invariant and robust to perturbation. In the case of the forest fire model, the isotropic model of Albinet et al. (1986) was modified and investigated by numerous workers to explore this phenomenon, e.g. Bak et al. (1990); Chen et al. (1990); Drossel and Schwabl (1992, 1993); Clar et al. (1994); Drossel (1996), such that, in its simplest form, trees grow at a small fixed rate, \\(p\\), on empty sites. At a much smaller rate \\(f\\) (\\(f\\ll p\\)), sites are struck by lightning (or matches are dropped), which starts a fire if the struck site is occupied. The fire spreads to every occupied site connected to the burning site, and so on. Burnt sites are then considered empty and can be re-colonised by new growing trees. Consumption of an occupied site is immediate, thus the only relevant parameter is the ratio \\(\\theta=p/f\\), which sets the scale for the average fire size (i.e. the number of trees burnt after one lightning strike) (Grassberger, 2002). Self-organised criticality arises from the interplay between the rate of tree growth and the size of fire that results when an ignition coincides with an occupied site, which in turn is a function of the rate of lightning strike. At large \\(\\theta\\), fires are less frequent but larger; as \\(\\theta\\) decreases, fires occur more often but are smaller. This result describes the principle underlying the philosophy of hazard reduction burning. The frequency distribution of fire size against number of fires follows a power law (Malamud et al., 1998; Malamud and Turcotte, 1999), similar to the frequency distributions found for sandpiles and earthquakes.
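A minimal sketch of one sweep of these rules is given below; the values of \\(p\\) and \\(f\\) are illustrative only. Recording the number of trees burnt per strike over many sweeps reproduces the power-law fire-size statistics discussed above, with \\(\\theta=p/f\\) setting the mean fire size.

```python
import numpy as np

def forest_fire_sweep(forest, p=0.05, f=0.0005, rng=None):
    """One sweep of the Drossel-Schwabl forest-fire model.

    forest: 2D boolean array, True = tree, False = empty site.
    Trees grow on empty sites with probability p; sites are struck by
    lightning with probability f, and a strike on a tree instantly
    burns the whole 4-connected cluster of trees, leaving empty sites.
    """
    if rng is None:
        rng = np.random.default_rng()
    ny, nx = forest.shape
    # Growth: empty sites acquire a tree with probability p.
    forest = forest | (~forest & (rng.random(forest.shape) < p))
    # Lightning: burn the cluster connected to each struck tree.
    for y0, x0 in np.argwhere(forest & (rng.random(forest.shape) < f)):
        stack = [(y0, x0)]
        while stack:
            y, x = stack.pop()
            if forest[y, x]:
                forest[y, x] = False  # tree burns; site becomes empty
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    stack.append(((y + dy) % ny, (x + dx) % nx))
    return forest
```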
More recent work (Schenk et al., 2000; Grassberger, 2002; Pruessner and Jensen, 2002; Schenk et al., 2002) has found that the original forest fire model does not truly represent critical behaviour because it is not scale invariant; in larger lattices (\\(\\simeq 65000\\)), corrections to the scaling laws are required to describe the behaviour (Pastor-Satorras and Vespignani, 2000). Reed and McKelvey (2002) compared the size distributions of actual burned areas in six regions of North America and found that a simple power-law distribution was 'too simple to describe the distribution over the full range'. Rhodes and Anderson (1998) suggested using the forest fire model as a model for the dynamics of disease epidemics.
Self-organised criticality, however, is generally only applicable to the effect of many fires over large landscapes over long periods of time, and provides no information about the behaviour of individual bushfires. There are CA that have used actual site state and neighbourhood rules for modelling fire spread, but these have been based on an overly simple understanding of bushfire behaviour and their performance is questionable. Li and Magill (2000, 2003) attempted to model the spread of individual bushfires across a landscape represented as a 2D CA lattice in which fuel is discrete and discontinuous. While they supposedly implemented the Rothermel wind speed/ROS function, their model shares more in common with the Drossel-Schwabl model than with any raster-based fire spread model. Li and Magill determined critical 'tree' densities for fire spread across hypothetical landscapes with both slope and wind effects in order to study the effect of varying environmental conditions on fire spread. However, ignition of a cell or 'tree' is probabilistic, based on 'heat conditions' (the accumulated heat load from burning neighbours), and 'tree' flammability is an arbitrary figure used to differentiate between dead dry trees and green 'fire-resistant' trees. Essentially this is the same as the tree immunity proposed by Drossel and Schwabl (1993). A critical density of around 41% was found for lattices up to 512 \\(\\times\\) 512 cells. The model was not compared to actual fire behaviour.
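The heat-accumulation idea can be sketched as follows; the functional form of the ignition probability and all parameter values here are assumptions of this illustration, not Li and Magill's published formulation.

```python
import numpy as np

def heat_ignition_step(state, heat, flammability, rng, gain=1.0, k=0.5):
    """Probabilistic ignition driven by accumulated heat load.

    state: 0 unburnt, 1 burning, 2 burnt; heat: float array of the
    accumulated load. Burning cells add `gain` units of heat to each
    Moore neighbour; an unburnt cell then ignites with probability
    1 - exp(-k * flammability * heat), so dead dry cells (with a high
    flammability figure) ignite more readily than green ones.
    """
    burning = (state == 1)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                heat += gain * np.roll(np.roll(burning, dy, 0), dx, 1)
    p_ignite = 1.0 - np.exp(-k * flammability * heat)
    ignite = (state == 0) & (rng.random(state.shape) < p_ignite)
    state = state.copy()
    state[burning] = 2   # burning cells burn out
    state[ignite] = 1    # newly ignited cells
    return state, heat
```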
Duarte (1997) developed a CA of fire spread that utilised a probabilistic cell-ignition model built around a moisture-content-driven extinction function based on Rothermel (1972), using an idealised parameter derived from fuel characteristics, fuel moisture and heat load. Fuel was considered continuous (but for differences in moisture). Duarte investigated the behaviour of the CA and found that the isotropic (windless) variant was associated with undirected percolation. In the presence of wind, the CA belonged to the same universality class (i.e. the broad descriptive category) as directed percolation. Duarte notes that no CA at that time could explain the parabolic headfire shape observed in experimental fires by workers such as Cheney et al. (1993).
Rather than use hard and fast rules to define the states of a CA, Mraz et al. (1999) used the concept of fuzzy logic to incorporate into a 2D CA the descriptive and uncertain knowledge of a system's behaviour obtained from the field. Fuzzy logic is a control-system methodology, based on expert systems, in which rates of change of output variables are given instead of absolute values; it was developed for systems in which input data are necessarily imprecise. Mraz et al. developed cell state rules (simple 'if-then-else' rules) in which input data (such as wind) are 'fuzzified' and output states are stochastic.
Hargrove et al. (2000) developed a probabilistic model of fire spread across a heterogeneous landscape to simulate the ecological effects of large fires in Yellowstone National Park, USA. Utilising a square lattice (each cell 50 m \\(\\times\\) 50 m), they constructed a stochastic model of fire spread in which ignition of the Moore neighbourhood around a burning cell is based upon an ignition probability that is isotropic in no wind and biased in wind (using three classes of wind speed). The authors determined a critical ignition probability (isotropic) of around 0.25 in order for a fire to have a 50% chance of spreading across the lattice. Spotting is modelled on the maximum spotting distance as determined within the SPOT module of the BEHAVE fire behaviour package (Andrews, 1986), using the three wind classes, a 3\\({}^{\\circ}\\) random angle from the prevailing wind direction, and the moisture content of the fuel in the target cell to determine spot-fire ignition probability. Inclusion of spotting dramatically increased the ROS of the fire and the total area burned. Validation of the model, despite considerable historical weather and fire data, has not been undertaken due to difficulties in parameterising the model and the poor resolution of the historical data.
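The wind-biased ignition idea can be sketched as below; the linear form of the bias is an assumption of this illustration, with only the base probability of 0.25 coming from Hargrove et al.'s reported critical value.

```python
import numpy as np

def neighbour_ignition_probs(p0=0.25, wind=(0.0, 0.0)):
    """Per-neighbour ignition probabilities around a burning cell.

    p0 is the isotropic base probability; `wind` is a (dy, dx) vector
    whose alignment with each Moore-neighbour offset raises (downwind)
    or lowers (upwind) that neighbour's ignition probability.
    """
    wy, wx = wind
    probs = {}
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            align = (dy * wy + dx * wx) / np.hypot(dy, dx)
            probs[(dy, dx)] = float(np.clip(p0 * (1.0 + align), 0.0, 1.0))
    return probs
```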
Muzy et al. (2002, 2003, 2005a,b), Dunn and Milne (2004) and Ntaimo et al. (2004) explore the application of existing computational formalisms to the construction of automata for modelling wildland fire spread: Muzy et al. (2002, 2003) and Ntaimo et al. (2004) use the Discrete Event System Specification (DEVS or cell-DEVS), while Dunn and Milne (2004) use CIRCAL. DEVS attempts to capture the processes involved in spatial phenomena (such as fire spread) using an event-based methodology in which a discrete event (such as ignition) at a cell triggers a corresponding discrete process in that cell, which may or may not interact with other cells. CIRCAL is derived from a process algebra developed for electronic circuit integration and in this case provides a rigorous formalism for describing the interactions between concurrently communicating, interacting automata in a discretised landscape, in order to encode the spatial dynamics of fire spread.
Sullivan and Knight (2004) combined a simple two-dimensional, three-state CA for fire spread with a simplified semi-physical model of convection. This model explored the possible interactions between a convection column, the wind field and the fire in order to replicate the parabolic headfire shape observed in experimental grassland fires (Cheney et al., 1993). It used local, semi-stochastic, cell-based spread rules (allowing discontinuous, non-near-neighbour spread), with spread direction based on the vector summation of the mean wind-field vector and a vector from the cell to the centre of convection (as determined by the overall heat output of the fire, recorded in a six-stage convection column above the fire). Fire shapes closely resembled those of fires in open grassland but ROS was not investigated.
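The vector-summation rule can be sketched as a single function; the convection weighting `w_conv` is a placeholder assumption standing in for the heat-output dependence described above.

```python
import numpy as np

def spread_direction(cell_xy, convection_xy, wind_vec, w_conv=1.0):
    """Unit spread-direction vector at a burning cell, formed from the
    sum of the mean wind-field vector and a vector from the cell to
    the centre of convection."""
    to_conv = np.asarray(convection_xy, float) - np.asarray(cell_xy, float)
    norm = np.linalg.norm(to_conv)
    if norm > 0.0:
        to_conv /= norm
    v = np.asarray(wind_vec, float) + w_conv * to_conv
    n = np.linalg.norm(v)
    return v / n if n > 0.0 else v
```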
### Reaction-Diffusion
Reaction-diffusion is a term used in chemistry to describe a process in which two or more chemicals diffuse over a surface and react with one another at the interface between them. In many cases the reaction interface forms a front which moves across the surface of the reactants and can be described using travelling-wave equations. Reaction-diffusion equations are considered among the most general of mathematical models and may be applied to a wide variety of phenomena. A reaction-diffusion equation has two main components: a reaction term that generates energy and a diffusion term by which the energy is dissipated. The general solution of a reaction-diffusion equation is a travelling wave.
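As a generic illustration (not any of the specific fire models discussed below), the sketch integrates the one-dimensional equation \\(u_{t}=Du_{xx}+ru(1-u)\\), whose logistic reaction term and diffusion term together produce a front travelling at a speed close to the classical value \\(2\\sqrt{Dr}\\).

```python
import numpy as np

def travelling_front(nx=400, dx=1.0, dt=0.1, D=1.0, r=1.0, steps=2000):
    """Explicit finite-difference integration of u_t = D*u_xx + r*u*(1-u).

    Stable for dt <= dx**2 / (2*D). Returns the final profile; tracking
    where u crosses 0.5 over time gives the front speed, which can be
    compared with the analytical estimate 2*sqrt(D*r).
    """
    u = np.zeros(nx)
    u[: nx // 10] = 1.0  # 'burnt' (fully reacted) region on the left
    for _ in range(steps):
        lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
        u = u + dt * (D * lap + r * u * (1.0 - u))
        u[0], u[-1] = 1.0, 0.0  # pinned boundary values
    return u
```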
Watt et al. (1995) discussed the reduction of a two-dimensional reaction-diffusion equation describing a simple, idealised fire spread model down to a single spatial dimension. An analytical solution for the temperature, and thence for the speed of the wave solution, which depends on the reaction term and the upper-surface cooling rate, is obtained from the reduced reaction-diffusion equation through algebraic manipulation and linearisation.
Mendez and Liebot (1997) present a mathematical analogue of the cellular automata 'forest fire' model of Drossel and Schwabl, in which they start with a hyperbolic reaction-diffusion equation and then apply particular boundary conditions in order to determine the speed of propagation of a front between unburnt or 'green' trees on one side and 'burnt' trees on the other. It is assumed that in state 0 (all green) and state 1 (all burnt) the system is in equilibrium. Particular abstract constraints on the speed of the front are determined and the nonequilibrium thermodynamics between the two states is examined.
Margerit and Sero-Guillaume (1998) transformed the elliptical growth model of Richards (1990) in order to find an intrinsic expression for fire front propagation. The authors re-write the model in an optic-geometric 'variational form' in which the forest is represented as three-state 'dots' and the passage from unburnt or 'at rest' dots to burning or 'excited' dots is represented by a wave front. This form is a Hamilton-Jacobi equation that, when solved, gives the same result as Richards' model. The authors then attempt to bring physicality to Richards' model by proposing that the wave front is a temperature isotherm of ignition. They put forward two forms: a hyperbolic equation (second time derivative plus Laplacian) and a parabolic equation (first time derivative plus Laplacian), the latter being the standard reaction-diffusion equation (i.e. wave solutions arising from both the production of energy by a reaction and the transport of this energy by thermal diffusion and convection). After some particularly complicated algebra, the authors bring the reaction-diffusion equation back to an elliptical wave solution, showing that Richards' model is actually a special case of the reaction-diffusion equation, with geometry but no physics.
### Diffusion Limited Aggregation
Clarke et al. (1994) proposed a cellular automata model in which the key propagation mechanism is diffusion limited aggregation (DLA). DLA is an aggregation process used to explain the formation of crystals and snowflakes, combining diffusion with a restriction upon the direction of growth. DLA is related to Brownian trees, self-similarity and the generation of fractal shapes (\\(1<D<2\\)). In this case, fire ignitions ('firelets') are sent out from a fire source and survive by igniting new unburnt fuel. The direction of spread of a firelet is determined by a combination of environmental factors (wind direction, slope, fuel). If there is no fuel at a location, the firelet 'dies' and a new firelet is released from the source. This continues until no burnable fuel remains in direct connection with the original fire source. Clarke et al. modified the standard DLA process somewhat, such that a firelet can travel over previously burned fuel cells (so that the fire aggregates from the interior to the outer edge), and wind, fuel and terrain are used to weight or bias the direction of travel of the firelet.
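A minimal sketch of the firelet mechanism is given below; the form of the wind bias and the walk-length cap are assumptions of this illustration.

```python
import numpy as np

def firelet_fire(fuel, source, n_firelets=5000, drift=(0.0, 0.0), rng=None):
    """Grow a burnt cluster by biased random-walk 'firelets'.

    fuel: 2D boolean array, True where burnable fuel exists.
    Firelets leave the source, step randomly (with an optional wind
    drift biasing the step probabilities), walk freely over already
    burnt cells, and ignite the first burnable unburnt cell they
    touch; a firelet reaching an unburnable cell dies and a new one
    is released from the source.
    """
    if rng is None:
        rng = np.random.default_rng()
    ny, nx = fuel.shape
    burnt = np.zeros_like(fuel, dtype=bool)
    burnt[source] = True
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)], dtype=float)
    w = np.clip(1.0 + moves @ np.asarray(drift, float), 0.05, None)
    w /= w.sum()  # step probabilities biased towards the drift vector
    for _ in range(n_firelets):
        y, x = source
        for _ in range(10 * (ny + nx)):  # cap the walk length
            dy, dx = moves[rng.choice(4, p=w)].astype(int)
            y, x = (y + dy) % ny, (x + dx) % nx
            if burnt[y, x]:
                continue              # walk over previously burnt cells
            if fuel[y, x]:
                burnt[y, x] = True    # ignition: firelet succeeds
            break                     # firelet ends on non-burnt cells
    return burnt
```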
The model was calibrated against an experimental fire conducted in 1986 that reached 425 ha and was mapped using infrared remote sensing apparatus every 10 minutes. Environmental conditions from the experimental fire were held constant in the model and the behaviour criteria of the cellular automata adjusted, based on the spatial pattern of the experimental burn, its temperature structure and temporal pattern. Comparison of 100 fire simulations over the duration of the experimental fire found that pixels that did actually burn were predicted to burn about 80% of the time. Clarke et al. naively state that fires burning across the landscape are fractal due to the self-similarity of the fire perimeter, but this ignores the fact that on a landscape scale the fire follows the fuel, and it is the distribution of the fuel that may well be fractal, not the fire as such.
A similar approach was taken by Achtemeier (2003), in which a 'rabbit' acts analogously to a fire, following a set of simple rules dictating behaviour, e.g. rabbits eat food, rabbits jump, and rabbits reproduce. A hierarchy of rules deals with rabbit death, terrain, weather and hazards. An attempt to incorporate rules regarding atmospheric feedbacks is also included. A dendritic pattern of burning similar to that of Clarke et al. (1994) results when conditions for spread are tenuous; strong winds result in a parabolic headfire shape.
### Percolation and fractals
Percolation is a chemical and materials-science process of transport of fluids through porous materials. In mathematics it is a theory concerned with transport through a randomly distributed medium. If the medium is a regular lattice, then percolation theory can be considered the general case of isotropic CA. Unlike CA, percolation can occur on the boundaries of the lattice cells (bond percolation), as well as on the cells themselves (site percolation). Percolation theory has been used for the study of numerous physical phenomena, from CO\\({}_{2}\\) gas bubbles in ice to electrical conductivity. In addition to the CA models applied to wildland fire behaviour described above, several workers have investigated the application of percolation itself to wildland fire behaviour.
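The central quantity in such studies is the critical occupation density above which fire can cross the whole lattice. A minimal sketch of how this is estimated numerically for site percolation follows; for von Neumann neighbours on a 2D lattice the spanning probability rises sharply near the known threshold of approximately 0.593.

```python
import numpy as np

def spans(occupied):
    """True if an occupied cluster connects the top row to the bottom
    row of the lattice (site percolation, von Neumann neighbours)."""
    ny, nx = occupied.shape
    seen = np.zeros_like(occupied, dtype=bool)
    stack = [(0, x) for x in range(nx) if occupied[0, x]]
    while stack:
        y, x = stack.pop()
        if seen[y, x]:
            continue
        seen[y, x] = True
        if y == ny - 1:
            return True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            yy, xx = y + dy, x + dx
            if 0 <= yy < ny and 0 <= xx < nx and occupied[yy, xx]:
                stack.append((yy, xx))
    return False

def spanning_probability(p, n=100, trials=50, rng=None):
    """Fraction of random n-by-n lattices at occupation density p with
    a spanning cluster; rises sharply near the threshold ~0.593."""
    if rng is None:
        rng = np.random.default_rng()
    return np.mean([spans(rng.random((n, n)) < p) for _ in range(trials)])
```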
Beer (1990b); Beer and Enting (1990) investigated the application of isotropic percolation theory to fire spread (without wind) by comparing predictions against a series of laboratory experiments utilising matches (with ignitable heads) placed randomly in a two-dimensional lattice. The intent was to simulate the original percolation work of Albinet et al. (1986) using von Neumann neighbours. They found that while the theory yielded qualitative information about fire spread (e.g. that as the cell occupation density increased toward the critical density, the time of active spread also increased) it was unable to reproduce quantitatively the laboratory results. Effects such as radiant heating from burning clusters of matches and convective narrowing of plumes have no analogue in site/bond percolation where only near-neighbour interactions are involved. They concluded that such models are unlikely to accurately model actual wildfires and that models based on a two-dimensional grid with nearest neighbour ignition rules are too naive.
Nahmias et al. (2000) conducted similar experimental work but on a much larger scale, investigating the critical threshold of fire spread (both with and without wind) across two-dimensional lattices. Utilising both scaled laboratory experiments and field experiments in which the fuel had been manipulated to replicate the combustible/noncombustible distribution of lattice cells in percolation, Nahmias et al. found critical behaviour in the spread of fire across the lattice. In the absence of wind, they found that the value of the critical threshold to be the same as that of percolation theory when first and second neighbours are considered. In the presence of wind, however, they observed interactions to occur far beyond second nearest neighbours which were impossible to predict or control, particularly where clusters of burning cells were involved. They conclude that a simple directed percolation model is not adequate to describe propagation under these conditions.
Ricotta and Retzlaff (2000) and Caldarelli et al. (2001) investigated percolation in wildfires by analysing satellite imagery of burned area (i.e. fire scars). Both found the final burned area of wildfires to be fractal5 (fractal dimension, \\(D_{f}\\simeq 1.9\\)). Caldarelli et al. (2001) found the accessible perimeter to have a fractal dimension of \\(\\simeq 1.3\\) and the burned areas to be denser at their centres. They then show that such fractal shapes can be generated using self-organised 'dynamical' percolation in which a time-dependent probability is used to determine ignition of neighbours. Earlier, McAlpine and Wotton (1993) determined the fractal dimension of 14 fires (a mix of data from unpublished fire reports and the literature) to be \\(\\simeq 1.15\\). They then developed a method to convert perimeter predictions based on elliptical length-to-breadth ratios to a more accurate prediction using the fractal dimension and a given measurement length scale.
Footnote 5: A fractal is a geometric shape which is recursively self-similar (i.e. on all scales), defining an associated length scale such that its dimension is not an integer, i.e. fractional

Favier (2004) developed an isotropic bond percolation model of fire spread in which two time-related parameters, medium ignitability (the time needed for the medium to ignite) and site combustibility (the time taken for the medium to burn once ignited), are the key controlling factors of fire spread. Favier determined the critical values of these parameters and found fractal patterns similar to those of Caldarelli et al. (2001).
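The fractal dimensions quoted above are typically estimated by box counting; the sketch below applies the standard method to a binary burn map (it assumes the mask is non-empty at the coarsest box size).

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a binary mask.

    For each box size s, count the s-by-s boxes containing any part of
    the shape; the dimension is the slope of log(count) vs log(1/s).
    """
    counts = []
    for s in sizes:
        ny, nx = mask.shape
        trimmed = mask[: ny - ny % s, : nx - nx % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s,
                                 trimmed.shape[1] // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                          np.log(counts), 1)
    return slope
```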
### Other methods
Other mathematical methods that have been used to model the spread of wildland fires include artificial neural networks (McCormick et al., 1999), in which a large number of processing nodes (neurons) store information (weightings) in the connections between the nodes. The weighting for each connection between the nodes is learnt by repeated comparison of input data to output data. Weightings can capture non-linear relations, but the approach necessarily needs a large dataset on which to learn and assumes that the dataset is complete and comprehensive. A related field of endeavour is genetic algorithms (Karafyllidis, 1999, 2004), in which a pool of potential algorithms is mutated (either randomly or in a directed fashion) and tested against particular criteria (fitness) to determine the subset of algorithms that will form the next generation of mutating algorithms. The process continues until an algorithm evolves that can carry out the specific task. This may lead to local optimisations of the algorithm that fail to perform globally. Again, the process depends on a complete and comprehensive dataset on which to breed.
Catchpole et al. (1989) modelled the spread of fire through a heterogeneous fuel bed as a spatial Markov chain. A Markov chain is a discrete stochastic process in which the probability of the system's next state depends only on its current state, not on the sequence of states that preceded it. In this context, Catchpole et al. treated, in one dimension, a heterogeneous fuel as a series of discrete, internally homogeneous fuel cells in which the variation of fuel type is a Markov chain with a given transition matrix. The rate of spread in each homogeneous fuel cell is constant, related only to the fuel type of that cell and to the spread rate of the previous cell. The time for a fire to travel through the _n_th cell of the chain is then a conditional probability based on the transition matrix. Taplin (1993) noted that spatial dependence of fuel types can greatly influence the variance of the rate of spread thus predicted and expanded the original model to include the effect of uncertainty in the spread rate of different fuel types.
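A minimal sketch of this construction follows; the transition matrix and spread rates are illustrative, and the conditioning of each cell's spread rate on the previous cell's rate is omitted for brevity.

```python
import numpy as np

def traversal_time(T, ros, cell_len=1.0, n_cells=100, rng=None):
    """Simulate fire traversal of a 1D heterogeneous fuel bed whose
    fuel types form a Markov chain.

    T: transition matrix, T[i, j] = P(next cell is type j | type i).
    ros: rate of spread assigned to each fuel type.
    Returns the total time to cross n_cells cells.
    """
    if rng is None:
        rng = np.random.default_rng()
    k = len(ros)
    state = rng.integers(k)
    t = 0.0
    for _ in range(n_cells):
        t += cell_len / ros[state]          # time to cross this cell
        state = rng.choice(k, p=T[state])   # Markov step to next type
    return t

# Example: two fuel types with clustered (persistent) spatial structure.
T = np.array([[0.8, 0.2],
              [0.3, 0.7]])
ros = np.array([0.5, 2.0])  # hypothetical spread rates, m/s
```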
The 'forest-fire' model of self-organised criticality (discussed above) has led to a variety of methods to investigate the behaviour of such models. These include renormalisation group (Loreto et al., 1995), bifurcation analysis (Dercole and Maggi, 2005), and small-world networks (Graham and Matthai, 2003).
## 4 Discussion
The field of computer simulation of fire spread is almost as old as the field of fire spread modelling itself, and certainly as old as the field of computer simulation. While the technology of computing has advanced considerably in those years, the methods and approaches of simulating the spread of wildland fire across the landscape have not changed significantly since the early work of Kourtz and O'Regan (1971) on a CDC6400 computer. What has changed significantly is access to geographic data and the level of complexity that can be undertaken computationally. The two areas of research covered in this paper, computer simulation of fire spread and the application of mathematical analogues to fire spread modelling, are very closely related; so much so that key methods can be found in both approaches (e.g. raster modelling and cellular automata (percolation)).
The discussion of simulation techniques concentrated on the various methods of converting existing point (or one-dimensional) models of the rate of forward spread of a fire to two dimensions across a landscape. The most widely used method is Huygens' wavelet principle, which has been used in both vector (line) and raster (cell) simulation models. The critical aspect of this method is the choice of a template shape for the spawned wavelet (or firelet). The most common is the simple ellipse, but Richards (1995) showed that other shapes are also applicable.
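One step of the template-ellipse method (cf. Figures 1 and 2) can be sketched as follows. Taking the convex hull as the 'outer envelope' of the new ellipses is a simplification of this illustration, valid only for convex fire shapes; operational implementations such as Richards (1995) use more careful perimeter reconstruction.

```python
import numpy as np

def huygens_step(perimeter, ros, lb, wind_dir, dt, n_pts=24):
    """Propagate a fire perimeter one step using a template ellipse.

    perimeter : (N, 2) array of node coordinates.
    ros       : predicted forward rate of spread, so that a + c equals
                ros * dt as in Figure 1; lb is the length:breadth
                ratio (a/b). Each node spawns an ellipse with its rear
                focus on the node, aligned with wind_dir (radians).
    """
    ecc = np.sqrt(max(1.0 - 1.0 / lb**2, 0.0))  # c = ecc * a
    a = ros * dt / (1.0 + ecc)                  # forward extent a + c
    b, c = a / lb, ecc * a
    th = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    ex, ey = a * np.cos(th) + c, b * np.sin(th)  # rear focus at origin
    cw, sw = np.cos(wind_dir), np.sin(wind_dir)
    pts = [np.c_[x0 + cw * ex - sw * ey, y0 + sw * ex + cw * ey]
           for x0, y0 in perimeter]
    return convex_hull(np.vstack(pts))

def convex_hull(pts):
    """Andrew's monotone-chain convex hull of a 2D point set."""
    pts = pts[np.lexsort((pts[:, 1], pts[:, 0]))]
    def turn(o, p, q):  # z-component of (p - o) x (q - o)
        return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and turn(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    return np.array(chain(pts)[:-1] + chain(pts[::-1])[:-1])
```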
French (1992) found that vector-based simulations produce a much more realistic fire perimeter, particularly in off-axis spread, than raster-based simulations. However, raster-based simulations are more proficient at dealing with heterogeneous fuels than vector-based models. Historically, the requirement of raster-based models for raster geographic data at a resolution high enough to obtain meaningful simulations has meant that vector-based models have been favoured, but the reducing cost of computer storage has seen a swing in favour of raster-based models. Increasingly, the choice of fire spread model type is driven by the geographic information system (GIS) platform in which the geographic data resides, making the decision between the two moot.
Alternative fire propagation methods such as that of Clark et al. (2004) are restricted by the specificity of the model in which the method is embedded. The coupled fire-atmosphere model is unique in the field of fire spread simulation in that it links a fully formed 3D mesoscale meteorological model of the atmosphere with a 1D fire spread model. In order to make the simulation work, the authors had to devise methods of propagating the fire perimeter at a resolution much smaller than the smallest resolvable area of the coupled model. The result is a method specific to that model, and one which simply replaces the broad, generalised approach of the template ellipse with a much more fundamental (but not necessarily more correct) variable spread direction around the perimeter.
The broad research area of mathematics has yielded many mathematical functions that could be seen as possible analogues of the spread of fire across the landscape. The most prevalent of these is the cellular automata model (or the much more general percolation theory). These models are well suited to the spatial propagation of an entity across a medium and have thus found application in modelling epidemic disease spread, crowd motion, traffic and fire spread across the landscape. The approaches taken in such modelling range from attempting to model the thermodynamic balance of energy in combusting cells to treating the fire as a contagion with no inherent fire behaviour whatsoever apart from a moving perimeter. The latter models have found great use in the exploration of critical behaviour and self-organisation.
Other approaches, such as reaction-diffusion models, have had a more physical basis for the modelling of fire spread, but their application to fire assumes that fire behaviour is essentially a spatially continuous process that does not include any discontinuous processes such as spotting. Models based on this mathematical conceit generally have to have specific components fitted to the original model to incorporate fundamental combustion processes such as spotting, non-local radiation, convection, etc.
While the mathematical analogue models discussed here may appear to have little in common with real fire behaviour, they do offer a different perspective for investigating wildland fire behaviour, and in many cases are more computationally feasible in better than real time than many physical or quasi-physical models. Divorced as they are from the real world, approaches such as percolation theory and cellular automata provide a reasonable platform for representing the heterogeneity of fuels across the landscape. This is confirmed by the fact that many GIS platforms take such a raster view of the world. However, simulating the spread of a fire across this (possibly 3D) surface requires the development of rules of propagation that incorporate both local (traditional CA) rules and larger-scale global rules, in order to replicate the local and non-local physical processes involved in fire spread. Foremost of these is the influence of the convection column above a fire. The perimeter of a fire, although it may seem to be a loosely linked collection of independently burning segments of fuel, actually behaves as a continuous entity in which the behaviour of neighbouring fuel elements affects the behaviour of any single fuel element. Other non-local interactions include spotting and convective and radiative heating of unburnt fuels.
Regardless of the method of fire propagation simulation that is used, the underlying fire spread model that determines the behaviour of the simulation is the critical element. The preceding two papers in this series discussed the various types of model, from purely physical to quasi-physical and from quasi-empirical to purely empirical, that have been developed since 1990. It is the veracity, verification and validation of these models that will dictate the quality of any fire spread simulation technique used to implement them. However, the performance of even the most accurate fire spread model will be limited, when applied in a simulation, by the quality of the input data required to run it. The greater the precision of a fire spread model, the greater the precision of the geographical, topographical, meteorological and fuel data needed to achieve that precision; one does not want a model's prediction to fail simply because there was a tree beside a road that was not present in the input data! The need for ever greater precision in fire spread models should be mitigated to a certain extent by the requirements of the end purpose and by quantification of the range of errors to be expected in any prediction.
As in the original attempts at 'simulation' using a single estimate of forward spread rate and a wall map to plot likely head-fire locations, in many instances highly precise predictions of fire spread are just not warranted, considering the cost of the prediction and of access to suitable high-quality data. For the most part, 'ball-park' predictions of fire spread and behaviour (not unlike those found in current operational prediction systems) are more than enough for most purposes. An understanding of the range of errors, not only in the input data but also in the prediction itself, will perhaps be a more efficient and effective use of resources.
In the end, a point will be reached at which the computational and data costs of further increases in prediction accuracy and precision will outweigh the benefits they bring to cost-effective suppression.
## Acknowledgements
I would like to acknowledge Ensis Bushfire Research and the CSIRO Centre for Complex Systems Science for supporting this project; Jim Gould and Rowena Ball for comments on the draft manuscript; and the members of Ensis Bushfire Research who ably assisted in the refereeing process, namely Miguel Cruz and Ian Knight.
## References
* Achtemeier (2003) Achtemeier, G. L. (2003). \"Rabbit Rules\"-An application of Stephen Wolfram's \"New Kind of Science\" to fire spread modelling. In _Fifth Symposium on Fire and Forest Meteorology, 16-20 November 2003, Orlando, Florida_. American Meteorological Society.
* Albinet et al. (1986) Albinet, G., Searby, G., and Stauffer, D. (1986). Fire propagation in a 2-d random medium. _Le Journal de Physique_, 47:1-7.
* Albini (1979) Albini, F. (1979). Spot fire distance from burning trees-a predictive model. General Technical Report INT-56, USDA Forest Service, Intermountain Forest and Range Experimental Station, Odgen UT.
* Albini et al. (1995) Albini, F., Brown, J., Reinhardt, E., and Ottmar, R. (1995). Calibration of a large fuel burnout model. _International Journal of Wildland Fire_, 5:173-192.
* Albini and Reinhardt (1995) Albini, F. and Reinhardt, E. (1995). Modeling ignition and burning rate of large woody natural fuels. _International Journal of Wildland Fire_, 5:81-91.
* Alexander (1985) Alexander, M. (1985). Estimating the length to breadth ratio of elliptical forest fire patterns. In _Proceedings of the Eighth Conference on Forest and Fire Meteorology_, pages 287-304. Society of American Foresters.
* Anderson et al. (1982) Anderson, D., Catchpole, E., de Mestre, N., and Parkes, T. (1982). Modelling the spread of grass fires. _Journal of Australian Mathematics Society, Series B_, 23:451-466.
* Andrews (1986) Andrews, P. (1986). BEHAVE: fire behavior prediction and fuel modeling system - burn subsystem, part 1. General Technical Report INT-194, 130 pp., USDA Forest Service, Intermountain Forest and Range Experiment Station, Ogden, UT.
* Asensio and Ferragut (2002) Asensio, M. and Ferragut, L. (2002). On a wildland fire model with radiation. _International Journal for Numerical Methods in Engineering_, 54(1):137-157.
* Asensio et al. (2005) Asensio, M., Ferragut, L., and Simon, J. (2005). A convection model for fire spread simulation. _Applied Mathematics Letters_, 18:673-677.
* Bak (1996) Bak, P. (1996). _How Nature Works: The Science of Self-organised Criticality_. Springer-Verlag Telos, New York, USA.
* Bak et al. (1990) Bak, P., Chen, K., and Tang, C. (1990). A forest-fire model and some thoughts on turbulence. _Physics Letters A_, 147(5-6):297-300.
* Bak et al. (1987) Bak, P., Tang, C., and Wiesenfeld, K. (1987). Self-organised criticality: An explanation of 1/f noise. _Physical Review Letters_, 59(4):381-384.
* Ball and Guertin (1992) Ball, G. and Guertin, D. (1992). Improved fire growth modeling. _International Journal of Wildland Fire_, 2(2):47-54.
* Beer (1990a) Beer, T. (1990a). The australian national bushfire model project. _Mathematical and Computer Modelling_, 13(12):49-56.
* Beer (1990b) Beer, T. (1990b). Percolation theory and fire spread. _Combustion Science and Technology_, 72:297-304.
* Beer and Enting (1990) Beer, T. and Enting, I. (1990). Fire spread and percolation modelling. _Mathematical and Computer Modelling_, 13(11):77-96.
* Berjak and Hearne (2002) Berjak, S. and Hearne, J. (2002). An improved cellular automaton model for simulating fire in a spatially heterogeneous savanna system. _Ecological Modelling_, 148(2):133-151.
* Caldarelli et al. (2001) Caldarelli, G., Frondoni, R., Gabrielli, A., Montuori, M., Retzlaff, R., and Ricotta, C. (2001). Percolation in real wildfires. _Europhysics Letters_, 56(4):510-516.
* Catchpole et al. (1989) Catchpole, E., Hatton, T., and Catchpole, W. (1989). Fire spread through nonhomogeneous fuel modelled as a Markov process. _Ecological Modelling_, 48:101-112.
* Chen et al. (1990) Chen, K., Bak, P., and Jensen, M. (1990). A deterministic critical forest fire model. _Physics Letters A_, 149(4):4.
* Cheney (1981) Cheney, N. (1981). Fire behaviour. In Gill, A., Groves, R., and Noble, I., editors, _Fire and the Australian Biota_, chapter 5, pages 151-175. Australian Academy of Science, Canberra.
* Cheney et al. (1993) Cheney, N., Gould, J., and Catchpole, W. (1993). The influence of fuel, weather and fire shape variables on fire-spread in grasslands. _International Journal of Wildland Fire_, 3(1):31-44.
* Cheney et al. (1998) Cheney, N., Gould, J., and Catchpole, W. (1998). Prediction of fire spread in grasslands. _International Journal of Wildland Fire_, 8(1):1-13.
* Clar et al. (1994) Clar, S., Drossel, B., and Schwabl, F. (1994). Scaling laws and simulation results for the self-organized critical forest-fire model. _Physical Review E_, 50:1009-1018.
* Clark et al. (2004) Clark, T., Coen, J., and Latham, D. (2004). Description of a coupled atmosphere-fire model. _International Journal of Wildland Fire_, 13(1):49-63.
* Clark et al. (1998) Clark, T., Coen, J., Radke, L., Reeder, M., and Packham, D. (1998). Coupled atmosphere-fire dynamics. In _III International Conference on Forest Fire Research. 14th Conference on Fire and Forest Meteorology Luso, Portugal, 16-20 November 1998. Vol 1._, pages 67-82.
* Clark et al. (1996a) Clark, T. L., Jenkins, M. A., Coen, J., and Packham, D. (1996a). A coupled atmosphere-fire model: Convective feedback on fire-line dynamics. _Journal of Applied Meteorology_, 35(6):875-901.
* Clark et al. (1996b) Clark, T. L., Jenkins, M. A., Coen, J. L., and Packham, D. R. (1996b). A coupled atmosphere-fire model: Role of the convective Froude number and dynamic fingering at the fireline. _International Journal of Wildland Fire_, 6(4):177-190.
* Clarke et al. (1994) Clarke, K. C., Brass, J. A., and Riggan, P. J. (1994). A cellular automaton model of wildfire propagation and extinction. _Photogrammetric Engineering and Remote Sensing_, 60(11):1355-1367.
* Coen (2005) Coen, J. L. (2005). Simulation of the Big Elk Fire using coupled atmosphere/fire modeling. _International Journal of Wildland Fire_, 14(1):49-59.
* Coen and Clark (2000) Coen, J. L. and Clark, T. L. (2000). Coupled atmosphere-fire model dynamics of a fireline crossing a hill. In _Third Symposium on Fire and Forest Meteorology, 9-14 January 2000, Long Beach, California._, pages 7-10.
* Coleman and Sullivan (1996) Coleman, J. and Sullivan, A. (1996). A real-time computer application for the prediction of fire spread across the australian landscape. _Simulation_, 67(4):230-240.
* CWFGM Steering Committee (2004) CWFGM Steering Committee (2004). _Prometheus User Manual v.3.0.1_. Canadian Forest Service.
* Dercole and Maggi (2005) Dercole, F. and Maggi, S. (2005). Detection and continuation of a border collision bifurcation in a forest fire model. _Applied Mathematics and Computation_, 168:623-635.
* Drossel (1996) Drossel, B. (1996). Self-organized criticality and synchronisation in a forest-fire model. _Physical Review Letters_, 76(6):936-939.
* Drossel and Schwabl (1992) Drossel, B. and Schwabl, F. (1992). Self-organized critical forest-fire model. _Physical Review Letters_, 69:1629-1632.
* Drossel and Schwabl (1993) Drossel, B. and Schwabl, F. (1993). Forest-fire model with immune trees. _Physica A_, 199:183-197.
* Duarte (1997) Duarte, J. (1997). Bushfire automata and their phase transitions. _International Journal of Modern Physics C_, 8(2):171-189.
* Dunn and Milne (2004) Dunn, A. and Milne, G. (2004). Modelling wildfire dynamics via interacting automata. In Sloot et al. (2004), pages 395-404.
* Eklund (2001) Eklund, P. (2001). A distributed spatial architecture for bush fire simulation. _International Journal of Geographical Information Science_, 15(4):363-378.
* Emmons (1963) Emmons, H. (1963). Fire in the forest. _Fire Research Abstracts and Reviews_, 5(3):163-178.
* Emmons (1966) Emmons, H. (1966). Fundamental problems of the free burning fire. _Fire Research Abstracts and Reviews_, 8(1):1-17.
* Enting (1977) Enting, I. (1977). Crystal growth models and ising models: disorder points. _Journal of Physics C: Solid State Physics_, 10:1379-1388.
* Favier (2004) Favier, C. (2004). Percolation model of fire dynamic. _Physics Letters A_, 330(5):396-401.
* Finney (1994) Finney, M. (1994). Modeling the spread and behaviour of prescribed natural fires. In _Proceedings of the 12th Conference on Fire and Forest Meteorology, October 26-28 1993, Jekyll Island, Georgia_, pages 138-143.
* Finney (1998) Finney, M. (1998). FARSITE: Fire area simulator-model development and evaluation. Technical Report Research Paper RMRS-RP-4, USDA Forest Service.
* Forestry Canada Fire Danger Group (1992) Forestry Canada Fire Danger Group (1992). Development and structure of the Canadian Forest Fire Behavior Prediction System. Information Report ST-X-3, Forestry Canada Science and Sustainable Development Directorate, Ottawa, ON.
* Frandsen and Andrews (1979) Frandsen, W. and Andrews, P. (1979). Fire behaviour in non-uniform fuels. Research Paper INT-232, USDA Forest Service, Intermountain Forest and Range Experiment Station.
* French (1992) French, I. (1992). Visualisation techniques for the computer simulation of bushfires in two dimensions. Master's thesis, Department of Computer Science, University College, University of New South Wales, Australian Defence Force Academy.
* French et al. (1990) French, I., Anderson, D., and Catchpole, E. (1990). Graphical simulation of bushfire spread. _Mathematical and Computer Modelling_, 13(12):67-71.
* Gardner (1970) Gardner, M. (1970). The fantastic combinations of John Conway's new solitary game of \"Life\". _Scientific American_, 222:120-123.
* Graham and Matthai (2003) Graham, I. and Matthai, C. C. (2003). Investigation of the forest-fire model on a small-world network. _Physical Review E_, 68(3):036109.
* Grassberger (2002) Grassberger, P. (2002). Critical behaviour of the drossel-schwabl forest fire model. _New Journal of Physics_, 4:17.1-17.15.
* Green (1983) Green, D. (1983). Shapes of simulated fires in discrete fuels. _Ecological Modelling_, 20(1):21-32.
* Green et al. (1983) Green, D., Gill, A., and Noble, I. (1983). Fire shapes and the adequacy of fire-spread models. _Ecological Modelling_, 20(1):33-45.
* Green et al. (1990) Green, D., Tridgell, A., and Gill, A. (1990). Interactive simulation of bushfires in heterogeneous fuels. _Mathematical and Computer Modelling_, 13(12):57-66.
* Grishin (1997) Grishin, A. (1997). _Mathematical modeling of forest fires and new methods of fighting them_. Publishing House of Tomsk State University, Tomsk, Russia, english translation edition. Translated from Russian by Marek Czuma, L Chikina and L Smokotina.
* Guariso and Baracani (2002) Guariso, G. and Baracani (2002). A simulation software of forest fires based on two-level cellular automata. In Viegas, D., editor, _Proceedings of the IV International Conference on Forest Fire Research 2002 Wildland Fire Safety Summit, Luso, Portugal, 18-23 November 2002_.
* Gurer and Georgopoulos (1998) Gurer, K. and Georgopoulos, P. G. (1998). Numerical modeling of forest fires within a 3-d meteorological/dispersion model. In _Second Symposium on Fire and Forest Meteorology, 11-16 January 1998, Phoenix, Arizona_, pages 144-148.
* Hargrove et al. (2000) Hargrove, W., Gardner, R., Turner, M., Romme, W., and Despain, D. (2000). Simulating fire patterns in heterogeneous landscapes. _Ecological Modelling_, 135(2-3):243-263.
* Hogeweg (1988) Hogeweg, P. (1988). Cellular automata as a paradigm for ecological modeling. _Applied Mathematics and Computation_, 27:81-100.
* Jenkins et al. (2001) Jenkins, M. A., Clark, T., and Coen, J. (2001). Coupling atmospheric and fire models. In Johnson, E. and Miyanishi, K., editors, _Forest Fires: Behaviour and Ecological Effects_, chapter 5, pages 257-302. Academic Press, San Diego, CA, 1st edition.
* Kalabokidis et al. (1991) Kalabokidis, K., Hay, C., and Hussin, Y. (1991). Spatially resolved fire growth simulation. In _Proceedings of the 11th Conference on Fire and Forest Meteorology, April 16-19 1991, Missoula, MT_, pages 188-195.
* Karafyllidis (1999) Karafyllidis, I. (1999). Acceleration of cellular automata algorithms using genetic algorithms. _Advances in Engineering Software_, 30(6):419-437.
* Karafyllidis (2004) Karafyllidis, I. (2004). Design of a dedicated parallel processor for the prediction of forest fire spreading using cellular automata and genetic algorithms. _Engineering Applications of Artificial Intelligence_, 17(1):19-36.
* Karafyllidis and Thanailakis (1997) Karafyllidis, I. and Thanailakis, A. (1997). A model for predicting forest fire spreading using cellular automata. _Ecological Modelling_, 99(1):87-97.
* Karplus (1977) Karplus, W. J. (1977). The spectrum of mathematical modeling and systems simulation. _Mathematics and Computers in Simulation_, 19(1):3-10.
* King (1971) King, N. (1971). Simulation of the rate of spread of an aerial prescribed burn. _Australian Forest Research_, 6(2):1-10.
* Knight and Coleman (1993) Knight, I. and Coleman, J. (1993). A fire perimeter expansion algorithm based on Huygens' wavelet propagation. _International Journal of Wildland Fire_, 3(2):73-84.
* Kourtz et al. (1977) Kourtz, P., Nozaki, S., and O'Regan, W. (1977). Forest fires in a computer : A model to predict the perimeter location of a forest fire. Technical Report Information Report FF-X-65, Fisheries and Environment Canada.
* Kourtz and O'Regan (1971) Kourtz, P. and O'Regan, W. (1971). A model for a small forest fire to simulate burned and burning areas for use in a detection model. _Forest Science_, 17(1):163-169.
* Langton (1990) Langton, C. G. (1990). Computation at the edge of chaos: Phase transitions and emergent computation. _Physica D: Nonlinear Phenomena_, 42(1-3):12-37.
* Lee (1972) Lee, S. (1972). Fire research. _Applied Mechanical Reviews_, 25(3):503-509.
* Li and Magill (2000) Li, X. and Magill, W. (2000). Modelling fire spread under environmental influence using a cellular automaton approach. _Complexity International_, 8:(14 pages).
* Li and Magill (2003) Li, X. and Magill, W. (2003). Critical density in a fire spread model with varied environmental conditions. _International Journal of Computational Intelligence and Applications_, 3(2):145-155.
* Lopes et al. (1998) Lopes, A., Cruz, M., and Viegas, D. (1998). Firestation-an integrated system for the simulation of wind flow and fire spread over complex topography. In _III International Conference on Forest Fire Research. 14th Conference on Fire and Forest Meteorology Luso, Portugal, 16-20 November 1998. Vol 1._, pages 741-754.
* Lopes et al. (2002) Lopes, A., Cruz, M., and Viegas, D. (2002). Firestation-an integrated software system for the numerical simulation of fire spread on complex topography. _Environmental Modelling & Software_, 17(3):269-285.
* Loreto et al. (1995) Loreto, V., Pietronero, L., Vespignani, A., and Zapperi, S. (1995). Renormalisation group approach to the critical behaviour of the forest-fire model. _Physical Review Letters_, 75(3):465-468.
* Malamud et al. (1998) Malamud, B., Morein, G., and Turcotte, D. (1998). Forest fires: An example of self-organised critical behaviour. _Science_, 281:1840-1842.
* Malamud and Turcotte (1999) Malamud, B. and Turcotte, D. (1999). Self-organised criticality applied to natural hazards. _Natural Hazards_, 20:93-116.
* Margerit and Sero-Guillaume (1998) Margerit, J. and Sero-Guillaume, O. (1998). Richards' model, Hamilton-Jacobi equations and temperature field equations of forest fires. In _III International Conference on Forest Fire Research. 14th Conference on Fire and Forest Meteorology Luso, Portugal, 16-20 November 1998. Vol 1._, pages 281-294.
* McAlpine and Wotton (1993) McAlpine, R. and Wotton, B. (1993). The use of fractal dimension to improve wildland fire perimeter predictions. _Canadian Journal of Forest Research_, 23:1073-1077.
* McArthur (1966) McArthur, A. (1966). Weather and grassland fire behaviour. Technical Report Leaflet 100, Commonwealth Forestry and Timber Bureau, Canberra.
* McArthur (1967) McArthur, A. (1967). Fire behaviour in eucalypt forests. Technical Report Leaflet 107, Commonwealth Forestry and Timber Bureau, Canberra.
* McCormick et al. (1999) McCormick, R. J., Brandner, T. A., and Allen, T. F. H. (1999). Toward a theory of meso-scale wildfire modeling - a complex systems approach using artificial neural networks. In _Proceedings of the Joint Fire Science Conference and Workshop, June 15-17, 1999, Boise, ID_. [http://jfsp.nifc.gov/conferenceproc/](http://jfsp.nifc.gov/conferenceproc/).
* Mendez and Liebot (1997) Mendez, V. and Liebot, J. E. (1997). Hyperbolic reaction-diffusion equations for a forest fire model. _Physical Review E_, 56(6):6557-6563.
* Morvan et al. (2004) Morvan, D., Larini, M., Dupuy, J., Fernandes, P., Miranda, A., Andre, J., Sero-Guillaume, O., Calogine, D., and Cuinas, P. (2004). EUFIRELAB: Behaviour modelling of wildland fires: a state of the art. Deliverable D-03-01, EUFIRELAB. 33 p.
* Mraz et al. (1999) Mraz, M., Zimic, N., and Virant, J. (1999). Intelligent bush fire spread prediction using fuzzy cellular automata. _Journal of Intelligent and Fuzzy Systems_, 7(2):203-207.
* Muzy et al. (2005a) Muzy, A., Innocenti, E., Aiello, A., Santucci, J., Santoni, P., and Hill, D. (2005a). Modelling and simulation of ecological propagation processes: application to fire spread. _Environmental Modelling and Software_, 20(7):827-842.
* Muzy et al. (2002) Muzy, A., Innocenti, E., Aiello, A., Santucci, J.-F., and Wainer, G. (2002). Cell-devs quantization techniques in a fire spreading application. In Ycesan, E., Chen, C.-H., Snowdon, J., and Charnes, J., editors, _Proceedings of the 2002 Winter Simulation Conference_.
* Muzy et al. (2005b) Muzy, A., Innocenti, E., Aiello, A., Santucci, J.-F., and Wainer, G. (2005b). Specification of discrete event models for fire spreading. _Simulation_, 81(2):103-117.
* Muzy et al. (2003) Muzy, A., Innocenti, E., Santucci, J. F., and Hill, D. R. C. (2003). Optimization of cell spaces simulation for the modeling of fire spreading. In _Annual Simulation Symposium_, pages 289-296.
* Nahmias et al. (2000) Nahmias, J., Tephany, H., Duarte, J., and Letaconnoux, S. (2000). Fire spreading experiments on heterogeneous fuel beds. Applications of percolation theory. _Canadian Journal of Forest Research_, 30(8):1318-1328.
* Nelson (2000) Nelson, R. M., Jr. (2000). Prediction of diurnal change in 10-h fuel stick moisture content. _Canadian Journal of Forest Research_, 30(7):1071-1087.
* Noble et al. (1980) Noble, I., Bary, G., and Gill, A. (1980). McArthur's fire-danger meters expressed as equations. _Australian Journal of Ecology_, 5:201-203.
* Ntaimo et al. (2004) Ntaimo, L., Zeigler, B. P., Vasconcelos, M. J., and Khargharia, B. (2004). Forest fire spread and suppression in devs. _Simulation_, 80(10):479-500.
* Pastor et al. (2003) Pastor, E., Zarate, L., Planas, E., and Arnaldos, J. (2003). Mathematical models and calculation systems for the study of wildland fire behaviour. _Progress in Energy and Combustion Science_, 29(2):139-153.
* Pastor-Satorras and Vespignani (2000) Pastor-Satorras, R. and Vespignani, A. (2000). Corrections to the scaling in the forest-fire model. _Physical Review E_, 61:4854-4859.
* Peet (1965) Peet, G. (1965). A fire danger rating and controlled burning guide for the northern jarrah (euc. marginata sm.) forest of western australia. Technical Report Bulletin No 74, Forests Department, Perth, Western Australia.
* Perry (1998) Perry, G. (1998). Current approaches to modelling the spread of wildland fire: a review. _Progress in Physical Geography_, 22(2):222-245.
* Perry et al. (1999) Perry, G. L., Sparrow, A. D., and Owens, I. F. (1999). A gis-supported model for the simulation of the spatial structure of wildland fire, cass basin, new zealand. _Journal of Applied Ecology_, 36(4):502-502.
* Pielke et al. (1992) Pielke, R. A. and co-authors (1992). A comprehensive meteorological modeling system - RAMS. _Meteorology and Atmospheric Physics_, 49:69-91.
* Plourde et al. (1997) Plourde, F., Doan-Kim, S., Dumas, J., and Malet, J. (1997). A new model of wildland fire simulation. _Fire Safety Journal_, 29(4):283-299.
* Pruessner and Jensen (2002) Pruessner, G. and Jensen, H. J. (2002). Broken scaling in the forest-fire model. _Physical Review E_, 65(5):056707 (8 pages).
* Reed and McKelvey (2002) Reed, W. and McKelvey, K. (2002). Power-law behaviour and parametric models for the size-distribution of forest fires. _Ecological Modelling_, 150(3):239-254.
* Rhodes and Anderson (1998) Rhodes, C. and Anderson, R. (1998). Forest-fire as a model for the dynamics of disease epidemics. _Journal of The Franklin Institute_, 335B(2):199-211.
* Richards (1990) Richards, G. (1990). An elliptical growth model of forest fire fronts and its numerical solution. _International Journal for Numerical Methods in Engineering_, 30:1163-1179.
* Richards (1995) Richards, G. (1995). A general mathematical framework for modeling two-dimensional wildland fire spread. _International Journal of Wildland Fire_, 5:63-72.
* Richards and Bryce (1996) Richards, G. and Bryce, R. (1996). A computer algorithm for simulating the spread of wildland fire perimeters for heterogeneous fuel and meteorological conditions. _International Journal of Wildland Fire_, 5(2):73-79.
* Ricotta and Retzlaff (2000) Ricotta, C. and Retzlaff, R. (2000). Self-similar spatial clustering of wildland fires: the example of a large wildfire in Spain. _International Journal of Remote Sensing_, 21(10):2113-2118.
* Rothermel (1972) Rothermel, R. (1972). A mathematical model for predicting fire spread in wildland fuels. Research Paper INT-115, USDA Forest Service.
* Rothermel (1991) Rothermel, R. (1991). Predicting behavior and size of crown fires in the northern rocky mountains. Research Paper INT-438, USDA Forest Service.
* Schenk et al. (2000) Schenk, K., Drossel, B., Clar, S., and Schwabl, F. (2000). Finite-size effects in the self-organised critical forest-fire model. _European Physical Journal B_, 15:177-185.
* Schenk et al. (2002) Schenk, K., Drossel, B., and Schwabl, F. (2002). Self-organized critical forest-fire model on large scales. _Physical Review E_, 65(2):026135 (8 pages).
* Seron et al. (2005) Seron, F., Gutierrez, D., Magallon, J., Ferragut, L., and Asensio, M. (2005). The evolution of a wildland forest fire front. _The Visual Computer_, 21:152-169.
* Sloot et al. (2004) Sloot, P. M. A., Chopard, B., and Hoekstra, A. G., editors (2004). _Cellular Automata, 6th International Conference on Cellular Automata for Research and Industry, ACRI 2004, Amsterdam, The Netherlands, October 25-28, 2004, Proceedings_, volume 3305 of _Lecture Notes in Computer Science_. Springer.
* Speer et al. (2001) Speer, M., Leslie, L., Morison, R., Catchpole, W., Bradstock, R., and Bunker, R. (2001). Modelling fire weather and fire spread rates for two bushfires near sydney. _Australian Meteorological Magazine_, 50(3):241-246.
* Sullivan (2007a) Sullivan, A. (2007a). A review of wildland fire spread modelling, 1990-present, 1: Physical and quasi-physical models. arXiv:0706.3074v1[physics.geo-ph], 46 pp.
* Sullivan (2007b) Sullivan, A. (2007b). A review of wildland fire spread modelling, 1990-present, 2: Empirical and quasi-empirical models. arXiv:0706.4128v1[physics.geo-ph], 32 pp.
* Sullivan and Knight (2004) Sullivan, A. and Knight, I. (2004). A hybrid cellular automata/semi-physical model of fire growth. In _Proceedings of the 7th Asia-Pacific Conference on Complex Systems, 6-10 December 2004, Cairns_, pages 64-73.
* Taplin (1993) Taplin, R. (1993). Sources of variation for fire spread rate in non-homogeneous fuel. _Ecological Modelling_, 68:205-211.
* Trunfio (2004) Trunfio, G. A. (2004). Predicting wildfire spreading through a hexagonal cellular automata model. In Sloot et al. (2004), pages 385-394.
* Van Wagner (1969) Van Wagner, C. (1969). A simple fire-growth model. _The Forestry Chronicle_, 45(1):103-104.
* Van Wagner (1977) Van Wagner, C. (1977). Conditions for the start and spread of crown fire. _Canadian Journal of Forest Research_, 7(1):23-24.
* Vasconcelos and Guertin (1992) Vasconcelos, M. and Guertin, D. (1992). FIREMAP - simulation of fire growth with a geographic information system. _International Journal of Wildland Fire_, 2(2):87-96.
* Vasconcelos et al. (1990) Vasconcelos, M., Guertin, P., and Zwolinski, M. (1990). FIREMAP: Simulations of fire behaviour, a GIS-supported system. In _Krammes, J. S. (ed.), Effects of Fire Management of Southwestern Natural Resources, Proceedings of the Symposium, Nov. 15-17, 1988, Tucson, AZ. USDA Forest Service General Technical Report RM-GTR-191_, pages 217-221.
* Viegas (2002) Viegas, D. (2002). Fire line rotation as a mechanism for fire spread on a uniform slope. _International Journal of Wildland Fire_, 11(1):11-23.
* von Niessen and Blumen (1988) von Niessen, W. and Blumen, A. (1988). Dynamic simulation of forest fires. _Canadian Journal of Forest Research_, 18:805-812.
* Wallace (1993) Wallace, G. (1993). A numerical fire simulation-model. _International Journal of Wildland Fire_, 3(2):111-116.
* Watt et al. (1995) Watt, S., Roberts, A., and Weber, R. (1995). Dimensional reduction of a bushfire model. _Mathematical and Computer Modelling_, 21(9):79-83.
* Weber (1991) Weber, R. (1991). Modelling fire spread through fuel beds. _Progress in Energy Combustion Science_, 17(1):67-82.
* Williams (1982) Williams, F. (1982). Urban and wildland fire phenomenology. _Progress in Energy Combustion Science_, 8:317-354.
* Wolfram (1983) Wolfram, S. (1983). Statistical mechanics of cellular automata. _Reviews of Modern Physics_, 55:601-644.
* Wolfram (1986) Wolfram, S. (1986). _Theory and Application of Cellular Automata_. Advanced Series on Complex Systems-Volume 1. World Scientific, Singapore.
Figure 2: A schematic of the application of Huygens wavelet principle to fire perimeter propagation. In the simple case of homogeneous fuel, a uniform template ellipse, whose geometry is defined by the chosen fire spread model, length:breadth ratio model and the given period of propagation \\(\\Delta t\\), is applied to each node representing the current perimeter. The new perimeter at \\(t+\\Delta t\\) is defined by the external points of all new ellipses.
Figure 1: Ellipse geometry determined from the fire spread and length:breadth models; a+c is determined from the predicted rate of spread, and b is determined from a and the length:breadth model.

Abstract: In recent years, advances in computational power and spatial data analysis (GIS, remote sensing, etc.) have led to an increase in attempts to model the spread and behaviour of wildland fires across the landscape. This series of review papers endeavours to critically and comprehensively review all types of surface fire spread models developed since 1990. This paper reviews models of a simulation or mathematical analogue nature. Most simulation models are implementations of existing empirical or quasi-empirical models, and their primary function is to convert these generally one-dimensional models to two dimensions and then propagate a fire perimeter across a modelled landscape. Mathematical analogue models are those that are based on some mathematical conceit (rather than a physical representation of fire spread) that coincidentally simulates the spread of fire. Other papers in the series review models of a physical or quasi-physical nature and of an empirical or quasi-empirical nature. Many models are extensions or refinements of models developed before 1990. Where this is the case, these models are also discussed, but much less comprehensively.
Ensis1 Bushfire Research
Footnote 1: A CSIRO/Scion Joint Venture
PO Box E4008, Kingston, ACT 2604, Australia
email: [email protected] or [email protected]
phone: +61 2 6125 1693, fax: +61 2 6125 4676
version 3.0 | Write a summary of the passage below. |
arxiv-format/0707_1823v1.md | ### L-band radiometric behaviour of pine forests for a variety of surface moisture conditions
J.P. Grant (1,2), J.-P. Wigneron (2), A.A. Van de Griend (1), F. Demontoux (3), G. Ruffie (3), A. Della Vecchia (4), N. Skou (5), B. Le Crom (3)
_(1) Vrije Universiteit Amsterdam, Dept. Hydrology & Geo-environmental Sciences, De Boelelaan 1085, 1081 HV, Amsterdam, The Netherlands, (2) INRA, EPHYSE, Bordeaux, France, (3) IMS (PIOM), Universite de Bordeaux, France, (4) Universita di Roma Tor Vergata, Italy, (5) Orsted-DTU, Technical University of Denmark_
[email protected]

_From July-December 2004 the experimental campaign 'Bray 2004' was conducted in the coniferous forest of Les Landes near Bordeaux, France, using a multi-angle L-band (1.4 GHz) radiometer to measure upwelling radiation above the forest. At the same time, ground measurements were taken of soil and litter moisture content. This experiment was done in the context of the upcoming SMOS mission in order to improve our understanding of the behaviour of the L-band signal above forested areas. Very little information exists on this subject at the moment, especially for varying hydrological conditions. Furthermore, additional measurements were done at the University of Bordeaux (IMS laboratory) to determine the dielectric behaviour of a litter layer such as that found at the Bray site. There is some evidence that this layer may have a different influence on the L-band signal than either the soil or the vegetation; however, the exact behaviour of the litter layer and the extent of its influence on the L-band signal are as yet unknown. This paper presents 1) results of the Bray experiment describing the behaviour of the above-canopy L-band emissivity for different conditions of ground moisture and 2) the relationship between soil and litter moisture content and results of the laboratory experiments on litter dielectric properties. Together this will give a first insight into the L-band radiometric properties of the different forest layers for varying hydrological conditions._
## 1 Introduction
Soil moisture is a key variable controlling the exchange of heat and moisture between the land and the atmosphere through evaporation. There is currently a lack of global soil moisture observations, which are necessary to improve our knowledge of the water cycle and to contribute to better weather and climate forecasting. For this reason the European Space Agency (ESA) has developed the Soil Moisture and Ocean Salinity (SMOS) mission, to be launched in 2008, as part of its Living Planet Programme (e.g. Kerr _et al._, 2001).
SMOS will carry a dual-polarization, multi-angle (0\\({}^{\\circ}\\)-55\\({}^{\\circ}\\)) L-band radiometer and provide maps of surface soil moisture over land surfaces and salinity over the oceans. Temporal resolution will be 2-3 days, and spatial resolution will be around 40 km at nadir. At this spatial resolution, many land surface pixels will be inhomogeneous. Few pixels will contain 100% bare soil, therefore the influence of vegetation on the signal should be accounted for. A vegetation layer will attenuate the soil emission and add its own emission to the signal, an effect which increases under wet conditions. Most studies on this subject have focussed on crops. However, a large amount of SMOS pixels will also contain partial forest cover and at present there is little existing knowledge of the influence of this vegetation type on the L-band signal. Most studies on L-band forest radiometry are based on modelling or very short-term field observations (e.g. Lang _et al._ (2001); Ferrazzoli _et al._ (2002); Saleh _et al._ (2004); Della Vecchia _et al._ (2006)). This was the reason to conduct the long-term field experiment 'Bray 2004' over a pine forest and study the effect of the forest on the L-band signal under varying surface moisture conditions (Grant _et al._, 2006).
From the literature comes increasing evidence that a litter layer will also contribute substantially to the above-canopy emission (Schmugge _et al._, 1988; Jackson & Schmugge, 1991; Saleh _et al._, 2006). Therefore, Bray soil and litter dielectric properties were measured at the IMS/PIOM Laboratory, in order to model the resulting emissivity of a soil-litter system (Le Crom _et al._, 2006).
The objective of this study is _first_, to describe the L-band signal above forests for varying surface moisture conditions and _second_, to present the results of litter permittivity measurements. This will give a first insight into the properties of the different forest layers at L-band for varying hydrological conditions. The resulting information will eventually be incorporated into the forward model of the SMOS Level 2 algorithm: L-band Microwave Emission of the Biosphere (L-MEB) (Wigneron _et al._, 2006).
## 2 Methods and Materials
### Site description
The Bray site lies within the forest of Les Landes, southwest of Bordeaux, France (latitude 44\({}^{\circ}\)42\({}^{\prime}\) N, longitude 0\({}^{\circ}\)46\({}^{\prime}\) W, altitude 61 m). Les Landes forest is a production forest consisting mainly of Maritime Pines (_Pinus pinaster_ Ait.). The trees at the Bray site were 34 years old at the time of measurement, giving the stand an approximate height of 22 m. The trees are distributed in parallel rows along a northeast-southwest axis with an inter-row spacing of 4 m. Maximum (summer) values for canopy LAI and cover fraction were around 2.15 and 0.35 respectively (from measurements by INRA-Bordeaux). The understory consists mostly of grass (mainly _Molinia caerulea_ L. Moench) and had maximum LAI and cover fraction values of around 2.48 and 0.65 respectively (from measurements by INRA-Bordeaux).
The soils are sandy and hydromorphic podzols, with dark organic matter in the first 60 cm. The percentage of sand in the soil surface layer generally exceeds 80%. On top of the soil lies a distinct litter layer, the upper part of which consists mainly of dead grass and the lower part of grass roots, pine needles and other organic matter. In places the layer thickness exceeded 10 cm, and the large biomass was also indicated by measurements of water content resulting in values of over 10 kgm\\({}^{-2}\\).
### Measurements
#### 2.2.1 Remote Sensing
Microwave measurements were done with the dual-polarization L-band (1.41 GHz) radiometer EMIRAD of the Technical University of Denmark; technical details can be found in (Søbjærg, 2002).
At the Bray site, the radiometer was mounted on a 40 m mast over the forest, giving it a footprint of approximately 600 m\\({}^{2}\\) at an incidence angle of 45\\({}^{\\circ}\\). Measurements were done automatically at incidence angles of 25\\({}^{\\circ}\\), 30\\({}^{\\circ}\\), 35\\({}^{\\circ}\\), 40\\({}^{\\circ}\\), 45\\({}^{\\circ}\\), 50\\({}^{\\circ}\\), 55\\({}^{\\circ}\\) and 60\\({}^{\\circ}\\) from nadir and averaged to half-hourly values for the final data analysis. A sky calibration was done at intervals throughout the six-month period. Only horizontally polarized measurements were available for this experiment. Full experimental details can be found in (Grant _et al._, 2006).
A thermal infrared (IR) radiometer (Heitronics KT 15.85D; 9.6 - 11.5 \\(\\upmu\\)m) was fixed next to the microwave instrument to give measurements of surface temperature over approximately the same footprint.
#### 2.2.2 Field measurements
Soil temperature was measured at four different locations at depths of 1, 2, 4, 8, 16, 32, 64 and 100 cm below the soil surface, using thermocouples made by INRA and a CR21X Campbell Scientific data logger. Temperature measurements were taken every 10 s and averaged to half-hourly values. The same method was used to record litter temperatures at 1, 3 and 5 cm above the mineral soil surface.
For the measurements of soil and litter moisture content, 3 ThetaProbes (Delta-T Devices Ltd., type ML2x) were placed in the soil layer and 3 in the litter layer. The ThetaProbes each consisted of 4 rods of 60 mm length, which were placed in the respective layer at an angle of approximately 20\({}^{\circ}\). The probes were connected to a CR21X Campbell Scientific data logger, which averaged the measurements taken every 10 seconds to give half-hourly values. Periodic soil and litter samples were taken at random locations at the site for calibration purposes. Dry bulk density of the Bray soil was 1.25 g cm\({}^{-3}\) from previous experiments. Again, full experimental details can be found in (Grant _et al._, 2006).
N.B. Soil and litter moisture contents are given in volumetric and gravimetric percentages, respectively. Unit conversion is as follows: 1 vol.% = 0.01 m\({}^{3}\) m\({}^{-3}\).
#### 2.2.3 Laboratory measurements (litter)
After taking soil and litter samples at the Bray site, laboratory measurements of the dielectric properties of the soil and litter were done using a wave guide technique. This method enabled the use of samples wide enough to account for the layer heterogeneity. Sample thickness was 1 cm for soil and 2 cm for litter. Waveguide dimensions were 129.27 x 54.77 mm. The samples were held inside the guide using a support with as a base a 100 \\(\\upmu\\)m thick Mylar sheet, considered to be quasi-transparent for the electromagnetic waves. The electromagnetic parameters of the samples were determined using the Nicolson Ross Weir method (NRW) for rectangular waveguides. The principle of the calculation is based on the fact that introduction of the sample into the guide produces a change of characteristic impedance. Full experimental & modeling details can be found in (Le Crom _et al._, 2006).
### Calculations
Emissivity calculations were based on the Rayleigh-Jeans approximation for the microwave domain:
\\[e_{\\mathrm{surf}}\\left(\\theta,P\\right)=T_{\\mathrm{B}}\\left(\\theta,P\\right)/T_{ \\mathrm{surf}} \\tag{1}\\]
where \\(T_{\\mathrm{B}}\\) is the observed brightness temperature, \\(\\theta\\) and \\(P\\) are the incidence angle and polarization of the measurement, respectively, and \\(T_{\\mathrm{surf}}\\) and \\(e_{\\mathrm{surf}}\\) are the temperature and emissivity, respectively, of the emitting surface. Emissivity is a function of soil moisture, which relationship can be described with a dielectric mixing model (the method of Dobson _et al._ (1985) is used here and in L-MEB) and the Fresnel equations.
L-MEB includes a calculation for an effective ground-canopy temperature (Wigneron _et al._, 2006), which was used here to find \\(T_{\\mathrm{surf}}\\):
\\[T_{\\mathrm{surf}}=A_{\\mathrm{t}}\\cdot T_{\\mathrm{canopy}}+\\left(1-A_{\\mathrm{ t}}\\right)\\cdot T_{\\mathrm{soil}} \\tag{2}\\]
with (\\(0\\leq A_{\\mathrm{t}}\\leq 1\\)):
\\[A_{\\mathrm{t}}=B_{\\mathrm{t}}\\cdot\\left(1-\\Gamma(\\theta,P)\\right) \\tag{3}\\]
where \\(B_{\\mathrm{t}}\\) is a canopy type-dependent parameter, and:
\\[\\Gamma(\\theta,P)=e^{-\\tau_{0}/\\cos\\theta} \\tag{4}\\]
where \\(\\Gamma(\\theta,P)\\) is the canopy transmissivity and \\(\\tau_{0}\\) is the vegetation optical depth at nadir.
In this study, the effective soil temperature \\(T_{\\mathrm{soil}}\\) was calculated according to the method described in (Ulaby _et al._, 1986), using soil temperature and dielectric (i.e. moisture) profiles. Canopy temperature \\(T_{\\mathrm{canopy}}\\) was taken from the IR measurements, which were found to show 96% correlation with branch temperatures measured at Bray. \\(A_{\\mathrm{t}}\\) was found by optimizing for \\(B_{\\mathrm{t}}\\) and \\(\\tau_{0}\\), which resulted in \\(B_{\\mathrm{t}}=0.49\\pm 0.13\\) and \\(\\tau_{0}=0.62\\pm 0.24\\).
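To make the temperature composition concrete, the following is a minimal numerical sketch (not part of the original experiment) of eqs. (2)-(4), assuming NumPy is available; \(B_{\mathrm{t}}\) and \(\tau_{0}\) default to the fitted values quoted above, while the two input temperatures are illustrative placeholders, not measured data.

```python
# A sketch of the effective ground-canopy temperature of eqs. (2)-(4).
# B_t and tau0 default to the values fitted at Bray; the temperatures
# passed below are illustrative placeholders.
import numpy as np

def effective_surface_temperature(T_canopy, T_soil, theta_deg, B_t=0.49, tau0=0.62):
    gamma = np.exp(-tau0 / np.cos(np.radians(theta_deg)))  # eq. (4): canopy transmissivity
    A_t = B_t * (1.0 - gamma)                              # eq. (3)
    return A_t * T_canopy + (1.0 - A_t) * T_soil           # eq. (2)

print(effective_surface_temperature(T_canopy=295.0, T_soil=290.0, theta_deg=45.0))
```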
### Modelling
Modelling was done using the L-MEB model (Wigneron _et al._, 2006), which for a vegetation-covered surface is based on a simplified radiative transfer model, also known as the \\(\\tau\\)-\\(\\omega\\) model:
\\[T_{\\mathrm{B}}=T_{\\mathrm{g}}e_{\\mathrm{g}}\\Gamma+T_{\\mathrm{v}}\\left(1- \\omega\\right)\\left(1-\\Gamma\\right)\\left(1+\\Gamma\\cdot\\left(1-e_{\\mathrm{g}} \\right)\\right) \\tag{5}\\]
where the subscripts 'g' and 'v' denote ground and vegetation, respectively, and \\(\\omega\\) is the single scattering albedo of the vegetation canopy.
This model accounts for _1)_ direct vegetation emission, _2)_ soil emission attenuated by the canopy and _3)_ vegetation emission reflected by the soil and attenuated by the canopy.
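As a minimal sketch of eq. (5) (assuming NumPy; the function name and all input values below are ours, chosen only for illustration), the three contributions can be written out as separate terms:

```python
# A sketch of the tau-omega model of eq. (5); the three terms are written out
# to match contributions 1)-3) above. All input values are illustrative.
import numpy as np

def tb_tau_omega(T_g, T_v, e_g, tau0, omega, theta_deg):
    gamma = np.exp(-tau0 / np.cos(np.radians(theta_deg)))  # canopy transmissivity, eq. (4)
    veg_direct = T_v * (1 - omega) * (1 - gamma)           # 1) direct vegetation emission
    soil_atten = T_g * e_g * gamma                         # 2) soil emission attenuated by canopy
    veg_reflected = veg_direct * gamma * (1 - e_g)         # 3) vegetation emission reflected by soil
    return soil_atten + veg_direct + veg_reflected

print(tb_tau_omega(T_g=290.0, T_v=295.0, e_g=0.9, tau0=0.62, omega=0.08, theta_deg=45.0))
```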
## 3 Results and Discussion
### Temperature and moisture
Correlation between measured brightness temperatures and the effective ground-canopy temperature was found to be 85%. Figure 1 gives a visual example of this relationship for the period Julian Day (JD) 271-276. The high correlation indicates that in this case much of the horizontally polarized L-band signal is dominated by temperature influences.
Figure 2 shows brightness temperatures plotted against incidence angle for 'wet' (soil moisture SM > 25%) and 'dry' (SM < 15% and no precipitation) conditions. There is a clear difference in 'angular signal' between the two conditions, showing that multi-angular measurements, such as those of SMOS, contain information on surface moisture conditions.
The pattern of a decreasing emissivity with increasing viewing angle is a typical soil pattern, whereas a typical canopy pattern shows less angular influence. Soil emission decreases under wet conditions, whereas canopy emission increases. A wet bare soil shows a higher range and lower values of emissivity compared to a dry bare soil, and the patterns shown in figure 2 are therefore not unexpected. However, the possible influence of a litter layer on the above-canopy signal should still be considered and investigated.
Figure 1: Measured brightness temperatures \\(T_{\\mathrm{B}}\\) (top) and calculated surface temperature \\(T_{\\mathrm{surf}}\\) (bottom) for the period JD 271-276; R\\({}^{2}\\) = 0.85.
A strong relation between soil and litter moisture was found at this site (figure 3), thus making it difficult to decouple the effects of soil and litter moisture on the above-canopy signal.
When above-canopy emissivity is plotted against either soil or litter moisture, as in figure 4 (45\\({}^{\\circ}\\) measurements only), a very small dynamic range is found: \\(\\sim\\) 0.04 change in emissivity for a \\(\\sim\\) 20% range in soil moisture or a \\(\\sim\\) 60% range in litter moisture (similar ranges; see fig.3). Average emissivity values are high. This shows that a forest system such as that found at Bray has a very low sensitivity to variations in soil moisture and it is therefore doubtful whether soil moisture content can be retrieved with a meaningful precision in this kind of environment.
### Litter
Figures 5 and 6 show the results of permittivity measurements of soil and litter, respectively. Only the real part (\\(\\varepsilon\\)') of the dielectric constant is shown here, as at 1.4 GHz this is the main part of the dielectric constant affecting emissivity. The figures give a band of permittivity values, in order to include the effects of medium heterogeneity and errors of measurement. A detailed explanation and further measurements can be found in (Le Crom _et al._, 2006).
The figures show that for similar moisture ranges (from fig. 3), the range in soil permittivity is twice that of litter. Therefore, if a substantial litter layer is present at a given site but ignored, this could result in a severe underestimation of soil moisture content at higher wetness conditions (from a dielectric mixing model (e.g. Dobson _et al._, 1985) and the Fresnel equations).

Figure 2: Measured brightness temperatures \(T_{\mathrm{B}}\) for incidence angles 25\({}^{\circ}\)-60\({}^{\circ}\). Left: 'wet' measurements at times when SM > 25%; right: 'dry' measurements at times when SM < 15% and no precipitation.

Figure 3: Soil moisture (vol.%) _vs._ litter moisture (grav.%) for the Bray site; R\({}^{2}\) = 0.84.

Figure 4: Emissivity calculated from brightness temperature measurements _vs._ soil (left) and litter (right) moisture content.

Figure 5: Field of soil permittivity (\(\varepsilon\)') _vs._ moisture content.

Figure 6: Field of litter permittivity (\(\varepsilon\)') _vs._ moisture content.
From figures 5 and 6, and the relationship given in fig. 3, it was possible to model the emissivity of a soil overlain by a 3 cm litter layer, as a function of soil moisture content (Le Crom _et al._, 2006). The result is shown in figure 7. This information will be used for future evaluation and/or adaptation of the L-MEB model to account for the effect of a litter layer on the L-band signal.
Using L-MEB (eq. 5) and assumed values of \\(\\Gamma\\) = 0.39 (at a 45\\({}^{\\circ}\\) angle) and \\(\\omega\\) = 0.08 for the Bray canopy (from Della Vecchia and Ferrazzoli, 2006), we calculated the ground emissivity, which in this case is the soil+litter emissivity.
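Since eq. (5) is linear in \(e_{\mathrm{g}}\), the inversion has a closed form. The following is a sketch of that algebra (assuming NumPy; \(\Gamma\) = 0.39 and \(\omega\) = 0.08 are the values assumed above, while the brightness temperature and physical temperatures are illustrative, not measurements from the experiment):

```python
# A sketch of the inversion of eq. (5) for the ground (soil+litter) emissivity.
# gamma and omega are the values assumed in the text; T_B, T_g, T_v below are
# illustrative placeholders.
import numpy as np

def ground_emissivity(T_B, T_g, T_v, gamma=0.39, omega=0.08):
    veg = T_v * (1 - omega) * (1 - gamma)           # vegetation emission factor
    # eq. (5) rearranged: T_B = e_g * gamma * (T_g - veg) + veg * (1 + gamma)
    return (T_B - veg * (1 + gamma)) / (gamma * (T_g - veg))

print(ground_emissivity(T_B=275.0, T_g=290.0, T_v=295.0))
```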
Results are shown in figure 8, where above-canopy (fig. 4), ground surface (soil+litter system) and smooth soil (from Dobson _et al._ (1985)) emissivities are compared as a function of soil moisture content. Theoretically, the ground surface emissivity in this figure should be the same as the result given in figure 7. However, probably due to the use of different models and differences in layer thickness, the results are not exactly the same, although the magnitude of the range is similar. The figures should therefore be taken as a first indication of the emissivity ranges involved.

Figure 7: Field of emissivity as a function of moisture content for a soil overlain by a 3 cm litter layer.

Figure 8: Comparison of above-canopy, ground surface and smooth soil emissivities as a function of soil moisture content.
## 4 Summary and Conclusion
The greater part of the horizontally polarized L-band signal is dominated by temperature influences. Variations in soil and/or litter moisture are visible in the angular signal and in the above-canopy microwave emission, although the dynamic range of this last effect is very small. This, together with the fact that emissivity values are very high, is possibly due to the presence of a substantial litter layer.
Decoupling of soil and litter effects is difficult because of a strong correlation between soil and litter moisture. Therefore, laboratory measurements and modelling were done at IMS laboratory to improve our understanding of this issue. For similar moisture ranges, the range in soil permittivity was found to be twice that of litter. Ignoring the presence of a litter layer could therefore result in a severe underestimation of soil moisture content at higher wetness conditions.
Results of these studies will be used for future evaluation/adaptation of the L-MEB model used for SMOS.
## References
* Dobson _et al._ (1985) Dobson, M.C., Ulaby, F.T., Hallikainen, M.T. and El-Rayes, M.A., 1985, Microwave Dielectric Behavior of Wet Soil - Part II: Dielectric Mixing Models. _IEEE Transactions on Geoscience and Remote Sensing_, 23, 35-46.
* Della Vecchia and Ferrazzoli (2006) Della Vecchia, A. and Ferrazzoli, P., 2006, A Large-Scale Approach to Estimate L-band Emission from Forest Covered Surfaces. ESA Technical Note SO-TN-TV-GS-0001-01.a, SMPPD Study, February 2006.
* Della Vecchia _et al._ (2006) Della Vecchia, A., Saleh, K., Ferrazzoli, P., Guerriero, L. and Wigneron, J.-P., 2006, Simulating L-band Emission of Coniferous Forests Using a Discrete Model and a Detailed Geometrical Representation. _IEEE Geoscience and Remote Sensing Letters_, 3(3), 364-368.
* Ferrazzoli _et al._ (2002) Ferrazzoli, P., Guerriero, L. and Wigneron, J.-P., 2002, Simulating L-band Emission of Forests in View of Future Satellite Applications. _IEEE Transactions on Geoscience and Remote Sensing_, 40(12), 2700-2708.
* Grant _et al._ (2006) Grant, J.P., Wigneron, J.-P., Van de Griend, A.A., Søbjærg, S.S. and Skou, N., 2006, First results of the 'Bray 2004' field experiment on L-band forest radiometry - microwave signal behaviour for varying conditions of surface moisture. _Remote Sensing of Environment_, submitted.
* Jackson and Schmugge (1991) Jackson, T.J. and Schmugge, T.J., 1991, Vegetation Effects on the Microwave Emission of Soils. _Remote Sensing of Environment_, 36, 203-212.
* Kerr _et al._ (2001) Kerr, Y.H., Waldteufel, P., Wigneron, J.-P., Font, J. and Berger, M., 2001, Soil Moisture Retrieval from Space: The Soil Moisture and Ocean Salinity (SMOS) Mission. _IEEE Transactions on Geoscience and Remote Sensing_, 39(8), 1729-1735.
* Lang _et al._ (2001) Lang, R.H., Utku, C., De Matthaeus, P., Chauhan, N. and Le Vine, D.M., 2001, ESTAR and Model Brightness Temperatures over Forests: Effects of soil moisture. Proceedings of IGARSS-01, Sydney.
* Le Crom _et al._ (2006) Le Crom, B., Demontoux, F., Ruffie, G., Wigneron, J.-P. and Grant, J.P., 2006, Electromagnetic Characterization of Soil-Litter Media, Application to the simulation of the microwave emissivity of the ground surface in forests. _IEEE Transactions on Geoscience and Remote Sensing_, in preparation.
* Saleh _et al._ (2004) Saleh, K., Wigneron, J.-P., Calvet, J.-C., Lopez-Baeza, E., Ferrazzoli, P., Berger, M., Wursteisen, P., Simmonds, L. and Miller, J., 2004, The EuroSTARRS Airborne Campaign in Support of the SMOS Mission: First results over land surfaces. _International Journal of Remote Sensing_, 25(1), 177-194.
* Saleh _et al._ (2006) Saleh, K., Wigneron, J.-P., De Rosnay, P., Calvet, J.-C., Escorihuela, M.J., Kerr, Y. and Waldteufel, P., 2006, Impact of Rain Interception by Vegetation and Mulch on the L-band Emission of Natural Grass (SMOSREX Experiment). _Remote Sensing of Environment_, 101(1), 127-139.
* Schmugge _et al._ (1988) Schmugge, T.J., Wang, J.R. and Asrar, G., 1988, Results from the Push Broom Microwave Radiometer Flights over the Konza Prairie in 1985. _IEEE Transactions on Geoscience and Remote Sensing_, 26(5), 590-596.
* Søbjærg (2002) Søbjærg, S.S., 2002, Polarimetric Radiometers and their Applications. PhD Thesis, Technical University of Denmark, 144 pages.
* Ulaby _et al._ (1986) Ulaby, F., Moore, R.K. and Fung, A., 1986, Microwave Remote Sensing: Active and Passive, Vol. III: From theory to applications. Artech House, Dedham, MA.
* Wigneron _et al._ (2006) Wigneron, J.-P., Kerr, Y., Waldteufel, P., Saleh, K., Richaume, P., Ferrazzoli, P., Escorihuela, M.-J., Grant, J.P., Hornbuckle, B., De Rosnay, P., Calvet, J.-C., Pellarin, T., Gurney, R. and Matzler, C., 2006, L-band Microwave Emission of the Biosphere (L-MEB) Model, Results from calibration against experimental data sets over crop fields. _Remote Sensing of Environment_, in press.
# Bandlimited Field Reconstruction for Wireless Sensor Networks
Alessandro Nordio Carla-Fabiana Chiasserini Emanuele Viterbo
Politecnico di Torino - Dipartimento di Elettronica
C. Duca degli Abruzzi 24, I-10129 Torino (Italy)
e-mail: <name>@polito.it
## I Introduction
One of the most popular applications of wireless sensor networks is environmental monitoring. In general, a physical phenomenon (hereinafter also called sensor field or physical field) may vary over both space and time, with some band limitation in both domains. In this work, we address the problem of sampling and reconstruction of a spatial field at a fixed time instant. We focus on a bandlimited field (e.g., pressure and temperature), and assume that sensors are randomly deployed over a geographical area to sample the phenomenon of interest.
Data are transferred from the sensors to a common data-collecting unit, the so-called sink node. In this work, however, we are concerned only with the reconstruction of the sensor field, and we do not address issues related to information transport. Thus, we assume that all data are correctly received at the sink node. Furthermore, we assume that the sensors have a sufficiently high precision so that the quantization error is negligible, and that the sensors' positions are known at the sink node. The latter assumption implies that nodes are either located at pre-defined positions or, if randomly deployed, their location can be acquired (see [8, 9, 10] for a description of node location methods in sensor networks).
Our objective is to investigate the relation between the network topology and the probability of successful reconstruction of the field of interest. The success of the reconstruction algorithm strongly depends on the given machine precision, since it may fail to invert some ill-conditioned Toeplitz matrix (see Section III).
More specifically, we pose the following question: _under which conditions on the network topology (i.e., on the sample distribution) the sink node successfully reconstructs the signal with a given probability?_ The solution to the problem seems to be hard to find, even under the simplifying assumptions we described above.
The main contributions of our work are summarized below.
1. We first consider deterministic sensor locations. By reviewing irregular sampling theory [1], we show some sufficient conditions on the number of sensors to be deployed and on how they should be spatially spaced so as to successfully reconstruct the measured field.
2. We then consider a random network topology and analyze the problem using random matrix theory. We identify the conditions under which the field reconstruction is successful with a fixed probability, and we show that even a very irregular spatial distribution of sensors may lead to a successful signal reconstruction, provided that the number of collected samples is large enough with respect to the field bandwidth.
3. Finally we provide the theoretical basis to estimate the required number of active sensors, given the field bandwidth.
## II Related work
Few papers have addressed the problem of sampling and reconstruction in sensor networks. Efficient techniques for spatial sampling in sensor networks are proposed in [2, 3]. In particular [2] presents an algorithm to determine which sensor subsets should be selected to acquire data from an area of interest and which nodes should remain inactive to save energy. The algorithm chooses sensors in such a way that the node positions can be mapped into a blue noise binary pattern. In [3], an adaptive sampling is described, which allows the central data-collector to vary the number of active sensors, i.e., samples, according to the desired resolution level. Data acquisition is also studied in [4], where the authors consider a unidimensional field, uniformly sampled at the Nyquist frequency by low precision sensors. The authors show that the number of sensors (i.e., samples) can be traded-off with the precision of sensors. The problem of the reconstruction of a bandlimited signal from an irregular set of samples at unknown locations is addressed in [5]. There, different solution methods are proposed, and the conditions for which there exist multiple solutions or a unique solution are discussed.
Note that our work significantly differs from the studies above because we assume that the sensors' locations are known (or can be determined [8, 9, 10]) and the sensor precision is sufficiently high so that the quantization error is negligible. The question we pose is instead under which conditions (on the network system) the reconstruction of a bandlimited signal is successful with a given probability.
## III Irregular sampling of band-limited signals
Let us consider the one-dimensional model where \(r\) sensors, located in the normalized interval \([0,1)\), measure the value of a band-limited signal \(p(t)\). As a first step, we assume that the positions of the sensors sampling the field are deterministic and known, and that the sensors can represent each sample with a sufficient number of bits so that the quantization error is negligible. Let \(t_{q}\in[0,1)\) for \(q=1,\ldots,r\) be the deterministic locations of the sampling points, ordered increasingly, and \(p(t_{q})\) the corresponding samples.
A strictly band-limited signal over the interval \\([0,1)\\) can be written as the weighted sum of \\(M^{\\prime}\\) harmonics in terms of Fourier series
\\[p(t)=\\sum_{k=-M^{\\prime}}^{M^{\\prime}}a_{k}\\mathrm{e}^{2\\pi\\mathrm{i}kt} \\tag{1}\\]
Note that for real valued signals the Fourier coefficients satisfy the relation \\(a_{k}^{*}=a_{-k}\\) and that the series (1) can be represented as a sum of cosines.
The reconstruction problem can be formulated as follows: _given \(r\) pairs \([t_{q},p(t_{q})]\) for \(q=1,\ldots,r\) and \(t_{q}\in[0,1)\), find the band-limited signal in (1) uniquely specified by the sequence of its Fourier coefficients \(a_{k}\)._
Let the reconstructed signal be
\\[\\hat{p}(t)=\\sum_{k=-M}^{M}\\hat{a}_{k}\\mathrm{e}^{2\\pi\\mathrm{i}kt} \\tag{2}\\]
where the \\(\\hat{a}_{k}\\) are the corresponding Fourier coefficients up to the \\(M\\)-th harmonic. In general, the reconstruction procedure will minimize \\(\\|p(t)-\\hat{p}(t)\\|^{2}\\) if \\(M<M^{\\prime}\\) and give \\(p(t)=\\hat{p}(t)\\) if \\(M=M^{\\prime}\\).
Consider the \\((2M+1)\\times r\\) matrix \\(\\mathbf{F}\\) whose \\((k,q)\\)-th element is defined by
\[(\mathbf{F})_{k,q}=\frac{1}{\sqrt{r}}\mathrm{e}^{2\pi\mathrm{i}kt_{q}}\qquad k=-M,\ldots,M,\quad q=1,\ldots,r\]
the vector \\(\\hat{\\mathbf{a}}=[\\hat{a}_{-M},\\ldots,\\hat{a}_{0},\\ldots,\\hat{a}_{M}]^{\\mathrm{ T}}\\) of size \\(2M+1\\) and the vector
\\(\\mathbf{p}=[p(t_{1}),\\ldots,p(t_{r})]^{\\mathrm{T}}\\). We have the following linear system [1]:
\\[\\mathbf{FF}^{\\dagger}\\hat{\\mathbf{a}}=\\mathbf{F}\\mathbf{p} \\tag{3}\\]
where \\((\\cdot)^{\\dagger}\\) is the conjugate transpose operator. Let us denote \\(\\mathbf{T}=\\mathbf{FF}^{\\dagger}\\) and \\(\\mathbf{b}=\\mathbf{F}\\mathbf{p}\\), hence (3) becomes \\(\\mathbf{T}\\hat{\\mathbf{a}}=\\mathbf{b}\\) and then \\(\\hat{\\mathbf{a}}=\\mathbf{T}^{-1}\\mathbf{b}\\).
When the samples are equally spaced in the interval \\([0,1)\\), i.e., \\(t_{q}=(q-1)/r\\), we observe that the matrix \\(\\mathbf{F}\\) is a unitary matrix (\\(\\mathbf{FF}^{\\dagger}=\\mathbf{T}=\\mathbf{I}_{2M+1}\\)) 1 and its rows are orthonormal vectors of an inverse DFT matrix. In this case (3) gives the first \\(M\\) Fourier coefficients of sample sequence \\(\\mathbf{p}\\).
Footnote 1: The symbol \\(\\mathbf{I}_{n}\\) represents the \\(n\\) by \\(n\\) identity matrix
When the samples \\(t_{q}\\) are not equally spaced, the matrix \\(\\mathbf{F}\\) is no longer unitary and the matrix \\(\\mathbf{T}\\) becomes a \\((2M+1)\\times(2M+1)\\) Hermitian Toeplitz matrix
\\[\\mathbf{T}=\\mathbf{T}^{\\dagger}=\\left(\\begin{array}{cccc}r_{0}&r_{1}&\\cdots &r_{2M}\\\\ r_{-1}&r_{0}&\\cdots&r_{2M-1}\\\\ &&\\ddots&\\\\ r_{-2M}&&\\cdots&r_{0}\\end{array}\\right)\\]where
\\[(\\mathbf{T})_{k,m}=r_{k-m}=\\frac{1}{r}\\sum_{q=1}^{r}\\mathrm{e}^{2\\pi \\mathrm{i}(k-m)t_{q}}\\hskip 28.452756ptk,m=-M\\ldots,M \\tag{4}\\]
The above Toeplitz matrix \\(\\mathbf{T}\\) is uniquely defined by the \\(4M+1\\) variables
\\[r_{\\ell}=\\frac{1}{r}\\sum_{q=1}^{r}\\mathrm{e}^{2\\pi\\mathrm{i}\\ell t_{q}}\\hskip 28.452756pt \\ell=-2M,\\ldots 2M \\tag{5}\\]
The solution of (3), which involves the inversion of \\(\\mathbf{T}\\), requires some care if the condition number of \\(\\mathbf{T}\\) (or equivalently of \\(\\mathbf{F}\\)) becomes large. We recall that the condition number of \\(\\mathbf{T}\\) is defined as
\\[\\kappa=\\frac{\\lambda_{\\max}}{\\lambda_{\\min}} \\tag{6}\\]
where \\(\\lambda_{\\max}\\) and \\(\\lambda_{\\min}\\) are the largest and the smallest eigenvalues of \\(\\mathbf{T}\\), respectively. The base-10 logarithm of \\(\\kappa\\) is an estimate of how many base-10 digits are lost in solving a linear system with that matrix.
In practice, matrix inversion is usually performed by algorithms which are very sensitive to small eigenvalues, especially when smaller than the machine precision. For this reason in [1] a preconditioning technique is used to guarantee a bounded condition number when the maximum separation between consecutive sampling points is not too large. More precisely, by defining \\(w_{q}=(t_{q+1}-t_{q-1})/2\\) for \\(q=1\\ldots,r\\), where \\(t_{0}=t_{r}-1\\) and \\(t_{r+1}=1+t_{1}\\), and by letting \\(\\mathbf{W}=\\text{diag}(w_{1},\\ldots,w_{r})\\), the preconditioned system becomes
\\[\\mathbf{T}_{w}\\mathbf{\\hat{a}}=\\mathbf{b}_{w}\\]
where \\(\\mathbf{T}_{w}=\\mathbf{FWF}^{\\dagger}\\) and \\(\\mathbf{b}_{w}=\\mathbf{FWp}\\). Let us define the maximum gap between consecutive sampling points as
\\[\\delta=\\max(t_{q}-t_{q-1}).\\]
In [1] it is shown that, when \(\delta<1/(2M)\), we have:
\\[\\kappa(\\mathbf{T}_{w})\\leq\\left(\\frac{1+2\\delta M}{1-2\\delta M}\\right)^{2} \\tag{7}\\]
This result generalizes the Nyquist sampling theorem to the case of irregular sampling, but it only gives a _sufficient_ condition for perfect reconstruction when the condition number is compatible with the machine precision. Unfortunately, when \(\delta>1/(2M)\), the result (7) does not hold.
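A sketch of the preconditioned system and of the bound (7) is given below (assuming NumPy; all parameter values are ours, and the bound is only evaluated when the maximum gap actually satisfies \(\delta<1/(2M)\)):

```python
# A sketch of the preconditioned matrix T_w = F W F^dagger and the bound (7).
# The weights and gaps use the circular conventions t_0 = t_r - 1 and
# t_{r+1} = 1 + t_1 stated in the text.
import numpy as np

rng = np.random.default_rng(1)
M, r = 5, 60
t = np.sort(rng.uniform(0.0, 1.0, size=r))

gaps = np.diff(np.concatenate((t, [t[0] + 1.0])))    # circular gaps
delta = gaps.max()

t_ext = np.concatenate(([t[-1] - 1.0], t, [t[0] + 1.0]))
w = (t_ext[2:] - t_ext[:-2]) / 2.0                   # w_q = (t_{q+1} - t_{q-1}) / 2

k = np.arange(-M, M + 1)
F = np.exp(2j * np.pi * np.outer(k, t)) / np.sqrt(r)
T_w = F @ np.diag(w) @ F.conj().T

print("delta =", delta, " threshold 1/(2M) =", 1 / (2 * M))
if delta < 1.0 / (2 * M):
    bound = ((1 + 2 * delta * M) / (1 - 2 * delta * M)) ** 2
    print("cond(T_w) =", np.linalg.cond(T_w), " bound (7) =", bound)
```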
In Figures 1 and 2 we present two examples of reconstructed signals from irregular sampling, using (3). Figure 1 refers to the case \(M=10\) and \(r=26\), where the samples have been randomly selected over the interval \([0,0.8)\). The signal is perfectly reconstructed even if large gaps are present (\(\delta>0.2\), i.e., \(\delta>1/(2M)\)). In Figure 2, \(r=21\) samples of the same signal of Figure 1 have been taken randomly over the entire window \([0,1)\). Due to the bad conditioning of the matrix \(\mathbf{T}\) (i.e., very small eigenvalues), the algorithm fails in reconstructing the signal due to machine precision underflow.
Driven by these observations, the objective of our work is to provide conditions for the successful reconstruction of the sampled field, by using a probabilistic approach. In the following we give a probabilistic description of the condition number, without explicitly considering preconditioning.
## IV The random matrix approach: unsuccessful signal reconstruction
The above results are based on deterministic locations of the sampling points. In this section we discuss instead the case where the sampling points \\(t_{q}\\) are i.i.d. random variables with uniform distribution \\(\\mathcal{U}[0,1)\\). In other words we consider the case where the matrix \\(\\mathbf{T}\\) is random and completely defined by the random vector \\(\\mathbf{t}=[t_{1},\\ldots,t_{r}]\\). We introduce here the parameter \\(\\beta\\) as the ratio of the two-sided signal bandwidth \\(2M+1\\) and the number of sensors \\(r\\)
\\[\\beta=\\frac{2M+1}{r}. \\tag{8}\\]
In the following we consider the asymptotic case where the values of \\(M\\) and \\(r\\) grow to infinity while \\(\\beta\\) is kept constant. We then show that properties of systems with finite \\(M\\) and \\(r\\) are well approximated by the asymptotic results.
We focus here on the expression of the probability of unsuccessful signal reconstruction, i.e., the probability that the reconstruction algorithm fails given the machine precision \(\epsilon\), the signal bandwidth \(M\), and the number of sensors \(r\). For a given realization of \(\mathbf{T}\) and for finite values of \(M\) and \(r\), we denote by \(\boldsymbol{\lambda}=[\lambda_{1},\ldots,\lambda_{2M+1}]\) the vector of eigenvalues, and by \(\lambda_{\min}=\min(\boldsymbol{\lambda})\) and \(\lambda_{\max}=\max(\boldsymbol{\lambda})\) the minimum and maximum eigenvalues, respectively. Also let \(f_{M,\beta}(x)\) be the empirical probability density function (pdf) of the eigenvalues of \(\mathbf{T}\) for finite \(M\) and \(\beta\), and let \(f_{\beta}(x)\) be the limiting eigenvalue pdf in the asymptotic case (i.e., when \(M\) and \(r\) grow to infinity with constant \(\beta\)) [6]. The random variable \(\lambda_{\min}\) and the condition number \(\kappa\) have pdfs \(f_{M,\beta}^{\min}(x)\) and \(f_{M,\beta}^{\kappa}(x)\), respectively. The corresponding cumulative density functions (cdf) are denoted by \(F_{M,\beta}(x)\), \(F_{\beta}(x)\), \(F_{M,\beta}^{\min}(x)\), and \(F_{M,\beta}^{\kappa}(x)\).
### _Some properties of the eigenvalue distribution_
We first analyze by Montecarlo simulation some properties of the distribution \\(f_{M,\\beta}(x)\\). Figure 3 shows histograms of \\(f_{M,\\beta}(x)\\) for \\(M=1,4,10,90\\), \\(\\beta=0.25\\), and bin width of \\(0.1\\). Notice that, as \\(M\\) increases with constant \\(\\beta\\), the histograms of \\(f_{M,\\beta}(x)\\) seem to converge to \\(f_{\\beta}(x)\\), only depending on \\(\\beta\\). Indeed, looking at the figure, one can notice that the difference between the curves for \\(M=10\\) and \\(M=90\\) is negligible. Although we report in Figure 3 only the case for \\(\\beta=0.25\\), we observed the same behavior for any value of \\(\\beta\\). We therefore conclude that \\(M=10\\) is large enough to provide a good approximation of \\(f_{\\beta}(x)\\).
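A Monte Carlo experiment of this kind is easy to reproduce; the following is a sketch (assuming NumPy; the function name, trial count and bin width are ours) of how the empirical eigenvalue histograms can be generated:

```python
# A Monte Carlo sketch of the empirical eigenvalue pdf f_{M,beta}(x): sample
# eigenvalues of T = F F^dagger for i.i.d. uniform sampling points.
import numpy as np

def eigenvalue_samples(M, beta, n_trials, seed=0):
    rng = np.random.default_rng(seed)
    r = int(round((2 * M + 1) / beta))
    k = np.arange(-M, M + 1)
    eigs = []
    for _ in range(n_trials):
        t = rng.uniform(size=r)                      # i.i.d. U[0,1) sampling points
        E = np.exp(2j * np.pi * np.outer(k, t))
        eigs.append(np.linalg.eigvalsh(E @ E.conj().T / r))
    return np.concatenate(eigs)

lam = eigenvalue_samples(M=10, beta=0.25, n_trials=500)
hist, edges = np.histogram(lam, bins=np.arange(0.0, lam.max() + 0.1, 0.1), density=True)
```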
In Figure 4 we show histograms of \(f_{M,\beta}(x)\) for \(\beta=0.15,0.25,0.35,0.45,0.55\) and values of \(M\) around \(100\). For \(\beta\) larger than \(0.35\) the distribution shows oscillations and tends to infinity as \(x\) approaches \(0\). On the other hand, for \(\beta\) lower than \(0.35\) the pdf does not oscillate and tends to \(0\) as \(x\) approaches \(0\). In order to better understand this behavior for small \(x\), which can be heavily affected by the bin width, in Figure 5 we consider the cdf \(F_{M,\beta}(x)\) on a log-log scale, for various values of \(\beta\) ranging from \(0.1\) to \(0.8\) and \(M=200\). The dashed curves represent the simulated cdf. Surprisingly, they show a linear behavior for small values of \(x\) and for any value of \(\beta\). This is evidenced by the solid lines, which are the tangents to the dashed curves at \(F_{M,\beta}(x)=10^{-2}\). The slope of the lines is parameterized by \(\beta\). In our simulations the machine precision is approximately \(\epsilon=10^{-16}\); hence, values of \(x<\epsilon\) cannot be represented, since they are treated as zero by the algorithm. Indeed, the simulated cdfs lose their linear behavior as they approach \(x=\epsilon\) (see the case \(\beta=0.8\) in Figure 5). We conclude that for \(x\ll 1\) the cdf \(F_{\beta}(x)\) can be approximated by
\\[F_{\\beta}(x)\\approx bx^{a} \\tag{9}\\]
where \\(a=a(\\beta)\\) and \\(b=b(\\beta)\\) are both functions of \\(\\beta\\). By deriving (9) with respect to \\(x\\) we obtain the approximate expression for the pdf:
\\[f_{\\beta}(x)\\approx a(\\beta)b(\\beta)x^{a(\\beta)-1} \\tag{10}\\]
From (10) it can be seen that the function \(a(\beta)\) represents the slope of \(F_{\beta}(x)\) on the log-log scale for \(x\ll 1\). Note that, in order for \(x^{a(\beta)-1}\) to be integrable in \([0,c)\) for any positive constant \(c\), the condition \(a(\beta)>0\) should be satisfied. Note also from Figure 5 that the slope \(a(\beta)=1\) is obtained for \(\beta\approx 0.35\). For this value of \(\beta\) the approximate pdf is constant for \(x\ll 1\), which is consistent with the results in Figure 4.
Some additional considerations can be drawn from Figure 6, which presents the pdf of \\(f_{M,\\beta}(x)\\) for \\(\\beta=0.25,0.50,0.75\\) and \\(M=200\\). It is interesting to note that for any value of \\(\\beta\\), large eigenvalues are less likely to appear than very small eigenvalues. This is evident by observing that for \\(x\\gg 1\\) the pdf falls to \\(-\\infty\\) much faster than for \\(x\\ll 1\\). This consideration is of great relevance when discussing the condition number distribution.
### _Distribution of the minimum eigenvalue_
For finite \\(M\\) the cdf of \\(\\lambda_{\\min}\\) can be computed as follows
\\[F_{M,\\beta}^{\\min}(x) = \\mathbb{P}(\\lambda_{\\min}<x|M)\\] \\[= \\mathbb{P}(\\min(\\boldsymbol{\\lambda})<x|M)\\]
In general the random variables \\(\\lambda_{1},\\ldots,\\lambda_{2M+1}\\) are not independent. However, considering sufficiently large values of \\(M\\) (namely, \\(M\\geq 10\\)), we can write the following upper bound for \\(F_{M,\\beta}^{\\min}(x)\\):
\\[F_{M,\\beta}^{\\min}(x)\\leq(2M+1)F_{\\beta}(x). \\tag{11}\\]
This is obtained by assuming that the eigenvalues are independent with pdf equal to the limiting eigenvalue distribution. The simulation results presented in Figure 7 confirm the expression in (11). The figure shows the cdfs of \\(\\lambda\\) and \\(\\lambda_{\\min}\\) in the log-log scale for \\(\\beta=0.25,0.50,0.75\\) and \\(M=40\\). The cdf of \\(\\lambda_{\\min}\\) also shows a linear behavior for \\(x\\ll 1\\). In the log-log scale, according to (11), the two cdfs should be separated by \\(\\log_{10}(2M+1)\\). In our case: \\(M=40\\) and \\(\\log_{10}(2M+1)\\approx 1.91\\). As is evident from the figure, this upper bound is extremely tight, especially for low values of \\(\\beta\\).
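The bound (11) can be checked numerically as follows (a sketch assuming NumPy; the threshold \(x\) and all parameter values are ours):

```python
# A sketch checking the bound (11): the empirical cdf of lambda_min against
# (2M+1) times the empirical eigenvalue cdf, at a small threshold x.
import numpy as np

rng = np.random.default_rng(2)
M, beta, n_trials = 40, 0.25, 500
r = int(round((2 * M + 1) / beta))
k = np.arange(-M, M + 1)

lam_min, lam_all = [], []
for _ in range(n_trials):
    t = rng.uniform(size=r)
    E = np.exp(2j * np.pi * np.outer(k, t))
    e = np.linalg.eigvalsh(E @ E.conj().T / r)       # eigenvalues in ascending order
    lam_min.append(e[0])
    lam_all.append(e)
lam_all = np.concatenate(lam_all)

x = 1e-2
print(np.mean(np.array(lam_min) < x),                # empirical F^min_{M,beta}(x)
      (2 * M + 1) * np.mean(lam_all < x))            # upper bound (11)
```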
### _Distribution of the condition number_
Here we describe the condition number distribution. The condition number is defined by (6). As noted at the end of Section IV-A the minimum eigenvalue dominates the ratio \\(\\lambda_{\\max}/\\lambda_{\\min}\\). This fact is more evident in Figure 8, where we compare the distributions of the condition number and of the minimum eigenvalue, for \\(\\beta=0.25\\) and \\(M=10,20,40\\). The three dashed curves on the left represent the pdf of the minimum eigenvalue. The solid lines on the right represent the pdf of the condition number for the same values of \\(M\\). The two set of distributions look very similar. We define \\(y=\\log_{10}x\\), \\(\\gamma_{M,\\beta}^{\\min}(y)=\\log_{10}f_{M,\\beta}^{\\min}(10^{y})\\) and \\(\\gamma_{M,\\beta}^{\\kappa}(y)=\\log_{10}f_{M,\\beta}^{\\kappa}(10^{y})\\). By observing the results in Figure 8, the following relation holds:
\\[\\gamma_{M,\\beta}^{\\kappa}(y)\\approx\\gamma_{M,\\beta}^{\\min}(-y+d)\\]
where \\(d\\) is a parameter. In the plot, for each value of \\(M\\) the circles represent the above approximation where the parameter \\(d\\) is set to \\(1/3\\). The same considerations hold for any value of \\(\\beta\\). Converting the above approximation into the linear scale, we obtain:
\\[f_{M,\\beta}^{\\kappa}(x)\\approx f_{M,\\beta}^{\\min}\\left(\\frac{10^{d}}{x}\\right)\\]
and by taking the derivative of both sides of (11) with respect to \\(x\\), we finally obtain
\\[f_{M,\\beta}^{\\kappa}(x)\\approx(2M+1)f_{\\beta}\\left(\\frac{10^{d}}{x}\\right)\\]
which holds for \\(x\\gg 1\\).
### _Summary_
In this section we have given numerical evidence of the following facts:
* the condition number distribution is dominated by the distribution of the minimum eigenvalue of \\(\\mathbf{T}\\);
* the distribution of the minimum eigenvalue is upper bounded by a simple function of the asymptotic distribution of the eigenvalues of \\(\\mathbf{T}\\).
Thus, in the following we focus on \(f_{\beta}(x)\); indeed, knowing \(f_{\beta}(x)\) we could obtain the probability that the minimum eigenvalue is below a certain threshold, i.e., that the condition number is incompatible with the machine precision.
## V Some analytic results on the eigenvalue pdf
We now derive some analytic results on the asymptotic eigenvalue distribution, \(f_{\beta}(x)\). Ideally we would like to analytically compute \(f_{\beta}(x)\); however, such a calculation seems to be prohibitive. Therefore, as a first step we compute the closed-form expression of the moments of the asymptotic eigenvalue distribution, \(\mathbb{E}[\lambda^{p}]\). Note that, if all moments are available, an analytic expression of \(f_{\beta}(x)\) can be derived through its moment generating function, by applying the inverse Laplace transform.
In the limit for \\(M\\) and \\(r\\) growing to infinity with constant \\(\\beta\\) the expression of \\(\\mathbb{E}[\\lambda^{p}]\\) can be easily obtained from the powers of \\(\\mathbf{T}\\). Indeed \\(\\mathbf{T}\\) is an Hermitian matrix and can be decomposed as \\(\\mathbf{T}=\\mathbf{U}\\boldsymbol{\\Lambda}\\mathbf{U}^{\\dagger}\\), where \\(\\boldsymbol{\\Lambda}=\\text{diag}(\\boldsymbol{\\lambda})\\) is a diagonal matrix containing the eigenvalues of \\(\\mathbf{T}\\) and \\(\\mathbf{U}\\) is the matrix of eigenvectors. It follows that
\\[\\mathsf{Tr}\\{\\mathbf{T}^{p}\\} = \\mathsf{Tr}\\left\\{\\left(\\mathbf{U}\\boldsymbol{\\Lambda}\\mathbf{U} ^{\\dagger}\\right)^{p}\\right\\} \\tag{12}\\] \\[= \\mathsf{Tr}\\{\\mathbf{U}\\boldsymbol{\\Lambda}^{p}\\mathbf{U}^{ \\dagger}\\}\\] \\[= \\mathsf{Tr}\\{\\mathbf{U}^{\\dagger}\\mathbf{U}\\boldsymbol{\\Lambda}^ {p}\\}\\] \\[= \\mathsf{Tr}\\{\\boldsymbol{\\Lambda}^{p}\\}\\] \\[= \\sum_{i=1}^{2M+1}\\lambda_{i}^{p}\\]
Then:
\\[\\lim_{\\begin{subarray}{c}M,r\\rightarrow+\\infty\\\\ \\frac{2M+1}{r}=\\beta\\end{subarray}}\\frac{1}{2M+1}\\mathsf{Tr}\\{\\mathbb{E}\\left[ \\mathbf{T}^{p}\\right]\\} = \\lim_{\\begin{subarray}{c}M,r\\rightarrow+\\infty\\\\ \\frac{2M+1}{r}=\\beta\\end{subarray}}\\frac{1}{2M+1}\\mathbb{E}\\left[\\sum_{i=0}^{2 M}\\lambda_{i}^{p}\\right] \\tag{13}\\] \\[= \\mathbb{E}\\left[\\lim_{\\begin{subarray}{c}M,r\\rightarrow+\\infty\\\\ \\frac{2M+1}{r}=\\beta\\end{subarray}}\\frac{1}{2M+1}\\sum_{i=0}^{2M}\\lambda_{i}^{p}\\right]\\] \\[= \\mathbb{E}\\left[\\lambda^{p}\\right]\\]
Please notice that, since \(\mathbf{T}\) is a Toeplitz matrix, the Grenander-Szegő theorem [7] could be employed in the limit for \(M\rightarrow+\infty\). Unfortunately, in this case the theorem is not applicable, since all entries of \(\mathbf{T}\) depend on the matrix size \(M\).
From (13) and (5) we obtain:
\\[\\mathbb{E}[\\lambda^{p}]=\\lim_{\\begin{subarray}{c}M,r\\rightarrow+\\infty\\\\ \\frac{2M+1}{r}=\\beta\\end{subarray}}\\frac{1}{(2M+1)r^{p}}\\ \\sum_{\\mathbf{q}\\in\\mathcal{Q}}\\ \\sum_{\\mathbf{l}\\in\\mathcal{L}}\\ \\mathbb{E} \\left[\\exp\\left(2\\pi\\mathrm{i}\\sum_{i=1}^{p}t_{q_{i}}(\\ell_{i}-\\ell_{[i+1]}) \\right)\\right] \\tag{14}\\]
where
\\[\\mathcal{Q} = \\{\\mathbf{q}\\ |\\ \\mathbf{q}=[q_{1},\\ldots,q_{p}],\\ \\ q_{i}=1,\\ldots,r\\}\\] \\[\\mathcal{L} = \\{\\mathbf{l}\\ |\\ \\mathbf{l}=[\\ell_{1},\\ldots,\\ell_{p}],\\ \\ \\ell_{i}=0,\\ldots,2M\\}\\]and where the sign \\([\\cdot]\\) refers to the modulo \\(p\\) operator2. The average is performed over the random vector \\(\\mathbf{t}=[t_{1},\\ldots,t_{r}]\\).
Footnote 2: For simplicity here we follow the convention \\([p]=p\\) and \\([p+1]=1\\).
Let now \\(\\mathcal{P}\\) be the set of integers from 1 to \\(p\\)
\\[\\mathcal{P}=\\{1,\\ldots,p\\}. \\tag{15}\\]
Let \\(\\mathbf{q}\\in\\mathcal{Q}\\) and let \\(1\\leq k(\\mathbf{q})\\leq p\\) be the number of distinct values assumed by the entries of \\(\\mathbf{q}\\). Such values can be arranged, in order of appearance, in the vector \\(\\hat{\\mathbf{q}}=[\\hat{q}_{1},\\ldots,\\hat{q}_{k(\\mathbf{q})}]\\) where the entries \\(\\hat{q}_{j}\\) are all distinct. Using \\(\\mathbf{q}\\) and \\(\\hat{\\mathbf{q}}\\) we create the subsets \\(\\mathcal{P}_{1}(\\mathbf{q}),\\ldots,\\mathcal{P}_{k(\\mathbf{q})}(\\mathbf{q})\\) of \\(\\mathcal{P}\\) defined by
\\[\\mathcal{P}_{j}(\\mathbf{q})=\\{i\\in\\mathcal{P}\\ |\\ q_{i}=\\hat{q}_{j}\\}\\,. \\tag{16}\\]
Such subsets are non-empty and disjoint (\(\mathcal{P}_{j}\neq\emptyset\), \(\underset{j}{\cup}\mathcal{P}_{j}=\mathcal{P}\), and \(\mathcal{P}_{j}\cap\mathcal{P}_{h}=\emptyset\) for \(j\neq h\)). Finally we define \(\tau(\mathbf{q})\)
\\[\\tau(\\mathbf{q})=\\left\\{\\mathcal{P}_{1}(\\mathbf{q}),\\ldots,\\mathcal{P}_{k( \\mathbf{q})}(\\mathbf{q})\\right\\}\\]
as the partition of \\(\\mathcal{P}\\) induced by \\(\\mathbf{q}\\).
**Example 1:** Let \\(p=6\\) and \\(\\mathbf{q}=[4,9,5,5,4,3]\\). Then, by (15), \\(\\mathcal{P}=\\{1,2,3,4,5,6\\}\\). We have \\(k(\\mathbf{q})=4\\) distinct values which we arrange, in order of appearance, in the vector \\(\\hat{\\mathbf{q}}=[4,9,5,3]\\). Then
\\[\\mathcal{P}_{1}(\\mathbf{q})=\\{1,5\\} (q_{1}=q_{5}=\\hat{q}_{1}),\\] \\[\\mathcal{P}_{2}(\\mathbf{q})=\\{2\\} (q_{2}=\\hat{q}_{2}),\\] \\[\\mathcal{P}_{3}(\\mathbf{q})=\\{3,4\\} (q_{3}=q_{4}=\\hat{q}_{3}),\\] \\[\\mathcal{P}_{4}(\\mathbf{q})=\\{6\\} (q_{6}=\\hat{q}_{4}),\\] and \\(\\tau(\\mathbf{q})=\\{\\{1,5\\},\\{2\\},\\{3,4\\},\\{6\\}\\}\\).
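A small pure-Python sketch of this construction (the function name is ours) reproduces Example 1:

```python
# A sketch of the partition tau(q) induced by a vector q: group the 1-based
# positions of q by value, in order of first appearance.
def partition_of(q):
    """Return the subsets P_j = {i : q_i = qhat_j}, with 1-based indices."""
    order = list(dict.fromkeys(q))                   # distinct values, in order of appearance
    return [[i + 1 for i, v in enumerate(q) if v == qv] for qv in order]

print(partition_of([4, 9, 5, 5, 4, 3]))              # [[1, 5], [2], [3, 4], [6]]
```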
For any given \\(\\mathbf{q}\\in\\mathcal{Q}\\), using the definition of \\(\\mathcal{P}_{j}(\\mathbf{q})\\), we notice that the argument of the average operator in (14) factorizes in \\(k(\\mathbf{q})\\) parts, i.e.
\\[\\exp\\left(2\\pi\\mathrm{i}\\sum_{i=1}^{p}t_{q_{i}}(\\ell_{i}-\\ell_{[i+1]})\\right)= \\prod_{j=1}^{k(\\mathbf{q})}\\exp\\left(2\\pi\\mathrm{i}t_{\\hat{q}_{j}}\\sum_{i\\in \\mathcal{P}_{j}(\\mathbf{q})}\\ell_{i}-\\ell_{[i+1]}\\right)\\]each depending on a single random variable \\(t_{\\hat{q}_{j}}\\). Then from (14) we have:
\\[\\mathbb{E}[\\lambda^{p}] = \\lim_{\\begin{subarray}{c}M,r\\rightarrow+\\infty\\\\ \\frac{2M+1}{r}=\\beta\\end{subarray}}\\frac{1}{(2M+1)r^{p}}\\ \\sum_{\\mathbf{q}\\in\\mathcal{Q}}\\ \\sum_{\\mathbf{l}\\in\\mathcal{L}}\\ \\ \\mathbb{E}_{\\mathbf{t}}\\left[\\prod_{j=1}^{k(\\mathbf{q})}\\exp \\left(2\\pi\\mathrm{i}t_{\\hat{q}_{j}}\\sum_{i\\in\\mathcal{P}_{j}(\\mathbf{q})}\\ell _{i}-\\ell_{[i+1]}\\right)\\right] \\tag{17}\\] \\[= \\lim_{\\begin{subarray}{c}M,r\\rightarrow+\\infty\\\\ \\frac{2M+1}{r}=\\beta\\end{subarray}}\\frac{1}{(2M+1)r^{p}}\\ \\sum_{\\mathbf{q}\\in\\mathcal{Q}}\\ \\sum_{\\mathbf{l}\\in\\mathcal{L}}\\ \\prod_{j=1}^{k(\\mathbf{q})}\\ \\mathbb{E}_{\\hat{t}_{j}}\\left[\\exp \\left(2\\pi\\mathrm{i}t_{\\hat{q}_{j}}\\sum_{i\\in\\mathcal{P}_{j}(\\mathbf{q})}\\ell _{i}-\\ell_{[i+1]}\\right)\\right]\\] \\[= \\lim_{\\begin{subarray}{c}M,r\\rightarrow+\\infty\\\\ \\frac{2M+1}{r}=\\beta\\end{subarray}}\\frac{1}{(2M+1)r^{p}}\\ \\sum_{\\mathbf{q}\\in\\mathcal{Q}}\\ \\sum_{\\mathbf{l}\\in\\mathcal{L}}\\ \\prod_{j=1}^{k(\\mathbf{q})} \\delta\\left(\\sum_{i\\in\\mathcal{P}_{j}(\\mathbf{q})}\\ell_{i}-\\ell_{[i+1]}\\right)\\]
where \\(\\delta(\\cdot)\\) is the Kronecker's delta. Expression (17) can be further simplified by observing that
* there exist \\(r(r-1)\\cdots(r-k+1)=r!/(r-k)!\\) vectors \\(\\mathbf{q}\\in\\mathcal{Q}\\) generating a certain given partition of \\(\\mathcal{P}\\) made of \\(k\\) subsets,
* for a given \(\mathbf{q}\) the expression \[\zeta_{2M}(\mathbf{q})=\sum_{\mathbf{l}\in\mathcal{L}}\ \prod_{j=1}^{k(\mathbf{q})}\ \delta\left(\sum_{i\in\mathcal{P}_{j}(\mathbf{q})}\ell_{i}-\ell_{[i+1]}\right)\] (18) is a polynomial in the variable \(2M\), since it represents the number of points with integer coordinates contained in the hypercube \([0,\ldots,2M]^{p}\) and satisfying the \(k(\mathbf{q})\) constraints \[\sum_{i\in\mathcal{P}_{j}(\mathbf{q})}\ell_{i}-\ell_{[i+1]}=0\] (19) We show in Appendix A that one of these constraints is always redundant and that the number of linearly independent constraints is exactly \(k(\mathbf{q})-1\). Consequently, the polynomial \(\zeta_{2M}(\mathbf{q})\) has degree \(p-k(\mathbf{q})+1\). A brute-force evaluation of (18) for small instances is sketched below.
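For tiny \(p\) and \(M\), the count (18) can be evaluated by exhaustive enumeration (a sketch using only the standard library; the function name is ours, and the instance is the one of Examples 1 and 2):

```python
# A brute-force sketch of zeta_{2M}(q) in (18): enumerate all (2M+1)^p integer
# points l and keep those satisfying the constraints (19). The modulo-p
# convention [p+1] = 1 is implemented with i % p on 0-based indices.
from itertools import product

def zeta(q, M):
    p = len(q)
    groups = [[i + 1 for i, v in enumerate(q) if v == u] for u in dict.fromkeys(q)]
    count = 0
    for l in product(range(2 * M + 1), repeat=p):
        if all(sum(l[i - 1] - l[i % p] for i in g) == 0 for g in groups):
            count += 1
    return count

print(zeta([4, 9, 5, 5, 4, 3], M=2))                 # (2M+1)^3 = 125, cf. Example 2
```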
Let \\(\\mathcal{T}_{p}\\) be the set of distinct partitions of \\(\\mathcal{P}\\) generated by all vectors \\(\\mathbf{q}\\in\\mathcal{Q}\\), then from (17) we obtain:
\\[\\mathbb{E}[\\lambda^{p}] = \\lim_{\\begin{subarray}{c}M,r\\rightarrow+\\infty\\\\ \\frac{2M+1}{r}=\\beta\\end{subarray}}\\frac{1}{(2M+1)r^{p}}\\ \\sum_{\\mathbf{q}\\in\\mathcal{Q}}\\ \\sum_{\\mathbf{l}\\in\\mathcal{L}}\\ \\prod_{j=1}^{k(\\mathbf{q})}\\delta\\left(\\sum_{i\\in \\mathcal{P}_{j}(\\mathbf{q})}\\ell_{i}-\\ell_{[i+1]}\\right) \\tag{20}\\] \\[\\stackrel{{(a)}}{{=}} \\lim_{\\begin{subarray}{c}M,r\\rightarrow+\\infty\\\\ \\frac{2M+1}{r}=\\beta\\end{subarray}}\\frac{1}{(2M+1)r^{p}}\\sum_{\\tau\\in\\mathcal{ T}_{p}}\\sum_{\\mathbf{q}\\Rightarrow\\tau}\\zeta_{2M}(\\mathbf{q})\\] \\[\\stackrel{{(b)}}{{=}} \\lim_{\\begin{subarray}{c}M,r\\rightarrow+\\infty\\\\ \\frac{2M+1}{r}=\\beta\\end{subarray}}\\frac{1}{(2M+1)r^{p}}\\sum_{\\tau\\in\\mathcal{ T}_{p}}\\frac{r!}{(r-k(\\tau))!}\\zeta_{2M}(\\tau)\\]where
* the notation \\(\\sum_{\\mathbf{q}\\Rightarrow\\tau}\\) represents the sum over all vectors \\(\\mathbf{q}\\) generating a certain given partition \\(\\tau\\),
* the equality \\((a)\\) has been obtained by substituting (18), and
* the equality \\((b)\\) holds because the number of vectors \\(\\mathbf{q}\\) generating a given partition \\(\\tau\\) is \\(r!/(r-k(\\tau))!\\).
We point out that the functions \\(k(\\mathbf{q})\\) and \\(\\zeta_{2M}(\\mathbf{q})\\) depend only on the partition \\(\\tau(\\mathbf{q})\\) induced by \\(\\mathbf{q}\\). Since in the third line of (20) we removed the dependence on the vectors \\(\\mathbf{q}\\), the expression of \\(\\mathbb{E}[\\lambda^{p}]\\) is now function of the partitions \\(\\tau\\) only. Then with a little abuse of notation, in the following we refer to the functions \\(k\\) and \\(\\zeta_{2M}\\) as \\(k(\\tau)\\) and \\(\\zeta_{2M}(\\tau)\\), respectively.
Taking the limit we finally obtain:
\\[\\mathbb{E}[\\lambda^{p}] = \\sum_{\\tau\\in\\mathcal{T}_{p}}v(\\tau)\\beta^{p-k(\\tau)} \\tag{21}\\] \\[= \\sum_{k=1}^{p}\\left(\\sum_{\\tau\\in\\mathcal{T}_{p,k}}v(\\tau)\\right) \\beta^{p-k}\\]
where \\(\\mathcal{T}_{p,k}\\) is the subset of \\(\\mathcal{T}_{p}\\) only containing partitions of size \\(k\\), and
\\[v(\\tau)=\\lim_{M\\rightarrow+\\infty}\\frac{\\zeta_{2M}(\\tau)}{(2M)^{p-k+1}}\\]
i.e. \\(v(\\tau)\\) is the coefficient3 of degree \\((2M)^{p-k+1}\\) of the polynomial \\(\\zeta_{2M}(\\tau)\\). Since \\(1\\leq k\\leq p\\) from (21) we note that \\(\\mathbb{E}[\\lambda^{p}]\\) is a polynomial in \\(\\beta\\) of degree \\(\\beta^{p-1}\\). Again, for the sake of clarity we give an example:
Footnote 3: Notice also that the coefficient \\(v(\\tau)\\) represents the volume of the _convex polytope_ described by the constraints (19) when the variables \\(\\ell_{i}\\) are considered real and limited to the interval \\([0,1]\\). By consequence \\(0\\leq v(\\tau)\\leq 1\\).
**Example 2:** Let \\(p=6\\) and \\(\\mathbf{q}\\) given by Example 1. The partition is \\(\\tau=\\{\\{1,5\\},\\{2\\},\\{3,4\\},\\{6\\}\\}\\). Then the set of \\(k(\\tau)=4\\) constraints (19) are given by:
\\[\\ell_{1}+\\ell_{5} = \\ell_{2}+\\ell_{6}\\] \\[\\ell_{2} = \\ell_{3}\\] \\[\\ell_{3}+\\ell_{4} = \\ell_{4}+\\ell_{5}\\] \\[\\ell_{6} = \\ell_{1}\\]
The last equation is redundant, since it can be obtained by summing up the first three constraints. Simplifying, we obtain \(\ell_{1}=\ell_{6}\) and \(\ell_{2}=\ell_{3}=\ell_{5}\). Since each variable \(\ell_{i}\) ranges from \(0\) to \(2M\), the number of integer solutions satisfying the constraints is exactly \(\zeta_{2M}(\tau)=(2M+1)^{3}\), and then \(v(\tau)=1\).
To compute (21) we need to enumerate the partitions \(\tau\in\mathcal{T}_{p}\). First of all we notice that \(\mathcal{T}_{p}\) represents the set of partitions of a \(p\)-element set and thus has cardinality \(|\mathcal{T}_{p}|=B(p)\), where \(B(p)\) is the \(p\)-th _Bell number_ or _exponential number_[11], and that the subset \(\mathcal{T}_{p,k}\) has cardinality \(S_{p,k}\), which is a _Stirling number of the second kind_[12]. An effective way to enumerate such partitions is to build a tree of depth \(p\) as in Figure 9. A label is given to each node, starting from the root, which is labeled by "a". The rule for building the tree is as follows: each node \(\mathcal{N}\) generates \(m+1\) leaves, labeled in increasing order starting from "a", where \(m\) is the number of distinct labels in the path from the root to the node \(\mathcal{N}\). Each leaf of the tree identifies a distinct partition of \(\mathcal{P}\), so the number of leaves of such a tree of depth \(p\) is given by the Bell number \(B(p)\).
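This labeling tree is straightforward to implement (a sketch in pure Python; labels are encoded as integers 0, 1, 2, ... instead of "a", "b", "c", ...):

```python
# A sketch of the labeling tree described above: each path from the root to a
# leaf at depth p identifies a partition, and the leaf count equals B(p).
def leaves(p, prefix=(0,)):
    if len(prefix) == p:
        yield prefix
        return
    m = max(prefix) + 1                              # m = number of distinct labels on the path
    for label in range(m + 1):                       # each node generates m+1 children
        yield from leaves(p, prefix + (label,))

for p in range(1, 7):
    print(p, sum(1 for _ in leaves(p)))              # 1, 2, 5, 15, 52, 203: Bell numbers
```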
Using the procedure described above we can derive in closed form any moment of \\(\\lambda\\). Here we report the first few moments:
\\[\\mathbb{E}[\\lambda] = 1\\] \\[\\mathbb{E}[\\lambda^{2}] = 1+\\beta\\] \\[\\mathbb{E}[\\lambda^{3}] = 1+3\\beta+\\beta^{2}\\] \\[\\mathbb{E}[\\lambda^{4}] = 1+6\\beta+\\frac{20}{3}\\beta^{2}+\\beta^{3}\\] \\[\\mathbb{E}[\\lambda^{5}] = 1+10\\beta+\\frac{70}{3}\\beta^{2}+\\frac{40}{3}\\beta^{3}+\\beta^{4}\\]
In practice the algorithm complexity prevents us from computing moments of order greater than \\(p=12\\). To the best of our knowledge, a closed form expression of the generic moment of \\(\\lambda\\) is still unknown. If all moments were available, then an analytic expression of \\(f_{\\beta}(x)\\) could be derived through its moment generating function \\(\\Psi_{\\beta}(s)\\)
\\[\\Psi_{\\beta}(s)=\\int_{0}^{+\\infty}f_{\\beta}(x)\\mathrm{e}^{sx}\\,\\mathrm{d}x= \\sum_{p=0}^{+\\infty}\\frac{\\mathbb{E}[\\lambda^{p}]}{p!}s^{p} \\tag{22}\\]
by applying the inverse Laplace transform.
### _Validation_
We compare the moments of \(\lambda\) obtained by simulation with those obtained with the above closed-form analysis. Table I compares the exact values of the moments of \(f_{\beta}(x)\) and the values obtained by Montecarlo simulation, for \(\beta=0.25,0.50,0.75\) and \(p=1,\ldots,5\). For each value of \(\beta\) the Table shows three columns. The first column, labeled "Sim", presents the values obtained by simulation, using \(M=200\). The second column, labeled "Exact", reports the values obtained using (17) _without_ taking the limit (i.e., using finite values of \(M\) and \(r\)). The third column, labeled "Limit", presents the limit values obtained through (21). The excellent match between simulation and analytic results shows the validity of our findings.
## VI Conclusions
We considered a large-scale wireless sensor network sampling a physical field, and we investigated the relationship between the network topology and the probability of successful field reconstruction. In the case of deterministic sensor locations, we derived some sufficient conditions for successful reconstruction, by reviewing the literature on irregular sampling. Then, we considered random network topologies, and employed random matrix theory. By doing so, we were able to derive some conditions under which the field can be successfully reconstructed with a given probability.
A great deal of work still has to be done. However, to the best of our knowledge, this work is the first attempt at solving the problem of identifying the conditions on random network topologies for the reconstruction of sensor fields. Furthermore, we believe that the basis we provided for an analytical study of the problem can be of some utility in other fields besides sensor networks.
## Appendix A The constraints
Let us consider a vector of integers \(\mathbf{q}\) of size \(p\) partitioning the set \(\mathcal{P}=\{1,\ldots,p\}\) into \(k\) subsets \(\mathcal{P}_{j}\), \(1\leq j\leq k\), and the set of \(k\) constraints
\\[\\sum_{i\\in\\mathcal{P}_{j}}\\ell_{i}-\\ell_{[i+1]}=0. \\tag{23}\\]
We first show that one such constraint is always redundant.
### _Redundant constraint_
Choose an integer \(j\), \(1\leq j\leq k\). Summing together all the constraints except the \(j\)-th, we get
\\[0 = \\sum_{\\begin{subarray}{c}h=1\\\\ h\
eq j\\end{subarray}}^{k}\\sum_{i\\in\\mathcal{P}_{h}}\\ell_{i}-\\ell_{[i+1]} \\tag{24}\\] \\[= \\sum_{i\\in\\mathcal{P}/\\mathcal{P}_{j}}\\ell_{i}-\\ell_{[i+1]}\\] \\[= \\sum_{i\\in\\mathcal{P}}\\ell_{i}-\\ell_{[i+1]}-\\sum_{i\\in\\mathcal{P }_{j}}\\ell_{i}-\\ell_{[i+1]}\\] \\[= -\\sum_{i\\in\\mathcal{P}_{j}}\\ell_{i}-\\ell_{[i+1]}\\]
which gives the \\(j\\)-th constraint
\\[\\sum_{i\\in\\mathcal{P}_{j}}\\ell_{i}-\\ell_{[i+1]}=0.\\]
Thus one of the constraints (19) is always redundant. We now show that the remaining \\(k-1\\) constraints are linearly independent.
### _Linear independence_
The \\(k\\) constraints (19), after some simplifications, can be rearranged in the form
\\[\\mathbf{A}\\mathbf{l}^{\\mathrm{T}}=\\mathbf{0}\\]
where \\(\\mathbf{A}\\) is a \\(k\\times p\\) matrix and \\(\\mathbf{l}=[\\ell_{1},\\ldots,\\ell_{p}]\\). We have previously shown that the rank of \\(\\mathbf{A}\\) is such that
\\[\\rho(\\mathbf{A})\\leq k-1 \\tag{25}\\]
since one constraint is redundant and \\(k\\leq p\\). We prove now that the rank of \\(\\mathbf{A}\\) is exactly \\(k-1\\).
It is possible to write \(\mathbf{A}\) as \(\mathbf{A}=\mathbf{A}^{\prime}-\mathbf{A}^{\prime\prime}\), where \((\mathbf{A}^{\prime})_{ji}=1\) if \(i\in\mathcal{P}_{j}\), and \(0\) elsewhere. The matrix \(\mathbf{A}^{\prime}\) has rank \(k\), since its rows are linearly independent owing to the fact that the subsets \(\mathcal{P}_{j}\) are pairwise disjoint. Similarly, \((\mathbf{A}^{\prime\prime})_{ji}=1\) if \([i-1]\in\mathcal{P}_{j}\), and \(0\) elsewhere. In practice, the matrix \(\mathbf{A}^{\prime\prime}\) is the matrix \(\mathbf{A}^{\prime}\) circularly shifted by one position to the right. Hence it can be written as
\\[\\mathbf{A}^{\\prime\\prime}=\\mathbf{A}^{\\prime}\\mathbf{Z}\\]where \\(\\mathbf{Z}\\) is the \\(p\\times p\\)_right-shift matrix_, i.e. the entries of the \\(i\\)-th row of \\(\\mathbf{Z}\\) are zeroes except for a \"1\" at position \\([i+1]\\). By consequence
\\[\\mathbf{A}=\\mathbf{A}^{\\prime}-\\mathbf{A}^{\\prime}\\mathbf{Z}=\\mathbf{A}^{ \\prime}(\\mathbf{I}_{p}-\\mathbf{Z}),\\]
where
\\[(\\mathbf{I}_{p}-\\mathbf{Z})=\\left[\\begin{array}{ccccc}+1&-1&0&\\cdots&0\\\\ 0&\\ddots&\\ddots&\\ddots&\\vdots\\\\ \\vdots&\\ddots&&\\ddots&0\\\\ 0&&\\ddots&\\ddots&-1\\\\ -1&0&\\cdots&0&+1\\end{array}\\right]\\]
has rank \(\rho(\mathbf{I}_{p}-\mathbf{Z})=p-1\). Consequently, using Sylvester's rank inequality \(\rho(\mathbf{XY})\geq\rho(\mathbf{X})+\rho(\mathbf{Y})-p\) for a product of matrices with inner dimension \(p\), we obtain
\\[\\rho(\\mathbf{A}) = \\rho(\\mathbf{A}^{\\prime}(\\mathbf{I}_{p}-\\mathbf{Z})) \\tag{26}\\] \\[\\geq \\rho(\\mathbf{A}^{\\prime})+\\rho(\\mathbf{I}_{p}-\\mathbf{Z})-p\\] \\[= k-1\\]
Considering together (25) and (26) we conclude \\(\\rho(\\mathbf{A})=k-1\\).
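The rank argument is easy to check numerically; the sketch below builds \(\mathbf{A}^{\prime}\), \(\mathbf{Z}\), and \(\mathbf{A}=\mathbf{A}^{\prime}(\mathbf{I}_{p}-\mathbf{Z})\) for one particular partition (\(p=6\), \(k=3\), chosen arbitrarily for illustration) and verifies \(\rho(\mathbf{A})=k-1\).

```python
# Numerical check of rho(A) = k-1 for A = A'(I_p - Z), where A' encodes
# a partition of {1,...,p} into k subsets and Z is the right-shift matrix.
import numpy as np

p, subsets = 6, [{0, 3}, {1, 4}, {2, 5}]   # k = 3 blocks (0-based indices)
k = len(subsets)

A1 = np.zeros((k, p))
for j, block in enumerate(subsets):
    for i in block:
        A1[j, i] = 1.0                     # (A')_{ji} = 1 iff i in P_j

Z = np.roll(np.eye(p), 1, axis=1)          # right-shift: row i has 1 at [i+1]
A = A1 @ (np.eye(p) - Z)

print(np.linalg.matrix_rank(A1), np.linalg.matrix_rank(A))  # prints: k, k-1
```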
## References
* [1] H. G. Feichtinger, K. Grochenig, T. Strohmer, \"Efficient numerical methods in non-uniform sampling theory,\" _Numerische Mathematik_, Vol. 69, 1995, pp. 423-440.
* [2] M. Perillo, Z. Ignjatovic, W. Heinzelman, "An energy conservation method for wireless sensor networks employing a blue noise spatial sampling technique," _3rd International Symposium on Information Processing in Sensor Networks (IPSN 2004)_, Apr. 2004.
* [3] R. Willett, A. Martin, R. Nowak, \"Backcasting: adaptive sampling for sensor networks,\" _3rd International Symposium on Information Processing in Sensor Networks (IPSN 2004)_, Apr. 2004.
* [4] P. Ishwar, A. Kumar, K. Ramchandran, "Distributed sampling for dense sensor networks: a bit-conservation principle," _2nd International Symposium on Information Processing in Sensor Networks (IPSN 2003)_, Apr. 2003.
* [5] P. Marziliano, M. Vetterli, \"Reconstruction of Irregularly Sampled Discrete-Time Bandlimited Signals with Unknown Sampling Locations,\" _IEEE Transactions on Signal Processing,_ Vol. 48, No. 12, Dec. 2000, pp. 3462-3471.
* [6] A. Tulino and S. Verdu, _Random Matrix Theory and Wireless Communications_, now Publishers, The Netherlands, 2004.
* [7] U. Grenander and G. Szego, _Toeplitz Forms and their Applications_, Berkeley, CA: Univ. of California Press, 1958.
* [8] J. Hightower and G. Borriello, \"Location Systems for Ubiquitous Computing,\" _IEEE Computer_, Vol. 34, No. 8, pp. 57-66, August 2001.
* [9] L. Hu and D. Evans, \"Localization for Mobile Sensor Networks,\" _Tenth Annual International Conference on Mobile Computing and Networking (ACM MobiCom 2004)_, Philadelphia, PA, September-October 2004.
* [10] D. Moore, J. Leonard, D. Rus, and S. Teller, "Robust Distributed Network Localization with Noisy Range Measurements," _Second ACM Conference on Embedded Networked Sensor Systems (SenSys '04)_, Baltimore, MD, November 2004, pp. 50-61.
* [11] E. W. Weisstein, "Bell Number," From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/BellNumber.html
* [12] E. W. Weisstein, "Stirling Number of the Second Kind," From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/StirlingNumberoftheSecondKind.html
Fig. 1: Example of a reconstructed signal from irregular sampling, for \(r=26\), \(M=10\), \(\beta=0.807\)
Fig. 2: Example of a badly reconstructed signal due to numerical instability for \(r=21\), \(M=10\), \(\beta=1\)
Fig. 3: Histograms of \(f_{M,\beta}(x)\) for \(\beta=0.25\) and increasing values of \(M\)
Fig. 5: Cumulative distribution function \(F_{M,\beta}(x)\) in the log-log scale for some values of \(\beta\)
Fig. 9: Partitions tree

Wireless sensor networks are often used for environmental monitoring applications. In this context, sampling and reconstruction of a physical field is one of the most important problems to solve. We focus on a bandlimited field and find under which conditions on the network topology the reconstruction of the field is successful, with a given probability. We review irregular sampling theory and analyze the problem using random matrix theory. We show that even a very irregular spatial distribution of sensors may lead to a successful signal reconstruction, provided that the number of collected samples is large enough with respect to the field bandwidth. Furthermore, we lay the basis for analytically determining the probability of successful field reconstruction.
**Keywords:** Irregular sampling, random matrices, Toeplitz matrix, eigenvalue distribution.
arxiv-format/0707_4067v1.md | # The Equation of State of Dense Matter: from Nuclear Collisions to Neutron Stars
G. F. Burgio
Istituto Nazionale di Fisica Nucleare, Sez. di Catania, Via S. Sofia 64, 95123 Catania, Italy [email protected]
November 3, 2021
## 1 Introduction
In the last few years, the study of the equation of state of nuclear matter has stimulated intense theoretical activity. The interest in the nuclear EoS lies, to a large extent, in the study of compact objects, i.e., supernovae and neutron stars. In particular, the structure of a neutron star is very sensitive to the compressibility and the symmetry energy. For this reason, several phenomenological and microscopic models of the EoS have been developed. The former include nonrelativistic mean field theory based on Skyrme interactions [1] and relativistic mean field theory based on meson-exchange interactions (Walecka model) [2]. The latter include the nonrelativistic Brueckner-Hartree-Fock (BHF) theory [3] and its relativistic counterpart, the Dirac-Brueckner (DB) theory [4, 5], as well as the nonrelativistic variational approach, also corrected by relativistic effects [6]. In these approaches the parameters of the interaction are fixed by the experimental nucleon-nucleon (NN) and/or nucleon-meson scattering data.
One of the most advanced microscopic approaches to the EoS of nuclear matter is the Brueckner theory. In recent years it has made rapid progress in several respects: (i) the convergence of the Brueckner-Bethe-Goldstone (BBG) expansion has been firmly established [7]; (ii) the addition of three-body forces (TBF) has led to agreement with the empirical saturation properties [8, 9]; (iii) the BHF approach has been extended to the description of nuclear matter containing also hyperons [10], thus leading to a more realistic modeling of neutron stars [11, 12].
In the present paper we review these issues and present our results for neutron star structure based on the resulting EoS of dense hadronic matter, also supplemented by an eventual transition to quark matter at high density. A comparison with available experimental data from heavy ion collisions and neutron stars' observations will be discussed.
## 2 The Equation of State from the BBG approach
The Brueckner-Bethe-Goldstone (BBG) theory [3] is based on a linked cluster expansion of the energy per nucleon of nuclear matter. The basic ingredient in this many-body approach is the Brueckner reaction matrix \\(G\\), which is the solution of the Bethe-Goldstone equation
\\[G(\\rho;\\omega)=v+v\\sum_{k_{a}k_{b}}\\frac{|k_{a}k_{b})Q\\langle k_{a}k_{b}|}{ \\omega-e(k_{a})-e(k_{b})}G(\\rho;\\omega), \\tag{1}\\]
where \\(v\\) is the bare nucleon-nucleon (NN) interaction, \\(\\rho\\) is the nucleon number density, \\(\\omega\\) is the starting energy, and \\(|k_{a}k_{b}\\rangle Q\\langle k_{a}k_{b}|\\) is the Pauli operator. \\(e(k)=e(k;\\rho)=\\frac{\\hbar^{2}}{2m}k^{2}+U(k;\\rho)\\) is the single particle energy, and the Brueckner-Hartree-Fock (BHF) approximation for the single particle potential \\(U(k;\\rho)\\) reads \\(U(k;\\rho)=\\sum_{k^{\\prime}\\leq k_{F}}\\langle kk^{\\prime}|G(\\rho;e(k)+e(k^{ \\prime}))|kk^{\\prime}\\rangle_{a}\\) (the subscript \"\\(a\\)\" indicates antisymmetrization of the matrix element). In the BHF approximation the energy per nucleon is
\[\frac{E}{A}=\frac{3}{5}\frac{\hbar^{2}k_{F}^{2}}{2m}+D_{\rm BHF},\qquad D_{\rm BHF}=\frac{1}{2A}\sum_{k,k^{\prime}\leq k_{F}}\langle kk^{\prime}|G(\rho;e(k)+e(k^{\prime}))|kk^{\prime}\rangle_{a}. \tag{2}\]
In this scheme, the only input quantity we need is the bare NN interaction \\(v\\) in the Bethe-Goldstone equation (1). In this sense the BBG approach can be considered as a microscopic approach. However, it is well known that two-body forces are not able to explain some nuclear properties (e.g., binding energy of light nuclei, and saturation point of nuclear matter), and three-body forces (TBF) have to be introduced. In the framework of the Brueckner theory, a rigorous treatment of TBF would require the solution of the Bethe-Faddeev equation, describing the dynamics of three bodies embedded in the nuclear matter. In practice a much simpler approach is employed, namely the TBF is reduced to an effective, density-dependent, two-body force by averaging over the third nucleon in the medium, taking account of the nucleon-nucleon correlations. This effective two-body force is added to the bare two-body force and recalculated at each step of the iterative procedure.
Both phenomenological and microscopic TBF have been used in the BHF approach. The phenomenological TBF is widely used in the literature, in particular the Urbana IX TBF [13] for variational calculations of finite nuclei and nuclear matter [6]; it contains a two-pion exchange potential, which is attractive at low density, and a phenomenological repulsive term, more effective at high density. The microscopic TBF is based on meson-exchange mechanisms accompanied by the excitation of nucleonic resonances [8, 9], and produces a remarkable improvement of the saturation properties of nuclear matter [9].

Let us now compare the EoS predicted by the BHF approximation with the same two-body force (Argonne \(v_{18}\)[14]) and different TBF [15]. In the left panel of Fig. 1 we display the EoS both for symmetric matter (lower curves) and pure neutron matter (upper curves). We show results obtained for several cases, i.e., i) only two-body forces included (long dashed lines), ii) TBF treated within the phenomenological Urbana IX model (dashed lines), and iii) TBF treated within the microscopic meson-exchange approach (solid lines). For completeness, we also show results obtained with variational calculations (full squares) [6]. We notice that the EoS for symmetric matter with TBF reproduces the correct nuclear matter saturation point in all approaches. Moreover, up to a density of \(\rho\approx 0.4\ {\rm fm}^{-3}\) the BHF EoS calculated with TBF are in fair agreement with the variational calculations, whereas at higher density the microscopic TBF turns out to be the most repulsive. In all cases, the incompressibility at saturation is compatible with the values extracted from phenomenology, i.e., \(K\approx 210\ {\rm MeV}\). In the right panel of Fig. 1 we display the symmetry energy as a function of the nucleon density \(\rho\). Within the BHF approach, the symmetry energy has been calculated within the so-called "parabolic approximation" for the binding energy of nuclear matter with arbitrary proton fraction [16]. We observe results in agreement with the characteristics of the EoS shown in the left panel, namely, the stiffest EoS yields larger symmetry energies compared to the ones obtained with the Urbana phenomenological TBF and the variational calculations. This leads to a different proton fraction in beta-stable nuclear matter. We notice that the symmetry energy calculated (with or without TBF) at the saturation point yields a value \(E_{\rm sym}\approx 30\ {\rm MeV}\), compatible with nuclear phenomenology.
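As a schematic illustration of the parabolic approximation, \(E/A(\rho,x)\approx E/A(\rho,1/2)+E_{\rm sym}(\rho)(1-2x)^{2}\), the sketch below extracts \(E_{\rm sym}\) as the difference between the pure neutron matter and symmetric matter energies per nucleon; both energy curves are crude placeholder parametrizations, not the BHF results.

```python
# Schematic "parabolic approximation": E/A(rho, x) ~ E/A(rho, 1/2)
# + E_sym(rho) * (1 - 2x)^2, so E_sym ~ E/A(PNM) - E/A(SNM).
# Both curves below are placeholder fits (MeV, fm^-3), not microscopic EoS.
def e_snm(rho, rho0=0.17, e0=-16.0, K=210.0):
    u = rho / rho0
    return e0 + (K / 18.0) * (u - 1.0) ** 2   # expansion about saturation

def e_pnm(rho, rho0=0.17, s0=30.0, gamma=0.6):
    return e_snm(rho) + s0 * (rho / rho0) ** gamma   # placeholder PNM curve

def e_sym(rho):
    return e_pnm(rho) - e_snm(rho)            # parabolic approximation

print(e_sym(0.17))  # 30.0 MeV at saturation, the value quoted in the text
```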
In the last few years it has become popular to compare the various microscopic and phenomenological EoS with the allowed region in the pressure-density plane, as
Figure 1: In the left panel, symmetric matter (lower curves) and pure neutron matter (upper curves) EoS, calculated within the BBG approach, are shown. Variational many-body calculations are also displayed (full squares). The symmetry energy is shown in the right panel.
determined by Danielewicz et al. [17]. In that paper the authors consider both the in-plane transverse flow and the elliptic flow measured in different experiments on \\(Au+Au\\) collisions at energies between 0.2 and 10 GeV/A. From the data, Danielewicz et al. could estimate the pressure for symmetric matter. In Fig. 2 the set of microscopic EoS discussed is displayed along with the allowed pressure region (shaded area). Both the EoS derived from BHF with Urbana IX TBF and the variational one are in agreement with the phenomenological analysis, while the BHF EoS with microscopic TBF turns out to be only marginally compatible, since at higher density it becomes too stiff and definitely falls outside the allowed region. Additional analyses of flow data, as reported by the FOPI Collaboration [18], and subthreshold \\(K^{+}\\) production [19] confirm a soft equation of state in the same density range (see C. Fuchs contribution to this conference).
## 3 Hyperons in nuclear matter
While at moderate densities \\(\\rho\\approx\\rho_{0}\\) the matter inside a neutron star consists only of nucleons and leptons, at higher densities several other species of particles may appear due to the fast rise of the baryon chemical potentials with density. Among these new particles are strange baryons, namely, the \\(\\Lambda\\), \\(\\Sigma\\), and \\(\\Xi\\) hyperons. Due to its negative charge, the \\(\\Sigma^{-}\\) hyperon is the first strange baryon expected to appear with increasing density in the reaction \\(n+n\\to p+\\Sigma^{-}\\), in spite of its substantially larger mass compared to the neutral \\(\\Lambda\\) hyperon (\\(M_{\\Sigma^{-}}=1197\\ {\\rm MeV},M_{\\Lambda}=1116\\ {\\rm MeV}\\)). Other species might appear in stellar matter, like \\(\\Delta\\) isobars along with pion and kaon condensates.
We have generalized the study of the nuclear EoS with the inclusion of the \\(\\Sigma^{-}\\) and \\(\\Lambda\\) hyperons in the BHF many-body approach. To this purpose, one requires in principle nucleon-hyperon (NH) and hyperon-hyperon (HH) potentials. In our work we use the Nijmegen soft-core NH potential [20], that is well adapted to the existing experimental
Figure 2: Different EoS are compared with the phenomenological constraint extracted by Danielewicz et al. [17] (shaded area). Solid (dashed) line: BHF EoS with microscopic (phenomenological) TBF. Long dashed line : variational EoS.
NH scattering data. Unfortunately, to date no HH scattering data exist and therefore no reliable HH potentials are available. Hence we neglected HH potentials in our BHF calculations [11]. Nevertheless, the importance of HH potentials should be minor as long as the hyperonic partial densities remain limited.
In Fig. 3 (left panel) we show the chemical composition of the resulting beta-stable and asymmetric nuclear matter containing hyperons. We observe rather low hyperon onset densities of about 2-3 times normal nuclear matter density for the appearance of the \\(\\Sigma^{-}\\) and \\(\\Lambda\\) hyperons, almost independently on the adopted TBF. Moreover, an almost equal percentage of nucleons and hyperons are present in the stellar core at high densities. A strong deleptonization of matter takes place, and this can have far reaching consequences for the onset of kaon condensation [21]. The resulting EoS is displayed in the right panel of Fig. 3. The upper curves show the EoS when stellar matter is composed only of nucleons and leptons, whereas the lower curves show calculations with nucleons and hyperons. We notice that the inclusion of hyperons produces a much softer EoS, no matter the TBF adopted in the nucleonic sector. These remarkable results are due to the inclusion of hyperons as additional degrees of freedom, and we do not expect substantial changes when introducing refinements of the theoretical framework, such as hyperon-hyperon potentials, hyperonic TBF, relativistic corrections, etc.
The consequences for the structure of the neutron stars are illustrated in Fig. 4, where we display the resulting neutron star mass-radius curves, obtained solving the Tolman-Oppenheimer-Volkoff equations [22]. We notice that the BHF EoS calculated with the microscopic TBF produces the largest gravitational masses, with the maximum mass of the order of 2.3 \\(M_{\\odot}\\), whereas the phenomenological TBF yields a maximum mass of about 1.9 \\(M_{\\odot}\\). In the latter case, neutron stars are characterized by smaller radii and larger central densities, i.e., the Urbana TBF produce more compact stellar objects. One should notice that, although different TBF still yield quite different maximum masses, the presence of hyperons equalizes the results, leading now to a maximum mass of less than 1.3 solar masses for all the nuclear TBF. This result is in contradiction with the measured value of the Hulse-Taylor pulsar mass, PSR1913+16, which amounts to 1.44
Figure 3: The particle concentrations (left panel) are shown as function of the baryon density. Long dashed curves are calculations performed with the microscopic TBF, whereas solid lines represent the Urbana TBF calculations. In the right panel the corresponding EoS are shown.
\\(M_{\\odot}\\). The only remaining possibility in order to reach significantly larger maximum masses appears to be the transition to another phase of dense (quark) matter inside the star. This is indeed a reasonable assumption, since already geometrically the concept of distinguishable baryons breaks down at the densities encountered in the interior of a neutron star. This will be discussed in the following.
## 4 Quark matter
The results obtained with a purely hadronic EoS call for an estimate of the effects due to the hypothetical presence of quark matter in the interior of the neutron star. Unfortunately, the current theoretical description of quark matter is burdened with large uncertainties, seriously limiting the predictive power of any theoretical approach at high baryonic density. For the time being we can therefore only resort to phenomenological models for the quark matter EoS, and try to constrain them as well as possible by the few experimental information on high-density baryonic matter.
One of these constraints is the phenomenological observation that in heavy ion collisions at intermediate energies (\\(10\\mbox{ MeV}/A\\lesssim E/A\\lesssim 200\\mbox{ MeV}/A\\)) no evidence for a transition to a quark-gluon plasma has been found up to about \\(3\\rho_{0}\\). We have taken this constraint in due consideration, and used an extended MIT bag model [23] (including the possibility of a density dependent bag \"constant\") and the color dielectric model [24], both compatible with this condition [25]. For completeness, we have also used the Nambu-Jona-Lasinio model [26].
In order to study the hadron-quark phase transition in neutron stars, we have performed the Maxwell construction, thus demanding a sharp phase transition. We have found that the phase transition in the extended MIT bag model takes place at a large baryon density, \(\rho\approx 0.6\ {\rm fm}^{-3}\), and at an even larger baryon density in the NJL model [26]. On the contrary, the transition density in the CD model is \(\rho\approx 0.05\ {\rm fm}^{-3}\). This implies a large difference in the structure of hybrid stars. In fact, whereas stars built with the CD model have at most a mixed phase at low density and a pure quark core at higher
Figure 4: The mass-radius relation is plotted for EoS without hyperons (upper curves), and with hyperons (lower curves). See text for details.
density, the ones obtained with the MIT bag model contain a hadronic phase, followed by a mixed phase and a pure quark interior. The scenario is again different within the Nambu-Jona-Lasinio model, where at most a mixed phase is present, but no pure quark phase.
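Schematically, the Maxwell construction amounts to locating the baryon chemical potential at which the two phases have equal pressure. The sketch below illustrates this with two placeholder \(P(\mu)\) curves; the functional forms, coefficients, and the bag-like constant \(B\) are invented for illustration and are not the MIT bag, NJL, or CD results.

```python
# Maxwell construction sketch: locate the hadron-quark transition as the
# crossing of the two P(mu) curves (equal pressure AND equal baryon
# chemical potential).  Both phase EoS below are schematic placeholders.
from scipy.optimize import brentq

def p_hadron(mu):                 # toy hadronic pressure vs. baryon mu (MeV)
    return 2.0e-7 * max(mu - 930.0, 0.0) ** 3

def p_quark(mu, B=2.8e3):         # toy quark pressure minus bag-like constant
    return 1.0e-9 * mu ** 4 - B

def crossing(mu):
    return p_hadron(mu) - p_quark(mu)

mu_c = brentq(crossing, 940.0, 2000.0)      # root of the pressure difference
print(f"transition at mu_B ~ {mu_c:.0f} MeV, P ~ {p_hadron(mu_c):.3g}")
```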
The final result for the structure of hybrid neutron stars is shown in Fig. 5, displaying the mass-radius relation. It is evident that the most striking effect of the inclusion of quark matter is the increase of the maximum mass with respect to the case with hyperons, now reaching about 1.5 \(M_{\odot}\). At the same time, the typical neutron star radius is reduced by about 3 km to typically 9 km. Hybrid neutron stars are thus more compact than purely hadronic ones and their central energy density is larger. In Fig. 5 we also display some observational constraints. The first one demands that any reliable EoS should be able to reproduce the recently reported high pulsar mass of \(2.1\pm 0.2\) M\({}_{\odot}\) for PSR J0751+1807 [27]. Extending this value even to the \(2\sigma\) confidence level (\({}^{+0.4}_{-0.5}M_{\odot}\)) means that masses of at least 1.6 M\({}_{\odot}\) have to be allowed. The other constraint comes from a recent analysis of the thermal radiation of the isolated pulsar RX J1856, which determines a lower bound for its mass-radius relation that implies a rather stiff EoS [28]. Both constraints indicate that the EoS should be rather stiff at high density. Moreover, if quark matter is present in the neutron stars' interiors, this would require additional repulsive contributions in the quark matter EoS.
## 5 Conclusions
In this paper we reported the theoretical description of nuclear matter in the BHF approach and its application to neutron star structure calculations. We pointed out the important role of TBF at high density, which is, however, strongly compensated by the inclusion of hyperons. The resulting hadronic neutron star configurations have maximum masses of less than 1.4 \(M_{\odot}\), and the presence of quark matter inside the star is required in order to reach larger values.
Figure 5: The mass-radius relation is shown for the several cases discussed in the text, along with some observational constraints.
Concerning the treatment of quark matter, we have joined the corresponding EoS with the hadronic one, and reached maximum masses of about 1.7 \\(M_{\\odot}\\). The value of the maximum mass of neutron stars obtained according to our analysis appears rather robust with respect to the uncertainties of the nuclear and the quark matter EoS. Therefore, the experimental observation of a very heavy (\\(M\\gtrsim\\) 1.7 \\(M_{\\odot}\\)) neutron star would suggest that serious problems are present for the current theoretical modelling of the high-density phase of nuclear matter. In any case, one can expect a well defined hint on the high-density nuclear matter EoS.
## References
* [1] Bonche P, Chabanat E, Haensel P, Meyer J, and Schaeffer R 1998, Nucl. Phys. **A635** 231.
* [2] Serot B D, and Walecka J D 1986, Adv. Nucl. Phys. **16** 1.
* [3] Baldo M, in _Nuclear Methods and the Nuclear Equation of State_, International Review of Nuclear Physics, Vol.8, (World Scientific, Singapore, 1999).
* [4] Machleidt R 1989, Adv. Nucl. Phys. **19** 189; Li G Q, Machleidt R, and Brockmann R 1992, Phys. Rev. **C45** 2782; Krastev P G, and Sammarruca F 2006, Phys. Rev. **C74** 025808.
* [5] Gross-Boelting T, Fuchs C, Faessler A 1999, Nucl. Phys.**A 648** 105; van Dalen E, Fuchs C, Faessler A 2004, Nucl. Phys.**A 744** 227; 2005, Phys. Rev. Lett. **95** 022302; 2007, Eur. Phys. J. **A 31** 29.
* [6] Akmal A, Pandharipande V R, and Ravenhall D G 1998, Phys. Rev. **C58** 1804.
* [7] Day B D 1981, Phys. Rev. **C24** 1203; Song H Q, Baldo M, Giansiracusa G, and Lombardo U 1998, Phys. Rev. Lett. **81** 1584; Baldo M, Fiasconaro A, Song H Q, Giansiracusa G, and Lombardo U 2002, Phys. Rev. **C65** 017303.
* [8] Grange P, Lejeune A, Martzloff M, and Mathiot J-F 1989, Phys. Rev. **C40** 1040.
* [9] Zuo W, Lejeune A, Lombardo U, and Mathiot J-F 2002, Prog. Theor. Phys. Suppl. **146** 478.
* [10] Schulze H-J, Baldo M, Lombardo U, Cugnon J, and Lejeune A 1998, Phys. Rev. **C57** 704.
* [11] Baldo M, Burgio G F, and Schulze H-J 2000, Phys. Rev. **C61** 055801.
* [12] Schulze H-J, Polls A, Ramos A, and Vidana I 2006, Phys. Rev. **C73** 058801.
* [13] Carlson J, Morales J, Pandharipande V R, and Ravenhall D G 2003, Phys. Rev. **C68** 025802.
* [14] Wiringa R B, Stoks V G J, and Schiavilla R 1995, Phys. Rev. **C51** 38.
* [15] Zhou X R, Burgio G F, Lombardo U, Schulze H-J and Zuo W 2004, Phys. Rev. **C 69** 018801.
* [16] Bombaci I, and Lombardo U 1991, Phys. Rev. **C44** 1892.
* [17] Danielewicz P, Lacey R, and Lynch W G 2002, Science **298** 1592.
* [18] Stoicea G et al. [FOPI Collaboration] 2004, Phys. Rev. Lett. **92** 072303.
* [19] Sturm C et al. [KaoS Collaboration] 2001, Phys. Rev. Lett. **86** 39; Schmah A et al. [KaoS Collaboration] 2005, Phys. Rev. **C71** 064907.
* [20] Maessen P M M, Rijken Th A, and de Swart J J 1989, Phys. Rev. **C40** 2226.
* [21] Li A, Burgio G F, Lombardo U, and Zuo W 2006, Phys. Rev. **C74** 055801.
* [22] Shapiro S and Teukolsky S A, _Black Holes, White Dwarfs, and Neutron Stars_, (John Wiley & Sons, New York, 1983)
* [23] Chodos A, Jaffe R L, Johnson K, Thorn C B, and Weisskopf V F 1974, Phys. Rev. **D9** 3471.
* [24] Pirner H J, Chanfray G, and Nachtmann O 1984, Phys. Lett. **B147** 249; Drago A, Tambini U, and Hjorth-Jensen M 1996, Phys. Lett. **B380** 13.
* [25] Maieron C, Baldo M, Burgio G F, and Schulze H-J 2004, Phys. Rev. **D70** 043010.
* [26] Baldo M, Buballa M, Burgio G F, Neumann F, Oertel M, and Schulze H-J 2003, Phys. Lett. **B562** 153.
* [27] Nice D J, Splaver E M, Stairs I H, Lohmer O, Jessner A, Kramer M, and Cordes J M 2005, Astrophys. J. **634** 1242.
* [28] Trumper J E, Burwitz V, Haberl F, and Zavlin V E 2004, Nucl. Phys. Proc. Suppl. **132** 560.

The Equation of State (EoS) of dense matter represents a central issue in the study of compact astrophysical objects and heavy ion reactions at intermediate and relativistic energies. We have derived a nuclear EoS with nucleons and hyperons within the Brueckner-Hartree-Fock approach, and joined it with quark matter EoS. For that, we have employed the MIT bag model, as well as the Nambu-Jona-Lasinio (NJL) and the Color Dielectric (CD) models, and found that the NS maximum masses are not larger than 1.7 solar masses. A comparison with available data supports the idea that dense matter EoS should be soft at low density and quite stiff at high density.
arxiv-format/0707_4193v2.md | # On the Interaction of Jupiter's Great Red Spot and Zonal Jet Streams
Sushil Shetty\\({}^{1}\\)
\\({}^{1}\\)Dept. of Mechanical Engineering, University of California, Berkeley, CA 94720
\\({}^{2}\\)Applied Science and Technology Program, University of California, Berkeley, CA 94720
Xylar S. Asay-Davis\\({}^{2}\\)
\\({}^{1}\\)Dept. of Mechanical Engineering, University of California, Berkeley, CA 94720
\\({}^{2}\\)Applied Science and Technology Program, University of California, Berkeley, CA 94720
Philip S. Marcus\\({}^{1,2}\\)
\\({}^{1}\\)Dept. of Mechanical Engineering, University of California, Berkeley, CA 94720
\\({}^{2}\\)Applied Science and Technology Program, University of California, Berkeley, CA 94720
November 8, 2018
## 1 Introduction
Only a relatively thin (\(\sim\)10 km) outer layer of Jupiter's atmosphere containing the visible clouds and vortices is accessible by direct observation. Most of the details of the underlying layers, such as the vertical stratification, must therefore be determined indirectly. In this paper, we present one such indirect method. In particular, we use
We are not the first to use the GRS velocity field as a probe of the Jovian atmosphere (Dowling and Ingersoll 1988, 1989; Cho et al. 2001). However, our approach differs from previous ones in several significant respects. First, the GRS velocity field is sufficiently noisy that we do not, unlike in previous analyses, take spatial derivatives of the velocity to compute potential vorticity. Instead, we solve the _inverse problem_: We identify several \"traits\" of the GRS velocity field, where a trait is a feature of the velocity field that is unambiguously quantifiable from the noisy data. We then construct a model for the flow and determine \"best-fit\" values for the model parameters such that the model velocity field reproduces the observed traits. Furthermore, for a given set of parameter values, we construct the model velocity field so that it is an _exact steady solution_ of the equations that govern the flow. For the _Voyager_ 1 data, we find that a best-fit model (i.e., a trait-reproducing steady solution) determined in this manner agrees with the entire GRS velocity field to within the observational uncertainties.
A second way in which our study differs from previous ones is that we explicitly compute the interaction between the GRS and its neighboring jet streams. We show that the interaction controls the aspect ratio of the GRS's potential vorticity anomaly, which is relevant to recent observations that show the aspect ratio of the GRS's cloud cover to be a function of time (Simon-Miller et al. 2002). The changing cloud cover, if symptomatic of changes in the GRS's potential vorticity anomaly, would be indicative of a change in the interaction and a corresponding change in the best-fit values of the parameters that govern the interaction. Finally, in this study, we quantify the relationship between individual traits and individual parameters. When a trait is nearly independent of all parameters except for one or two, a clear physical understanding is obtained between \"cause\" (a model parameter) and \"effect\" (a GRS trait).
Our philosophy is to use a model with the fewest free parameters that is an exact steady solution of the least complex governing equation, yet can still reproduce the observed velocity to within its uncertainties. The danger of more complex models is that they have more degrees of freedom; by varying parameters they can fit the observed velocity yet misidentify the relevant physics. For the _Voyager_ 1 data considered here, we use the 1.5-layer reduced gravity quasigeostrophic (QG) equations and a model with nine free parameters. The _Voyager_ 1 data can be reproduced with this model and _does not warrant_ models with more free parameters or governing equations with more complexity.
The rest of the paper is organized as follows. In SS2 we determine the GRS velocity field from _Voyager_ 1 observations and then identify traits of the velocity. In SS3 we review the governing equations and describe a decomposition of the flow around the GRS into a near-field, a far-field, and an interaction-field. In SS4 we define the model and list its free parameters. In SS5 we determine best-fit parameter values, i.e., parameter values for which the model reproduces the traits. In SS6 we discuss the physical implications of the best-fit model, and in SS7 conclude with an outline for future work.
## 2 GRS velocity field
### Determination of GRS velocity
In Mitchell et al. (1981), _Voyager_ 1 images were used to determine the GRS velocity field by dividing the displacement of a cloud feature in a pair of images by the time interval between the images (typically one Jovian day or \\(\\approx\\)10 hours). The cloud features were identified by hand rather than by an automated approach such as Correlation-Image-Velocimetry (CIV: Fincham and Spedding 1997), and may therefore contain spurious velocities on account of misidentifications. Furthermore, dividing cloud displacement by time does not account for the curvature of a cloud trajectory, since in ten hours, a cloud feature in the high-speed collar travels almost a third of the way across the GRS. However, due to the unavailability of the original navigated images, we use the Mitchell velocities, but remove some of the errors by a procedure described in appendix A. The procedure leads to the removal of 220 of the original 1100 measured cloud displacements and the addition of 7100 synthetic measurements. The net result is that the uncertainty in the velocity field is reduced from \\(\\approx 9\\) m s\\({}^{-1}\\) to \\(\\approx 7\\) m s\\({}^{-1}\\). Fig. 1 shows the processed GRS velocity field. Consistent with previous analyses, the velocity field shows a quiescent core and high-speed collar. The inner part of the collar has anticyclonic vorticity, and the outer part has cyclonic vorticity. The peak velocities in the collar are \\(140\\pm 7\\) m s\\({}^{-1}\\) and the peak velocities in the core are \\(7\\pm 7\\) m s\\({}^{-1}\\).
The GRS is embedded in a zonal (east-west) flow. The zonal mean of this flow, averaged over 142 Jovian days, was computed from _Voyager_ 2 images (Limaye 1986), and is shown in Fig. 2. Between \\(15^{\\circ}\\)S\\({}^{1}\\) and \\(30^{\\circ}\\)S, the profile is characterized by a westward-going jet stream that peaks at \\(\\approx 19.5^{\\circ}\\)S, and an eastward-going jet stream that peaks at \\(\\approx 26.5^{\\circ}\\)S. The uncertainty in the profile is 7 m s\\({}^{-1}\\). (Note that most likely due to navigational errors (Limaye 1986), the published profile must be shifted north by \\(0.5^{\\circ}\\) so as to be consistent with the navigated latitudes of _Voyager_ 1 in Fig. 1.) The GRS was observed to drift westward at a rate of 3-4 m s\\({}^{-1}\\) with respect to System III during the _Voyager_ epoch (Dowling and Ingersoll 1988).
### Pitfalls to be avoided when analyzing GRS velocity
We do not compute quantities by taking spatial derivatives of the velocity data, as this tends to amplify small length scale noise. For example, in the high-speed collar, we found that the uncertainty in vorticity obtained by differentiating the velocity is \\(\\approx 35\\%\\) of the maximum vorticity. If vorticities must be found, it is usually better to integrate the velocity to obtain a circulation and then divide by an area to obtain a local average vorticity. We also do not average the velocity locally, which is a standard way of reducing noise. For example, if the GRS velocity is averaged over length scales greater than \\(2^{\\circ}\\), the peak velocities and vorticities are severely diminished. This is due to the fact that an averaging length of \\(2^{\\circ}\\) is too large; it corresponds to \\(\\approx 2500\\) km, which is the length scale over which the velocity changes by order unity (cf. the width of the high-speed collar). Finally, we do not obtain a quantity by adding two numbers of similar magnitude but opposite sign, so that the resulting sum is of order or smaller than the uncertainty in each of the numbers being summed. For example, if the velocity is assumed to be divergence-free, the vertical derivative of the vertical velocity \\(\\partial v_{z}/\\partial z\\) can be obtained by computing the negative of the horizontal divergence \\(\\partial v_{x}/\\partial x+\\partial v_{y}/\\partial y\\). However, a simple scaling argument shows that the horizontal divergence is smaller than each partial derivative term separately, and
Figure 1: Velocity vectors of the GRS with respect to System III as determined from _Voyager_ 1 images. The velocities were determined by dividing the displacement of a cloud feature in a pair of images by the time between the two images (Mitchell et al., 1981), and then correcting for the fact that cloud trajectories over typical image separation times are not straight lines (see §2a). The \\(23^{\\circ}\\)S latitude is defined to be the principal east-west (E–W) axis, and the \\(77^{\\circ}\\)W longitude is defined to be the principal north-south (N–S) axis.
in particular, is of the same order as the uncertainty in each term (which is relatively large because the terms are derivatives of noisy data). Thus \\(\\partial v_{z}/\\partial z\\) computed in this fashion would have order unity uncertainties.
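As an illustration of the circulation-based alternative advocated above, the sketch below integrates a velocity field around a closed contour and divides by the enclosed area; by Stokes' theorem this returns the average vorticity without differentiating the data. The solid-body velocity field used here is a synthetic stand-in, not the GRS data.

```python
# Average vorticity from circulation: zeta_avg = (1/A) * closed-integral v.dl,
# which avoids differentiating noisy velocity data pointwise.
import numpy as np

def velocity(x, y):                      # synthetic solid-body test flow
    omega = 2.0e-5                       # s^-1; true vorticity is 2*omega
    return -omega * y, omega * x

R, n = 5.0e5, 400                        # contour radius (m), sample points
theta = np.linspace(0.0, 2*np.pi, n, endpoint=False)
xs, ys = R*np.cos(theta), R*np.sin(theta)
vx, vy = velocity(xs, ys)

# tangential component times arc length, summed around the circle
dl = 2*np.pi*R/n
circulation = np.sum((-vx*np.sin(theta) + vy*np.cos(theta)) * dl)
print(circulation / (np.pi*R**2))        # ~4e-5 s^-1 = 2*omega
```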
### Traits of GRS velocity
The traits that we consider are derived from the north-south (N-S) velocity along the principal east-west (E-W) axis and from the east-west (E-W) velocity along the principal north-south (N-S) axis. The E-W and N-S principal axes are defined to be the 23\\({}^{\\circ}\\)S latitude and the 77\\({}^{\\circ}\\)W longitude respectively. The point of intersection of the principal axes is roughly the centroid of the GRS as inferred from its clouds. The velocity profiles along the axes are shown in Fig. 3. To better understand the pitfalls of local averaging, Figs. 3a and 3c show the velocities from Fig. 1 for points that lie within \\(\\pm 0.7^{\\circ}\\) of the axes, while Figs. 3b and 3d show the velocities that lie within \\(\\pm 1.4^{\\circ}\\) of the axes. The axes labels \\(x\\) and \\(y\\) in the figure denote local E-W and N-Scartesian coordinates. Based on the figure, we define the following to be _traits_ of the velocity field: (1) the northward-going jet and southward-going jet in Figs. 3(a)-(b) that peak at \\(x=\\pm 9750\\pm 500\\) km respectively and have peak magnitude \\(V_{\\rm max}^{NS}=95\\pm 7\\) m s\\({}^{-1}\\), (2) the small magnitude N-S velocity in \\(|x|\\leq 6000\\) km, (3) the eastward-going jet and westward-going jet in Figs. 3(c)-(d) that peak at \\(y=-3500\\pm 500\\) km and \\(y=5500\\pm 500\\) km respectively, and have peak magnitude \\(140\\pm 7\\) m s\\({}^{-1}\\), (4) the small magnitude E-W velocity in \\(|y|\\leq 2000\\) km. Traits (2) and (4) illustrate the quiescent interior of the GRS. Traits (1) and (3) illustrate the high-speed collar. The uncertainties in peak velocities are from the global estimate in SS2a. The uncertainties in peak locations are not rigorous. They are from an estimate of the spatial scatter of points near the peak location. Henceforth, traits (1) and (2) will be referred to as the N-S velocity traits, and traits (3) and (4) as the E-W velocity traits.
## 3 Governing equations
### 1.5-layer reduced gravity QG approximation
We do not model the whole sphere, but only a domain that extends from \\(15^{\\circ}\\)S to \\(30^{\\circ}\\)S. For the flow in this domain, we adopt the 1.5-layer reduced gravity QG equations on a beta-plane (Ingersoll and Cuong 1981). A derivation of the equations and the justification for their use can be found in Dowling (1995). Briefly, the layers correspond to an upper layer (also called \"weather\" layer) of constant density \\(\\rho_{1}\\) and a much deeper lower layer of constant density \\(\\rho_{2}>\\rho_{1}\\). The upper layer contains the visible clouds and vortices while the lower layer contains a steady zonal flow. The two layers are dynamically equivalent to a single layer with rigid bottom topography \\(h_{b}\\) and effective gravity \\(g\\equiv g_{J}(\\rho_{2}-\\rho_{1})/\\rho_{2}\\), where \\(g_{J}\\) is the true gravity in the weather layer, and the bottom topography is a parametrization of the flow in the lower layer. The governing equation for the system advectively conserves a potential vorticity \\(q\\):
\[\frac{Dq}{Dt}\equiv\left(\frac{\partial}{\partial t}+{\bf v}\cdot\nabla\right)q=0, \tag{1}\]
\[q(x,y,t)\equiv\nabla^{2}\psi-\frac{\psi}{L_{r}^{2}}+\frac{gh_{b}(y)}{L_{r}^{2}f_{0}}+\beta y. \tag{2}\]
Here \\(x\\) and \\(y\\) are the local E-W and N-S coordinates, \\(\\psi\\) is the streamfunction, \\({\\bf v}\\equiv{\\bf\\hat{z}}\\times\
abla\\psi\\) is the weather layer velocity, \\({\\bf\\hat{z}}\\) is the local vertical unit vector, \\(\\beta\\) is the local gradient of the Coriolis parameter \\(f(y)\\), \\(f_{0}\\) is the local value of \\(f(y)\\), and \\(L_{r}\\) is the local Rossby deformation radius. Since \\(g\\) appears only in combination with \\(h_{b}\\), we shall refer to \\(gh_{b}\\) as the bottom topography. Restricting \\(gh_{b}\\) to be a function of \\(y\\) alone restricts the flow in the lower layer to be steady and zonal with no vortices. The case \\(gh_{b}=0\\), or \"flat\" bottom topography, corresponds to the lower layer being at rest in the rotating frame.
### The near-field
We assume that the GRS is a compact region (or patch) of anomalous potential vorticity. We denote the potential vorticity distribution of the GRS by \\(q^{GRS}(x,y)\\), and for reasons that will become clear below, we refer to \\(q^{GRS}\\) as the near-field. We define the streamfunction and velocity of the near-field to be:
\[q^{GRS}(x,y)\equiv\left(\nabla^{2}-1/L_{r}^{2}\right)\psi^{GRS}(x,y) \tag{3}\]
\[\mathbf{v}^{GRS}\equiv\mathbf{\hat{z}}\times\nabla\psi^{GRS}. \tag{4}\]
The velocity induced by a QG patch decays as \\(\\exp(-r/L_{r})\\), where \\(r\\) is the distance from the patch boundary (Marcus, 1990, 1993). Due to the exponential decay of
Figure 3: Parts (a) and (b) show the north-south component of the velocity from Fig. 1, for points that lie within \\(0.7^{\\circ}\\) and \\(1.4^{\\circ}\\) respectively, of the principal east-west axis. Parts (c) and (d) show the east-west component of the velocity for points that lie within \\(0.7^{\\circ}\\) and \\(1.4^{\\circ}\\) respectively, of the principal north-south axis. Parts (a), (b), (c), and (d) contain 903, 1992, 586, and 1163 points respectively.
velocity, a region of fluid that contains the patch and whose average radius is a few \\(L_{r}\\) greater than the patch radius will have a circulation (or integrated vorticity) that is approximately zero. It would therefore be incorrect under the QG approximation to refer to the vorticity of the GRS as anticyclonic since its net vorticity is zero. On the other hand, the _potential vorticity_ of the GRS is anticyclonic, as is the vorticity of most of its quiescent interior and the inner portion of its high-speed collar, but the vorticity of the outer portion of its collar is cyclonic (which is easily verified by noting that the azimuthal velocity in that region falls off faster than \\(1/r\\)).
### The far-field
The region of flow two or three deformation radii distant from the patch boundary, where the influence of the GRS is small, is defined to be the far-field. We assume the far-field flow to be zonal and independent of time and longitude. Eq. 2 then provides a relationship between the far-field velocity \\(\\mathbf{v}^{\\infty}\\equiv v_{x}^{\\infty}(y)\\mathbf{\\hat{x}}\\), and the far-field potential vorticity \\(q^{\\infty}(y)\\):
\\[q^{\\infty}(y)=\\left(\\frac{d^{2}}{dy^{2}}-\\frac{1}{L_{r}^{2}}\\right)\\psi^{ \\infty}(y)+\\frac{gh_{b}(y)}{f_{0}L_{r}^{2}}+\\beta y. \\tag{5}\\]
For all calculations in this paper, \\(v_{x}^{\\infty}(y)\\) is prescribed from Fig. 2 and the corresponding streamfunction \\(\\psi^{\\infty}\\) from \\(-\\int v_{x}^{\\infty}\\ dy\\). At \\(22.5^{\\circ}\\)S, which is the center of the domain, \\(\\beta=4.6\\times 10^{-12}\\) m\\({}^{-1}\\) s\\({}^{-1}\\) and \\(f_{0}=-1.4\\times 10^{-4}\\) s\\({}^{-1}\\). Thus if \\(L_{r}\\) were known, Eq. 5 shows that specifying \\(q^{\\infty}\\) is equivalent to specifying \\(gh_{b}\\).
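Given a zonal profile \(v_{x}^{\infty}(y)\), the quantities entering Eq. 5 follow by quadrature and differencing; the sketch below uses a synthetic two-jet profile as a stand-in for the Limaye (1986) data, and the jet amplitudes and positions are illustrative placeholders.

```python
# From a zonal profile v_x(y): psi = -integral(v_x dy), then the first
# two terms of Eq. (5), (d^2/dy^2 - 1/L_r^2) psi + beta*y.  The jet
# profile here is a synthetic stand-in for the observed one.
import numpy as np

Lr, beta = 2.4e6, 4.6e-12
y = np.linspace(-9.0e6, 9.0e6, 2001)             # local N-S coordinate (m)
dy = y[1] - y[0]

vx = (-40.0 * np.exp(-((y - 4.0e6) / 1.5e6)**2)  # westward jet (m/s)
      + 60.0 * np.exp(-((y + 4.0e6) / 1.5e6)**2))  # eastward jet (m/s)

psi = -np.cumsum(vx) * dy                        # psi = -int v_x dy
d2psi = np.gradient(np.gradient(psi, dy), dy)

q_minus_topo = d2psi - psi / Lr**2 + beta * y    # Eq. (5) without g*h_b term
print(q_minus_topo[::500])
```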
### The interaction field
Let \\(q(x,y)\\) be a steady solution of QG Eqs. 1-2 that consists of an anomalous patch of potential vorticity embedded in a far-field flow that is zonal and steady. We decompose \\(q\\) into three components:
\\[q(x,y)\\equiv q^{\\infty}(y)+q^{GRS}(x,y)+q^{INT}(x,y). \\tag{6}\\]
The superposition of \\(q^{GRS}\\) and \\(q^{\\infty}\\) is not an exact solution because the far-field flow is deflected around the patch. We define the interaction potential vorticity \\(q^{INT}\\) to represent the deflection of flow such that the total \\(q\\) given by Eq. 6 is an exact solution of the QG equations. Note that by definition, \\(q^{INT}\\) asymptotes to zero both far from and near the patch. We define the interaction streamfunction and velocity to be:
\[q^{INT}(x,y)\equiv\left(\nabla^{2}-1/L_{r}^{2}\right)\psi^{INT}(x,y) \tag{7}\]
\[\mathbf{v}^{INT}\equiv\mathbf{\hat{z}}\times\nabla\psi^{INT}. \tag{8}\]
With these definitions, the total velocity \\(\\mathbf{v}\\) and its streamfunction \\(\\psi\\) are superpositions of the near, interaction, and far-field components:
\\[\\psi=\\psi^{\\infty}+\\psi^{GRS}+\\psi^{INT} \\tag{9}\\]\\[\\mathbf{v}=\\mathbf{v}^{\\infty}+\\mathbf{v}^{GRS}+\\mathbf{v}^{INT}. \\tag{10}\\]
Note that in the linear relationships between the potential vorticity and streamfunction given by Eqs. 5, 3, and 7, it is only the far-field component that contains the inhomogeneous bottom topography and \\(\\beta\\) terms.
## 4 Model definition
### Model for far-field \\(q^{\\infty}\\)
Laboratory experiments (Sommeria et al. 1989; Solomon et al. 1993) and numerical simulations (Cho and Polvani 1996; Marcus et al. 2000) show that if the weather layer is stirred and allowed to come to equilibrium, the potential vorticity organizes itself into a system of east-west bands. The bands have approximately uniform \\(q\\) and are separated by steep meridional gradients of \\(q\\). The meridional gradients are all positive (i.e., have the sign as \\(\\beta\\)) so that \\(q(y)\\) monotonically increases from the south to the north pole like a \"staircase\". The corresponding \\(\\mathbf{v}^{\\infty}\\) has alternating eastward-going and westward-going jet streams, with eastward-going jet streams occurring at every gradient or \"jump\". Recent measurements (Read et al. 2006b) of the Jovian \\(q^{\\infty}\\) are not entirely consistent with this picture, for they show gradients near _both_ eastward-going and westward-going jet streams. We therefore model \\(q^{\\infty}\\) between 30\\({}^{\\circ}\\)S and 15\\({}^{\\circ}\\)S by:
\\[q^{\\infty}(y)\\equiv\\sum_{i=1}^{2}\\frac{\\Delta Q_{i}}{2}\\left(\\tanh\\frac{y-y_ {i}}{\\delta_{i}}+1\\right). \\tag{11}\\]
The jumps for this model occur at \\(y_{i}\\) and have strength \\(\\Delta Q_{i}\\), where \\(\\Delta Q_{i}\\) can be positive or negative. The strictly positive \\(\\delta_{i}\\) are a measure of the steepness of each jump. For all results in this paper, the jump locations were fixed at \\(y_{1}=26.0^{\\circ}\\)S and \\(y_{2}=20.0^{\\circ}\\)S, which are near the jet streams in Fig. 2. The free parameters for the model are \\(\\Delta Q_{i}\\) and \\(\\delta_{i}\\) for i=1,2. (Models with up-to four jumps near each jet stream were also tested, and the results were consistent with the ones presented here.)
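The two-jump profile of Eq. 11 is simple to tabulate; in the sketch below only the qualitative placement of the jumps follows the text, while the jump strengths \(\Delta Q_{i}\), widths \(\delta_{i}\), and the conversion of the jump latitudes to local coordinates are illustrative placeholders.

```python
# Far-field model of Eq. (11): two tanh "jumps" in q_inf(y), centered
# near the westward and eastward jet streams.  Jump strengths, widths,
# and local jump positions below are illustrative placeholders.
import numpy as np

def q_inf(y, dQ=(6.0e-5, -4.0e-5), yc=(-3.9e6, 2.8e6), delta=(8.0e5, 8.0e5)):
    # yc: rough local N-S positions of the 26.0S and 20.0S jumps (placeholders)
    return sum(0.5 * dq * (np.tanh((y - y0) / d) + 1.0)
               for dq, y0, d in zip(dQ, yc, delta))

y = np.linspace(-9.0e6, 9.0e6, 7)
print(q_inf(y))
```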
### Model for near-field \\(q^{GRS}\\)
We model the spatially compact \(q^{GRS}\) as a piecewise-constant function obtained by the superposition of \(M\) nested patches of uniform potential vorticity. The patches are labelled \(i=1,2,\cdots,M\) from innermost to outermost patch. The principal E-W diameter of a patch is denoted by \((D_{x})_{i}\), and we define \(q^{GRS}_{i}\) such that \(q^{GRS}=q^{GRS}_{1}\) within the boundary of the innermost patch (\(i=1\)), \(q^{GRS}=q^{GRS}_{2}\) between the boundary of the innermost patch (\(i=1\)) and the boundary of the next larger patch (\(i=2\)), and so on. The free parameters for the model are \(M\), \(q^{GRS}_{i}\), and \((D_{x})_{i}\) for \(i=1,2,\cdots,M\). Once the free parameters for \(q^{GRS}\) and \(q^{\infty}\) are specified, along with the value of \(L_{r}\), the iterative method given in appendix B can be used to compute the interaction-field such that the total \(q\) is a steady solution of the governing equations.
Note that the _shapes_ of patch boundaries are not free but are also computed by the iterative method.
## 5 Determination of best-fit parameter values
### Decoupling of N-S velocity traits from far-field \\(q^{\\infty}\\)
Here we show that the N-S velocity traits are insensitive to the far-field potential vorticity described by Eq. 11. Fig. 4 shows a model computed using the iterative method in appendix B for the parameter values given in Table 2. The middle column of Fig. 4 shows that for the E-W velocity along the N-S axis, all three velocity components, \\(\\mathbf{v}^{\\infty}\\), \\(\\mathbf{v}^{GRS}\\), and \\(\\mathbf{v}^{INT}\\), contribute significantly. However, the rightmost column shows that for the N-S velocity along the E-W axis, \\(\\mathbf{v}^{\\infty}\\) has no contribution (by definition), and the contribution of \\(\\mathbf{v}^{INT}\\) is negligibly small2. Only \\(\\mathbf{v}^{GRS}\\) contributes significantly. We therefore conclude that the N-S traits depend primarily on parameters associated with \\(q^{GRS}\\) and are insensitive to parameters associated with \\(q^{\\infty}\\). This decoupling leads to a logical order for determining the best-fit parameter values. The ordering is given in Table 1 and begins with the determination of \\((D_{x})_{1}\\) from the N-S traits. A more rigorous justification for the ordering is given in appendix C.
Footnote 2: The contribution of \\(\\mathbf{v}^{INT}\\) is small because \\(q^{INT}\\) comprises two highly (E–W)–elongated slivers north and south of the GRS (first column, bottom row of Fig. 4). The associated \\(\\mathbf{v}^{INT}\\) follows highly (E–W)–elongated closed streamlines approximately concentric to \\(q^{INT}\\). Therefore, along the E–W axis, \\(\\mathbf{v}^{INT}\\) is primarily in the E–W direction.
### Determination of best-fit \\(L_{r}\\) and \\(q^{GRS}\\) from N-S velocity traits
Here we show that an \\(M=2\\) model is sufficient to capture the N-S velocity traits to within the observational uncertainties. For brevity, the terms _interior_ and _exterior_ are used in reference to the regions \\(|x|<(D_{x})_{1}/2\\) and \\(|x|>(D_{x})_{1}/2\\) respectively.
For \\(M=1\\), \\(q^{GRS}\\) is a patch of uniform potential vorticity. Models were computed
\\begin{table}
\\begin{tabular}{|l|l|} \\hline Observable trait & Model Parameter \\\\ \\hline Distance between peaks in N–S velocity along E–W axis & E–W diameter of GRS’s potential vorticity: \\((D_{x})_{1}\\) \\\\ Magnitude of peak N–S velocity along E–W axis & Family of possible \\(q_{1}^{GRS}\\) and \\(L_{r}\\) \\\\ N–S velocity along E–W axis for \\(|x|\\geq(D_{x})_{1}/2\\) & Unique \\(q_{1}^{GRS}\\) and \\(L_{r}\\) from family \\\\ N–S velocity along E–W axis for \\(|x|\\leq(D_{x})_{1}/2\\) & GRS’s interior potential vorticity: \\(q_{2}^{GRS}\\), \\((D_{x})_{2}\\) \\\\ E–W velocity along N–S axis & Far-field potential vorticity: \\(\\Delta Q_{i}\\), \\(\\delta_{i}\\) \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Relationship between an observable trait of the Great Red Spot and the model parameter that it determines. The ordering of the table, from top to bottom, is the order in which each model parameter is determined. A rigorous justification for the ordering is given in appendix C.
for different values of \\((D_{x})_{1}\\), \\(q_{1}^{GRS}\\), and \\(L_{r}\\). For each model, the peaks of the N-S velocity along the E-W axis were found to occur at \\(x=\\pm(D_{x})_{1}/2\\). The best-fit value of \\((D_{x})_{1}=19500\\) km was therefore inferred from trait 1 in SS2c. Next, the best-fit values of \\(q_{1}^{GRS}\\) and \\(L_{r}\\) were constrained using the observed peak magnitude \\(V_{\\rm max}^{NS}\\) of the N-S velocity along the E-W axis. In particular, for a given value of \\(L_{r}\\), the value of \\(q_{1}^{GRS}\\) was chosen so that the model reproduced the observed peak magnitude. By repeating this process for several values of \\(L_{r}\\), a two-parameter family (i.e., a function of \\(L_{r}\\) and \\(q_{1}^{GRS}\\)) of models that simultaneously capture the observed peak locations
Figure 4: (left to right) Potential vorticity, E–W velocity along the N–S axis, and N–S velocity along the E–W axis. A gray-scale map is used for potential vorticity with black representing the most cyclonic fluid and white the most anticyclonic fluid. (top to bottom) The components due to the total \\(q\\), the components due to \\(q^{\\infty}\\) alone, the components due to \\(q^{GRS}\\) alone, and the components due to \\(q^{INT}\\) alone. The total \\(q\\) is a (uniformly translating) solution of the QG Equations. It was computed for the parameter values in Table 2 using the iterative method in appendix B. Note that the slivers of \\(q^{INT}\\) (left column bottom row) are due to the displacement of the “jumps” or steep gradients in \\(q^{\\infty}\\) as they follow streamlines that deflect around the GRS.
and peak magnitude was obtained. Some family members are shown in Fig. 5. Note that the models do not capture the observed width of the northward- and southward-going jets. In particular, for sufficiently small \(L_{r}\) (\(L_{r}=1300\) km), the model captures the rate of velocity fall-off in the interior but not in the exterior. For sufficiently large \(L_{r}\) (\(L_{r}=2400\) km) the opposite is true. For other values of \(L_{r}\), the rate of fall-off is too fast or too slow in both regions. To overcome this shortcoming, models with \(M=2\) were considered.
For \\(M=2\\), \\(q^{GRS}\\) is the superposition of two nested patches of uniform potential vorticity. Best-fit values of \\(L_{r}\\), \\(q_{1}^{GRS}\\), and \\((D_{x})_{1}\\) were taken from the \\(M=1\\) model in Fig. 5 that captures the velocity profile in the exterior. As shown in Fig. 6, for \\(q_{2}^{GRS}=q_{1}^{GRS}\\), the \\(M=2\\) model does not capture the velocity profile in the interior. However, if \\(q_{2}^{GRS}\\) is changed holding all other parameters fixed, only the interior flow changes (provided \\((D_{x})_{1}-(D_{x})_{2}\\geq 2L_{r}\\)). Thus with all other parameters fixed, a genetic algorithm (Zohdi 2003) was used to determine values for \\(q_{2}^{GRS}\\) and \\((D_{x})_{2}\\) that minimize the \\(L_{2}\\)-norm difference between the model N-S velocity along the E-W axis and the observed velocity in the interior. The parameter values obtained are listed in Table 2 and Fig. 7 shows that the N-S traits are captured for these parameter values. (Models with \\(M=3,4\\) were also tested, and the results were consistent with the ones presented here.)Fig. 8 shows that the E-W traits are captured for these parameter values. The corresponding \\(q^{\\infty}\\) is shown in Fig. 9. The velocities for this trait-capturing model were found to match the GRS velocities in Fig. 1 to within the observational uncertainties.
## 6 Physical implications of best-fit model
Figure 5: N–S velocity along the E–W axis for models with \(M=1\). Crosses are _Voyager_ 1 data from Fig. 3a. Solid black curve has \(q_{1}^{GRS}=10.5\times 10^{-5}\) s\({}^{-1}\) and \(L_{r}=2400\) km. Dashed curve has \(q_{1}^{GRS}=6.5\times 10^{-5}\) s\({}^{-1}\) and \(L_{r}=3800\) km. Solid gray curve has \(q_{1}^{GRS}=19.5\times 10^{-5}\) s\({}^{-1}\) and \(L_{r}=1300\) km. All models have the best-fit value of \((D_{x})_{1}=19500\) km and were computed using the iterative method in appendix B. The N–S velocity along the E–W axis is insensitive to parameters of the far-field because of decoupling as described in §5a.
### Cloud morphology and GRS's potential vorticity anomaly
The models show that the peak north-south velocities along the principal east-west axis occur at \(x=\pm(D_{x})_{1}/2\), where \((D_{x})_{1}\) is the principal east-west diameter of the GRS's potential vorticity anomaly. Thus the best-fit value of \((D_{x})_{1}=19500\) km was inferred from trait 1 in §2c. In fact, the models show that not just the east-west extremities, but the entire boundary of the GRS's potential vorticity anomaly is demarcated by the locations of peak velocity magnitude (\(|\mathbf{v}|\)). This implies that an estimate of the area and aspect ratio of the GRS's potential vorticity anomaly can
Figure 6: N–S velocity along the E–W axis for models with \(M=2\). Each model has \(L_{r}\), \((D_{x})_{1}\), \(q_{1}^{GRS}\), and \((D_{x})_{2}\) set to the best-fit value in Table 2, but the values of \(q_{2}^{GRS}\) for each model are different. The dashed curve has \(q_{2}^{GRS}=q_{1}^{GRS}\), the solid gray curve has \(q_{2}^{GRS}=0.8q_{1}^{GRS}\) and the solid black curve has the best-fit value of \(q_{2}^{GRS}=0.57q_{1}^{GRS}\). Crosses are _Voyager_ 1 data from Fig. 3a. The N–S velocity along the E–W axis is insensitive to parameters of the far-field because of decoupling as described in §5a.
be made directly from the observed velocity field without first determining a best-fit model. Traditionally, the clouds associated with the GRS have been used to infer the area and aspect ratio of the vortex. The east-west diameter of the cloud cover associated with the GRS is \(\approx\)24000-26000 km in length, which is \(\approx\)25% longer than the east-west extent of the potential vorticity anomaly as determined by our best-fit model. The north-south diameter of the cloud cover is also \(\approx\)25% longer than that of the anomaly, so any estimate of the size of the GRS based on its cloud images rather than on its velocity overestimates the area of the potential vorticity anomaly by \(\approx\)50% (both diameters scale by \(\approx\)1.25, and \(1.25^{2}\approx 1.56\)).
Figure 7: Solid line is the N–S velocity along the E–W axis for a model with best-fit parameter values given in Table 2. Crosses are _Voyager_ 1 data from Fig. 3a.
### Rossby deformation radius
The models show the rate of fall-off of the north-south velocity in the outer portion of the high-speed collar to be almost independent of all parameters with the exception of \\(L_{r}\\). The models also show the magnitude of peak north-south velocity \\(V_{\\rm max}^{NS}\\) along the principal east-west axis to be approximately equal to the product of \\(L_{r}\\), the potential vorticity \\(q_{1}^{GRS}\\) in the collar, and a dimensionless number that depends weakly on \\((D_{x})_{1}/L_{r}\\) (see Table 1). Since \\((D_{x})_{1}\\) is known from the separation of the north-south peaks, and \\(V_{\\rm max}^{NS}\\) can be measured to within \\(\\pm 7\\) m s\\({}^{-1}\\), the best-fit values of \\(L_{r}=2400\\) km and \\(q_{1}^{GRS}=10.5\\times 10^{-5}\\) s\\({}^{-1}\\) were determined simultaneously by demanding that the model reproduce the value of \\(V_{\\rm max}^{NS}\\) as well as the velocity fall-off in the outer portion of the collar.
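As a rough numerical check of this scaling, the best-fit values give

\[L_{r}\,q_{1}^{GRS}=(2.4\times 10^{6}\ {\rm m})\times(10.5\times 10^{-5}\ {\rm s}^{-1})=252\ {\rm m\ s}^{-1},\]

so the weakly varying dimensionless factor is simply the observed \(V_{\rm max}^{NS}\) divided by 252 m s\({}^{-1}\); the value of the factor itself is model output and is not quoted here.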
Figure 8: Solid line is the E–W velocity along the N–S axis for a model with best-fit parameter values given in Table 2. Crosses are _Voyager_ 1 data from Fig. 3a.
### Hollowness of GRS's potential vorticity anomaly
The models show a uniform potential vorticity anomaly to be inconsistent with the north-south velocity along the east-west axis in the GRS's high-speed collar. In particular, an anomaly with uniform potential vorticity cannot simultaneously capture the different rates at which the velocity falls-off in the inner and outer portion of the collar. However, a model with core potential vorticity \\(q_{2}^{GRS}\\approx\\)60% of the collar's potential vorticity \\(q_{1}^{GRS}\\), is able to capture both fall-off rates to within the uncertainties. The \"hollowness\"3 of the GRS's potential vorticity anomaly is surprising because other Jovian vortices such as the White Ovals, as well as other geophysical
Figure 9: Solid line is far-field potential vorticity \\(q^{\\infty}\\) for model with best-fit parameters given in Table 2. Dashed line is \\(q^{\\infty}\\) determined in Dowling and Ingersoll (1989).
vortices such as the Antarctic Stratospheric Polar Vortex and Gulf Stream Rings, have uniform \\(|q|\\) or a \\(|q|\\) maximum in their cores. Furthermore, hollow vortices are sometimes unstable (Marcus 1990). Initial-value simulations show that hollow vortices re-distribute their \\(q\\) on an advective time scale so that the final \\(|q|\\) is uniform or has a maximum in the core. This raises the question of how a hollow GRS is stabilized over time scales longer than an advective time scale.
### Non-staircase far-field potential vorticity
The best-fit \\(q^{\\infty}\\) has a positive jump \\(\\Delta Q_{1}\\) of magnitude \\(1.9\\times 10^{-5}\\) s\\({}^{-1}\\) near the eastward-going jet stream and a larger, negative jump \\(\\Delta Q_{2}\\) of magnitude \\(5.6\\times 10^{-5}\\) s\\({}^{-1}\\) near the westward-going jet stream. Due to these opposing jumps, \\(q^{\\infty}\\) in this region does not monotonically increase with \\(y\\). This is surprising because numerical and laboratory experiments (see SS4a) predict a monotonically increasing \"staircase\" profile, with jumps only at eastward-going jet streams. The best-fit profile determined here agrees qualitatively with a profile determined in Dowling and Ingersoll (1989) using an independent method.
### Aspect ratio of GRS's potential vorticity anomaly
The aspect ratio of the GRS's potential vorticity anomaly is defined to be \((D_{x})_{1}/(D_{y})_{1}\), where \((D_{x})_{1}\) and \((D_{y})_{1}\) are the principal east-west and north-south diameters of the anomaly, respectively. (Recall that the shape of the GRS's \(q\) anomaly, and \((D_{y})_{1}\) in particular, are obtained as output from the iterative method in appendix B.) The aspect ratio of the anomaly depends on the ratio of \(q_{1}^{GRS}\) to the shear of the ambient flow in which the GRS is embedded; a larger \(q_{1}^{GRS}\) to ambient shear ratio implies a rounder vortex while a smaller ratio implies a more elongated vortex (Marcus 1990). It should be emphasized that, in general, the ambient shear at the location of the GRS is _not_ identical to the shear of the far-field flow \({\bf v}^{\infty}\). Instead, as shown in Fig. 4, the ambient shear is determined by the _interaction_ of the GRS with the far-field flow. In particular, the middle column of Fig. 4 shows that \({\bf v}^{INT}\) is large and produces a shear with half the magnitude and _opposite_ sign to the shear of \({\bf v}^{\infty}\). Therefore, the effect of \({\bf v}^{INT}\) is to greatly reduce the ambient shear at the location of the GRS. For the best-fit model, the aspect ratio of the anomaly is 2.18. If the mitigating effect of \({\bf v}^{INT}\) on the shear is eliminated by setting \(\Delta Q_{i}=0\), with all other parameters, in particular \((D_{x})_{1}\), unchanged from their best-fit values, then the GRS's anomaly shrinks in the north-south direction (i.e., \((D_{y})_{1}\) decreases) so that its aspect ratio is increased by \(\approx\)28%.
The panel in the left column and bottom row of Fig. 4 explains the functional dependence of \\({\\bf v}^{INT}\\) on \\(y\\) and why its shear is _adverse_ to the local shear of \\({\\bf v}^{\\infty}\\). The panel shows that the effect of deflecting the jet streams and associated isocontours of \\(q^{\\infty}\\) around the GRS is equivalent to placing nearly semi-circular patches of \\(q\\) north and south of the GRS. When the isocontours of \\(q^{\\infty}\\) that are deflected south of the GRS have latitudinal gradient \\(dq^{\\infty}/dy>0\\), the semi-circular patch of \\(q\\) south of the GRS produces anticyclonic shear at the latitude of the GRS. Similarly, if the isocontours that are deflected north of the GRS have \\(dq^{\\infty}/dy>0\\), the semi-circular patch of \\(q\\) north of the GRS produces cyclonic shear at the latitude of the GRS. Thus if the eastward-going and westward-going jet streams of \\(\\mathbf{v}^{\\infty}\\), which are deflected respectively south and north of the GRS, both had \\(dq^{\\infty}/dy>0\\), then the two semi-circular patches of vorticity in Fig. 4 would have opposite sign and form a dipole. The dipole would create a large net westward flow at the latitude of the GRS, but would create little shear (none, if the patches had equal strength) there. However, for the best-fit model, the westward-going jet stream has \\(dq^{\\infty}/dy<0\\) and the eastward-going jet stream has \\(dq^{\\infty}/dy>0\\). Both semi-circular patches are anticyclonic and the result is a large shear that is adverse to the shear of \\(\\mathbf{v}^{\\infty}\\), as shown in Fig. 4.
## 7 Conclusions and Future Work
In this paper we have described a technique for determining quantities of dynamical significance from the observed velocity fields of a long-lived Jovian vortex and its neighboring jet streams. Our approach was to model the flow using the simplest governing equation and the fewest unknown parameters that would reproduce the observed velocity to within its observational uncertainties. For the _Voyager_ 1 data, this is a nine-parameter model that is an exact steady solution to the 1.5-layer reduced gravity QG equations. The nine parameters are the local Rossby deformation radius, the \(q\) in the GRS's high-speed collar, the \(q\) in the GRS's core, the east-west diameter of the GRS's \(q\) anomaly, the east-west diameter of the GRS's core, the size and steepness of two jumps in the far-field \(q\), one located near the latitude of the eastward-going jet stream to the south of the GRS, and the other located near the westward-going jet stream to its north. We determined "best-fit" values for the nine parameters by identifying several "traits" of the observed GRS velocity field and seeking a model that reproduced all those traits.
Perhaps the most surprising result of our study was that the simple model described above was able to reproduce the entire observed velocity field in Fig. 1 to within the uncertainties of 7% (that is, 7 m s\\({}^{-1}\\)). The success of the model is due, in part, to the fact that the GRS must be well-described by the QG equations, and to the fact that the model is an exact steady solution of the governing equations. The success is also due to the fact that the chosen traits are robust and in some sense unique (e.g., hollowness) to the physics associated with the GRS. Finally, a part of the success of the model is due to the relatively large uncertainties (7%) of the _Voyager_ 1 velocities compared to more recent data sets (see below).
Our most important result was to show that the interaction between the GRS and its neighboring jet streams determines the shape of the GRS's \(q\) anomaly. By explicitly computing the interaction, we showed that the effect of the GRS is to bend the jet streams (identified by their jumps in \(q\)) so that they pass around the GRS, and the effect of the bending of the jet streams is to reduce the zonal shear at the location of the GRS. A smaller zonal shear at the location of the GRS compared to the \(q\) of the GRS implies a smaller east-west to north-south aspect ratio for the GRS's \(q\) anomaly. The best-fit model has a positive jump at the eastward-going jet stream and a larger, negative jump at the westward-going jet stream. The bending of these opposing jumps significantly reduces the zonal shear at the GRS, making the aspect ratio of the GRS's \(q\) anomaly \(\approx\)28% smaller (i.e., rounder) than it would be if there were no interaction with the jet streams. It is also interesting to note that due to the opposing jumps, the far-field \(q\) does not monotonically increase from south to north, which is contrary to numerical and laboratory experiments that predict a monotonically increasing "staircase" profile.
The GRS's potential vorticity anomaly was found to be \"hollow\" with core potential vorticity \\(\\approx\\)60% that of the collar; this is curious because hollow vortices are generally unstable. The locations of peak velocity magnitude were found to be accurate markers of the boundary of the GRS's \\(q\\) anomaly, which implies that the area and aspect ratio of the anomaly can be inferred directly from the velocity data. On the other hand, clouds associated with the GRS are not an accurate marker of the anomaly as they differ from the anomaly area by \\(\\approx\\)50%. This suggests that cloud aspect ratios, areas, and morphologies should not be used to determine temporal variability of Jovian vortices.
In devising the model, our philosophy was to include no more complexity than was required to match the observed velocity to within its uncertainties. However, lower-uncertainty measurements of the velocity field using CIV (Asay-Davis et al. 2006) on observations from _Hubble Space Telescope_, _Cassini_, and _Galileo_, may require that the QG approximation be relaxed in favor of shallow-water. Also, if thermal observations (Read et al. 2006a) are to be accounted for, governing equations that permit 3D baroclinic effects will be required. Modeling different data sets would show how the best-fit parameter values evolve with time.
A companion paper to this one shows that the best-fit model is stable and explores the stabilizing effects of the hollow GRS-jet stream interaction. Demonstrating stability is important because hollow vortices are usually unstable. Finally, there are several questions raised by our best-fit model of the GRS that will need to be answered. How did a hollow GRS form? Why are there no other hollow Jovian vortices (for which the velocity has been measured, cf. the current Red Oval and the three White Ovals at 33\\({}^{\\circ}\\)S, which existed between the mid-1930's and 1998)? One possible answer to the second question is that Jovian vortices apart from the GRS lack opposing jumps near their neighboring jet streams and the associated reduction in shear due to the vortex-jet stream interaction. Indeed, a preliminary best-fit model of the White Ovals (Shetty et al. 2006) does not show opposing jumps near the neighboring jet streams.
We thank the NASA Planetary Atmospheres Program for support. Computations were run at the San Diego Supercomputer Center (supported by NSF). One of us (PSM) also thanks the Miller Institute for Basic Research in Science for support.
## Appendix A Method for removing spurious velocities and correcting for the curvature of cloud trajectories
The method involves two stages of iteration. We start with the velocity from Mitchell et al. (1981), in which the trajectories are assumed to be straight lines and the velocities are assumed to be located mid-way between tie-point pairs (the initial and final coordinates of a cloud feature in a pair of images are defined to be a tie-point pair). We then spline the irregularly spaced tie-point velocities onto a uniform grid of size \(0.05^{\circ}\times 0.05^{\circ}\).

The first step of the inner loop of the iteration computes, for each tie-point pair, the _curved_ trajectory that a passive Lagrangian particle would follow from the initial tie-point location \((x_{I},y_{I})\) to its final tie-point location \((x_{F},y_{F})\), using a \(5^{th}\)-order Runge-Kutta integration. To carry out the integration, the velocity field is spline-interpolated from the grid to the off-grid locations where it is required by the integrating scheme. The integration creates a set of closely spaced points, \((x_{i},y_{i})\), \(i=1,2,3,\cdots,N\), along the trajectory, where \((x_{1},y_{1})\equiv(x_{I},y_{I})\). In general, this trajectory does not end with \((x_{N},y_{N})\) equal to \((x_{F},y_{F})\) as desired. We therefore compute a second trajectory \((X_{i},Y_{i})\) starting from the final tie-point location \((x_{F},y_{F})\) and integrate backward in time using the interpolated velocity from grid points. A third trajectory \((\bar{x}_{i},\bar{y}_{i})\equiv[(N-i)(x_{i},y_{i})+(i-1)(X_{i},Y_{i})]/(N-1)\) is then computed as a linear interpolation that, by construction, starts at \((x_{I},y_{I})\) and ends at \((x_{F},y_{F})\). Moreover, because the points along each trajectory are close together, the velocity at each point \((\bar{x}_{i},\bar{y}_{i})\) is well-approximated by the temporal, second-order finite difference derivative using the nearest neighbor trajectory points. A new velocity at the grid points is created from the spline of the velocities along the curved trajectories of all of the tie-point pairs (for each trajectory, we use the velocities at eight approximately equally spaced points along the trajectory). We then return to the first step of the inner loop, using the original set of tie-point pairs but the updated velocity on the grid. The inner loop is iterated until it converges (typically, three iterations).

We then compute the residual of each velocity vector, defined to be the magnitude of the difference between the original, uncorrected tie-point velocity and the converged velocity interpolated by splines to that location. Velocity vectors with residuals exceeding six times the root-mean-squared value of all of the residuals were considered to be spurious, and their tie-points were removed from the original data set. Once the spurious points are removed, the outer loop is complete and the entire process is repeated starting with the new (diminished) set of tie-points. The outer loop was iterated until no more tie-points were removed. The _Voyager_ 1 tie-point set required three iterations of the outer loop and resulted in the removal of 220 of the original 1100 points. The root-mean-squared residual of the iterated velocity is \(\approx 7\) m s\({}^{-1}\), and we use this value as a measure of the uncertainty in the data.
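The trajectory-blending step above is compact but easy to misread, so a minimal sketch of it for a single tie-point pair is given below. It is not the original pipeline: `traj_fwd` and `traj_bwd` are assumed to hold the \(N\) points of the forward and backward Runge-Kutta trajectories (the backward one stored in forward-time order, so that its last point is the final tie-point), sampled at a uniform time step `dt`.

```python
def blend_and_differentiate(traj_fwd, traj_bwd, dt):
    """Blend the forward and backward trajectories into the third trajectory
    (x_bar_i, y_bar_i) of the text, then estimate the velocity at its
    interior points by second-order central differences in time."""
    N = len(traj_fwd)
    blended = []
    for i in range(1, N + 1):                  # i = 1, ..., N as in the text
        w_f, w_b = (N - i) / (N - 1), (i - 1) / (N - 1)
        x = w_f * traj_fwd[i - 1][0] + w_b * traj_bwd[i - 1][0]
        y = w_f * traj_fwd[i - 1][1] + w_b * traj_bwd[i - 1][1]
        blended.append((x, y))                 # starts at (x_I, y_I), ends at (x_F, y_F)
    velocity = []
    for i in range(1, N - 1):                  # skip the two endpoints
        u = (blended[i + 1][0] - blended[i - 1][0]) / (2.0 * dt)
        v = (blended[i + 1][1] - blended[i - 1][1]) / (2.0 * dt)
        velocity.append((u, v))
    return blended, velocity
```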
For comparison, it should be noted that the residual of the _Voyager_ 1 tie-points without correcting for curvature and without removing spurious tie-points is \(\approx 9\) m s\({}^{-1}\), and the residual for the Hubble Space Telescope data (from CIV) for the GRS using the method described here is \(\approx 3\) m s\({}^{-1}\) (Asay-Davis et al. 2006). In the high-speed collar, the residuals in the vorticity derived by differentiating the _Voyager_ 1 velocity are \(\approx 35\%\) of the maximum vorticity.
## Appendix B Iterative method for computing steady-state solutions of the 1.5-layer reduced gravity QG equations
Here we describe an iterative method for computing steady solutions of Eqs. 1-2 subject to periodic boundary conditions4 in \\(x\\) and \\(y\\). The method seeks solutions that consist of a single anomalous patch embedded in a zonal flow, and that are steady when viewed in a frame translating with the patch. Such solutions are of the form \\(q(x,y,t)=q(x-Ut,y)\\), where \\(U\\mathbf{\\hat{x}}\\) is the constant drift velocity of the vortex. Substituting for \\(q\\) in Eq. 1 we obtain:
Footnote 4: While periodicity is natural in the east-west direction, it is artificially imposed in the north-south direction. This is done by embedding the domain of interest (where the velocities are designed to match those of Jupiter) into one with 20% larger latitudinal extent. The flow velocities in the northern and southern extremities of the enlarged domain do not match those of Jupiter, but smoothly interpolate the velocities from the domain of interest to the periodic boundaries.
\\[(\\mathbf{v}-U\\mathbf{\\hat{x}})\\cdot\
abla q=0,\\] (B1)
which implies that isocontours of \(q\) and isocontours of \(\psi+Uy\) are coincident. It is this property that the iterative method exploits to compute uniformly translating solutions. As input, the method requires \(L_{r}\), \(q_{i}^{GRS}\), \((D_{x})_{i}\) for \(i=1,2,\cdots,M\), and \(\Delta Q_{i}\), \(\delta_{i}\) for \(i=1,2\). As output, the method provides \(q^{INT}\), \(U\), and the shape of each vortex patch. Initial guesses must be supplied for the quantities obtained as output. The guesses are then iterated, keeping the input quantities fixed, until the total \(q\) is a uniformly translating solution. The iterative procedure is described below. The domain is \(x\in[-L_{x},L_{x}]\), \(y\in[-L_{y},L_{y}]\). The origin is at the point of intersection of the principal axes.
1. A new \\(\\psi\\) is computed from the current \\(q\\) by inverting the Helmholtz operator in Eq. 2. The current \\(q\\) is the sum of \\(q^{\\infty}\\), the current \\(q^{GRS}\\), and the current \\(q^{INT}\\).
2. A new drift speed \\(U\\) for the anomaly is computed. The drift speed of the anomaly, as derived in Marcus (1993), is given by \\(\\int_{A}q^{GRS}v_{x}dA/\\int_{A}q^{GRS}dA\\), where \\(A\\) is the current area of the anomaly.
3. New isocontours of \\((\\psi+Uy)\\) are computed. The isocontours are streamlines of the current velocity in the translating frame. Streamlines that extend from the western to the eastern boundary of the domain are referred to as _open_ streamlines. Streamlines that are not open are referred to as _closed_.
4. A new \\(q^{INT}\\) is computed. This is done by setting the value of \\(q\\) along an open streamline to the value of \\(q^{\\infty}\\) at the point on the western boundary through which the streamline passes. In other words, if \\(y=s(x)\\) is the equation of an open streamline, then \\(q(x,s(x))\\equiv q^{\\infty}(s(-L_{x}))\\) for \\(x\\in[-L_{x},L_{x}]\\).
5. A new \\(q^{GRS}\\) is computed by computing a new boundary for patches \\(i=1,2,\\cdots,M\\). The new boundary is identified as the closed streamline that passes through \\(x=(D_{x})_{i}/2\\). Note that if the current patch is reflection symmetric about the N-S axis, the value of \\((D_{x})_{i}\\) is conserved. The potential vorticity of each patch \\(q_{i}^{GRS}\\) is held fixed.
The iterations are repeated until \\(U\\) converges to within a desired tolerance, or equivalently, until isocontours of \\((\\psi+Uy)\\) and isocontours of \\(q\\) are coincident. For all calculations in this paper, the initial guess for the shape of a patch was an ellipse with \\((D_{y})_{i}=0.5(D_{x})_{i}\\). The final shapes are reflection symmetric about the N-S axis, but they are not symmetric about the E-W axis. The initial \\(q^{INT}\\) and \\(U\\) were set to zero. The grid resolution was \\(0.05^{\\circ}\\times 0.05^{\\circ}\\). The equilibria are not sensitive to the domain size provided the domain boundaries are at least three deformation radii away from the edge of the outermost patch. We note that it would be interesting to explore initial guesses that are not reflection symmetric about the N-S axis, to see if asymmetry persists for the final solution. Indeed, recent low-uncertainty measurements of the GRS velocity field (Asay-Davis et al., 2006) show asymmetry about the N-S axis. For the _Voyager_ 1 data set however, any asymmetry is much smaller than the uncertainties, so asymmetric models are deferred to future work.
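The two quantitative steps of the loop (the Helmholtz inversion and the drift-speed update) can be written down compactly. The sketch below is an assumed numpy rendering, not the original code: in particular it takes the Helmholtz operator of Eq. 2 to have the standard 1.5-layer QG form \((\nabla^{2}-1/L_{r}^{2})\psi=q\) and inverts it spectrally on the doubly periodic grid; the contouring of \(\psi+Uy\) in steps 3-5 is geometric bookkeeping and is omitted here.

```python
import numpy as np

def invert_helmholtz(q, Lx, Ly, Lr):
    """Step 1: psi from q on the periodic [-Lx,Lx] x [-Ly,Ly] grid,
    assuming Eq. 2 reads (lap - 1/Lr^2) psi = q."""
    ny, nx = q.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=2.0 * Lx / nx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=2.0 * Ly / ny)
    KX, KY = np.meshgrid(kx, ky)
    denom = -(KX**2 + KY**2) - 1.0 / Lr**2   # never zero: -1/Lr^2 at k = 0
    return np.real(np.fft.ifft2(np.fft.fft2(q) / denom))

def velocity(psi, dx, dy):
    """v = z-hat x grad(psi): vx = -dpsi/dy, vy = +dpsi/dx (centred, periodic)."""
    vx = -(np.roll(psi, -1, axis=0) - np.roll(psi, 1, axis=0)) / (2.0 * dy)
    vy = (np.roll(psi, -1, axis=1) - np.roll(psi, 1, axis=1)) / (2.0 * dx)
    return vx, vy

def drift_speed(q_grs, vx):
    """Step 2: U = integral(q_grs vx dA) / integral(q_grs dA), Marcus (1993);
    the uniform-grid area element cancels in the ratio."""
    return float(np.sum(q_grs * vx) / np.sum(q_grs))
```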
## Appendix C Sensitivity of model traits to model parameters
Here we quantify the sensitivity of a model trait to small changes in a model parameter. The results justify the methodology used in §5 to determine the best-fit parameter values. Consider a trait of the N-S velocity along the E-W axis, say the peak magnitude \(V_{\max}^{NS}\). From dimensional analysis it is rigorous to write:
\\[V_{\\max}^{NS}=L_{r}q_{1}^{GRS}F[q_{2}^{GRS}/q_{1}^{GRS},\\Delta Q_{1}/q_{1}^{GRS },\\Delta Q_{2}/q_{1}^{GRS},\\delta_{1}/L_{r},\\delta_{2}/L_{r},(D_{x})_{1}/L_{r},(D_{x})_{2}/L_{r}],\\] (C1)
where \\(F\\) is a dimensionless function of seven dimensionless arguments (note that Eq. C1 is completely general if the value of \\(V_{\\max}^{NS}\\) is independent of \\(\\mathbf{v}^{\\infty}\\) as is suggested by decoupling; otherwise, and in particular, for any trait of the E-W velocity along the N-S axis, the function \\(F\\) would have to include arguments of the dimensionless scalars that parametrize \\(\\mathbf{v}^{\\infty}\\)). The sensitivity of \\(V_{\\max}^{NS}\\) to changes in a particular parameter, say \\(L_{r}\\), was determined by computing the value of \\(V_{\\max}^{NS}\\) for a change in \\(L_{r}\\) of \\(\\pm 5\\%\\) around its best-fit value with all other parameters fixed at their best-fit values, and then using a finite difference scheme to construct the dimensionless partial derivative \\((L_{r}/V_{\\max}^{NS})\\left(\\partial V_{\\max}^{NS}/\\partial L_{r}\\right)\\equiv \\partial ln\\,V_{\\max}^{NS}/\\partial ln\\,L_{r}\\). Dimensionless partial derivatives computed for other traits are listed in Table C1. We consider a trait to be _insensitive_ to any parameter for which the absolute value of its dimensionless partial derivative is much less than unity. Note that the results are consistent with Table 1.
The partial derivatives are not independent. For example, four of the parameters in Eq. C1 have dimensions of inverse time (and we write them as \(\tau_{i}\), \(i=1,2,3,4\)), and five have dimensions of length (and we write them as \(\chi_{i}\), \(i=1,2,\cdots,5\)). Differentiation of Eq. C1 yields the following constraints:
\\[\\sum_{i=1}^{4}\\partial ln\\,V_{\\max}^{NS}/\\partial ln\\,\\tau_{i}\\equiv 1,\\] (C2)
and
\\[\\sum_{i=1}^{4}\\partial ln\\,V_{\\max}^{NS}/\\partial ln\\,\\chi_{i}\\equiv 1.\\] (C3)
In general, a trait \\(L\\) that has dimensions of length, such as the width of the N-S jet, and which depends only on the parameters in Eq. C1, must satisfy the following constraints:
\\[\\sum_{i=1}^{4}\\partial ln\\,L/\\partial ln\\,\\tau_{i}\\equiv 0,\\] (C4)
and
\\[\\sum_{i=1}^{4}\\partial ln\\,L/\\partial ln\\,\\chi_{i}\\equiv 1.\\] (C5)
Table C1 shows that all traits with the exception of \\((D_{y})_{1}\\) satisfy the constraints. The reason \\((D_{y})_{1}\\) does not satisfy the constraints is that it is a trait of the E-W velocity and therefore also depends on parameters associated with the far-field flow \\(\\mathbf{v}^{\\infty}\\).
The uncertainties in the best-fit parameter values may be quantified as follows. The \\(L_{2}\\) norm difference between the best-fit velocity and the velocity in Fig. 1 is computed. The \\(L_{2}\\) norm difference is then recomputed with all parameters fixed at their best-fit values with the exception of parameter \\(L_{r}\\) (say). A curve of the \\(L_{2}\\) norm difference as a function of \\(L_{r}\\) is then computed. By construction, the curve has a minimum at the best-fit value of \\(L_{r}\\). The width of the curve at half-minimum is identified as the uncertainty in \\(L_{r}\\). Since measurements of the GRS velocity using CIV have much lower uncertainties than the _Voyager_ velocity and will soon be available (Asay-Davis et al. 2006), we did not deem it useful to compute parameter uncertainties for the analyses in this paper.
## References
* Asay-Davis et al. (2006) Asay-Davis, X., S. Shetty, and P. Marcus, 2006: Extraction of Velocity Fields from Telescope Image Pairs of Jupiter's Great Red Spot, New Red Oval, and Zonal Jet Streams. _Bulletin of the American Physical Society_, **51**, 116.
* Cho and Polvani (1996) Cho, J.-K. and L. Polvani, 1996: The emergence of jets and vortices in freely evolving, shallow water turbulence on a sphere. _Physics of Fluids_, **8**, 1531-1552.
* Cho et al. (2001) Cho, J. Y.-K., M. de la Torre Juarez, A. P. Ingersoll, and D. G. Dritschel, 2001: High-resolution, three-dimensional model of Jupiter's Great Red Spot. _J. Geophys. Res._, **106**, 5099-5106.
* Dowling (1995) Dowling, T. E., 1995: Dynamics of Jovian atmospheres. _Annual Review of Fluid Mechanics_, **27**, 293-334.
* Dowling and Ingersoll (1988) Dowling, T. E. and A. P. Ingersoll, 1988: Potential vorticity and layer thickness variations in the flow around Jupiter's Great Red Spot and the White Oval BC. _J. Atmos. Sci._, **45**, 1380-1396.
* (1989) -- 1989: Jupiter's Great Red Spot as a shallow water system. _J. Atmos. Sci._, **46**, 3256-3278.
* Ingersoll and Cuong (1981) Ingersoll, A. P. and P. G. Cuong, 1981: Numerical model of long-lived Jovian vortices. _J. Atmos. Sci._, **38**, 2067-2076.
* Limaye (1986) Limaye, S. S., 1986: Jupiter: New estimates of the mean zonal flow at the cloud level. _Icarus_, **65**, 335-352.
* Marcus (1990) Marcus, P. S., 1990: Vortex dynamics in a shearing zonal flow. _J. Fluid Mech._, **215**, 393-430.
* Marcus (1993) -- 1993: Jupiter's great red spot and other vortices. _Rev. Astron. Astrophy._, **31**, 523-573.
* Marcus et al. (2000) Marcus, P. S., T. Kundu, and C. Lee, 2000: Vortex dynamics and zonal flows. _Physics of Plasmas_, **7**, 1630-1640.
* Mitchell et al. (1981) Mitchell, J. L., R. F. Beebe, A. P. Ingersoll, and G. W. Garneau, 1981: Flow fields within Jupiter's Great Red Spot and White Oval BC. _J. Geophys. Res._, **86**, 8751-8757.
* Read et al. (2006a) Read, P. L., P. J. Gierasch, and B. J. Conrath, 2006a: Mapping potential-vorticity dynamics on Jupiter. II: the Great Red Spot from Voyager 1 and 2 data. _Quarterly Journal of the Royal Meteorological Society_, **132**, 1605-1625.
* Read et al. (2006b) Read, P. L., P. J. Gierasch, B. J. Conrath, A. Simon-Miller, T. Fouchet, and Y. Y. H, 2006b: Mapping potential-vorticity dynamics on Jupiter. I: Zonal-mean circulation from Cassini and Voyager 1 data. _Quarterly Journal of the Royal Meteorological Society_, **132**, 1577-1603.
* Shetty et al. (2006) Shetty, S., X. Asay-Davis, and P. S. Marcus, 2006: Modeling and Data Assimilation of the Velocity of Jupiter's Great Red Spot and Red Oval. _Bulletin of the American Physical Society_, **51**, 116.
* Simon-Miller et al. (2002) Simon-Miller, A. A., P. J. Gierasch, R. F. Beebe, B. Conrath, F. M. Flasar, R. K. Achterberg, and the Cassini CIRS Team, 2002: New observational results concerning Jupiter's Great Red Spot. _Icarus_, **158**, 249-266.
* Solomon et al. (1993) Solomon, T., W. Holloway, and H. Swinney, 1993: Shear flow instabilities and Rossby waves in barotropic flow in a rotating annulus. _Physics of Fluids_, **5**, 1971-1982.
* Sommeria et al. (1989) Sommeria, J., S. D. Meyers, and H. L. Swinney, 1989: Laboratory model of a planetary eastward jet. _Nature_, **337**, 58-61.
* Zohdi (2003) Zohdi, T. I., 2003: Genetic design of solids possessing a random-particulate microstructure. _Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences_, **361**, 1021-1043.
Sushil Shetty, 6116 Etcheverry Hall, University of California, Berkeley, CA 94720
D.N. Basu
Variable Energy Cyclotron Centre, 1/AF Bidhan Nagar, Kolkata 700 064, India
P. Roy Chowdhury
C. Samanta
Saha Institute of Nuclear Physics, 1/AF Bidhan Nagar, Kolkata 700 064, India

Physics Department, Virginia Commonwealth University, Richmond, VA 23284-2000, U.S.A.
_PACS numbers_: 21.65.+f, 23.50.+z, 23.60.+e, 23.70.+j, 25.45.De, 26.60.+c
keywords: EoS; Symmetry energy; Neutron star; URCA; Proton Radioactivity.
† E-mail: [email protected]

## 1 Introduction
The investigation of constraints for the high baryonic density behaviour of nuclear matter has recently received new impetus with the plans to construct a new accelerator facility (FAIR) at GSI Darmstadt. The stiffness of a nuclear equation of state (EoS) is characterised by nuclear incompressibility [1] which can be extracted experimentally. Nuclear incompressibility [2, 3] also determines the velocity of sound in nuclear medium for predictions of shock wave generation and propagation. The EoS is of fundamental importance in the theories of nucleus-nucleus collisions at energies where the nuclear incompressibility \\(K_{0}\\) comes into play as well as in the theories of supernova explosions [4]. A widely used experimental method is the determination of the nuclear incompressibility from the observed giant monopole resonances (GMR) [5, 6]. Other recent experimental determinations are based upon the production of hard photons in heavy ion collisions [7] and from isoscalar giant dipole resonances (ISGDR) [8, 9, 10]. From the experimental data of isoscalar giant monopole resonance (ISGMR) conclusion can be drawn that \\(K_{0}\\approx 240\\)\\(\\pm\\) 20 MeV [11]. The general theoretical observation is that the non-relativistic [12] and the relativistic [13] mean field models [14] predict for the bulk incompressibility for the SNM, \\(K_{0}\\), values which are significantly different from one another, _viz._\\(\\approx\\) 220-235 MeV and \\(\\approx\\) 250-270 MeV respectively. Theoretical EoS for the SNM that predict higher \\(K_{0}\\) values \\(\\approx\\) 300 MeV are often called'stiff' EoS whereas those EoS which predict smaller \\(K_{0}\\) values \\(\\approx\\) 200 MeV are termed as'soft' EoS.
The nuclear symmetry energy (NSE) is an important quantity in the equation of state of isospin asymmetric nuclear matter. This currently unknown quantity plays a key role in understanding the structure of systems as diverse as neutron rich nuclei and neutron stars, and it enters as an input to heavy ion reactions [15, 16]. In general, the proposed density dependences can be broadly classified into two different forms: one where the NSE increases monotonically with increasing density ('stiff' dependence) [17], and the other where the NSE increases initially up to normal nuclear density or beyond and then decreases at higher densities ('soft' dependence) [18]. Determination of the exact form of the density dependence of the NSE is rather important for studying the structure of neutron rich nuclei and for studies relevant to astrophysical problems, such as the structure of neutron stars and the dynamics of supernova collapse [19]. A 'stiff' density dependence of the NSE is predicted to lead to a large neutron skin thickness, rapid cooling of a neutron star, and a larger neutron star radius, compared to a 'soft' density dependence. A somewhat 'stiff' EoS of nuclear matter need not lead to a 'stiff' density dependence of the NSE. Modern constraints from mass and mass-radius-relation measurements require a stiff EoS at high densities, whereas flow data from heavy-ion collisions seem to disfavour too stiff a behaviour of the EoS. As a compromise, hybrid EoS [20, 21, 22] with a smooth transition at high density to quark matter are being proposed.
In view of the rather large differences between the various calculations of the NSE present even at subsaturation densities, the question arises whether one can obtain empirical constraints from finite nuclei. Since the degree of isospin diffusion in heavy-ion collisions at intermediate energies is affected by the stiffness of the NSE, these reactions can therefore also provide constraints on the low energy behaviour of the NSE [17]. However, the high density behaviour remains largely undetermined since hardly any data on simultaneous measurements of masses and corresponding radii of neutron stars exist, whereas they can be obtained theoretically by solving the Tolman-Oppenheimer-Volkoff equation. There exist, however, indirect indications such as the neutron star cooling process. Recently, a search for experimental signatures of the moderately high density behaviour of the NSE has been proposed [18] theoretically using several sensitive probes such as the \(\pi^{-}\) to \(\pi^{+}\) ratio, transverse collective flow and its excitation function, as well as the neutron-proton differential flow.
In the present work, we apply recently discovered astrophysical bounds on the high density behaviour of nuclear matter from compact star cooling phenomenology and also show that the theoretical description of nuclear matter using the density dependent M3Y-Reid-Elliott effective interaction [23, 24] gives a value of the nuclear incompressibility which is in excellent agreement with values extracted from experiments. The velocity of sound does not become superluminous since the energy dependence is treated properly for the negative energy domain of nuclear matter. The interaction also provides a symmetry energy that is consistent with the empirical value extracted from the measured masses of nuclei. The microscopic proton-nucleus interaction potential is obtained by folding the density of the nucleus with the DDM3Y effective interaction whose density dependence is determined completely from the nuclear matter calculations. The quantum mechanical tunneling probability is calculated within the WKB framework using these nuclear potentials. These calculations provide reasonable estimates for the observed proton radioactivity lifetimes. Along with earlier works using the same formalism, the present work provides a unified description of radioactivity, scattering, the nuclear EoS and the NSE.
## 2 The nuclear equation of state for symmetric nuclear matter
In the present work, the density dependence of the effective interaction, DDM3Y, is completely determined from nuclear matter calculations. The equilibrium density of the nuclear matter is determined by minimizing the energy per nucleon. In our earlier calculations for the nuclear EoS, the energy dependence of the zero range potential was treated as fixed at a value corresponding to the equilibrium energy per nucleon \(\epsilon_{0}\)[25] and was assumed to vary negligibly with \(\epsilon\) inside nuclear fluid. In the present calculations, the energy variation of the zero range potential is treated more accurately by allowing it to vary freely, but only with the kinetic energy part \(\epsilon^{kin}\) of the energy per nucleon \(\epsilon\), over the entire range of \(\epsilon\). This is not only more plausible, but also yields an excellent result for the incompressibility \(K_{0}\) of the SNM, which no longer suffers from the superluminosity problem.
The constants of density dependence are determined by reproducing the saturation conditions. It is worthwhile to mention here that due to the attractive character of the M3Y forces the saturation condition for cold nuclear matter is not fulfilled without the density dependence. However, a realistic description of nuclear matter properties can be obtained with this density dependent M3Y effective interaction. Therefore, the constants of density dependence have been obtained by reproducing the saturation energy per nucleon and the saturation nucleonic density of the cold SNM. Based on the Hartree or mean field assumption and using the DDM3Y interaction, the expression for the energy per nucleon \(\epsilon\) for symmetric nuclear matter is given by
\\[\\epsilon=[\\frac{3\\hbar^{2}k_{F}^{2}}{10m}]+[\\frac{\\rho J_{v00}C(1-\\beta\\rho^{n })}{2}] \\tag{1}\\]
where Fermi momentum \\(k_{F}=(1.5\\pi^{2}\\rho)^{\\frac{1}{3}}\\), \\(m\\) is the nucleonic mass equal to 938.91897 MeV/c\\({}^{2}\\) and \\(J_{v00}\\) represents the volume integral of the isoscalar part of the M3Y interaction supplemented by the zero-range potential having the form
\\[J_{v00}=J_{v00}(\\epsilon^{kin})=\\int\\int\\int t_{00}^{M3Y}(s, \\epsilon)d^{3}s \\tag{2}\\] \\[= 7999\\frac{4\\pi}{4^{3}}-2134\\frac{4\\pi}{2.5^{3}}+J_{00}(1-\\alpha \\epsilon^{kin})\\ \\mbox{where}\\ J_{00}=-276\\ \\mbox{MeV}\\ \\mbox{fm}^{3}.\\]
where \\(\\epsilon^{kin}=\\frac{3\\hbar^{2}k_{F}^{2}}{10m}\\) is the kinetic energy part of the energy per nucleon \\(\\epsilon\\) given by Eq.(1).
The isoscalar \\(t_{00}^{M3Y}\\) and the isovector \\(t_{01}^{M3Y}\\) components of M3Y interaction potentials [24, 26] supplemented by zero range potentials are given by \\(t_{00}^{M3Y}(s,\\epsilon)=7999\\frac{\\exp(-4s)}{4s}-2134\\frac{\\exp(-2.5s)}{2.5s }-276(1-\\alpha\\epsilon)\\delta(s)\\) and \\(t_{01}^{M3Y}(s,\\epsilon)=-4886\\frac{\\exp(-4s)}{4s}+1176\\frac{\\exp(-2.5s)}{2.5s }+228(1-\\alpha\\epsilon)\\delta(s)\\) respectively, where the energy dependence parameter \\(\\alpha\\)=0.005/MeV. The DDM3Y effective NN interaction is given by \\(v_{0i}(s,\\rho,\\epsilon)=t_{0i}^{M3Y}(s,\\epsilon)g(\\rho)\\) where the density dependence \\(g(\\rho)=C(1-\\beta\\rho^{n})\\) and the constants \\(C\\) and \\(\\beta\\) of density dependence have been obtained from the saturation condition \\(\\frac{\\partial\\epsilon}{\\partial\\rho}=0\\) at \\(\\rho=\\rho_{0}\\) and \\(\\epsilon=\\epsilon_{0}\\) where \\(\\rho_{0}\\) and \\(\\epsilon_{0}\\) are the saturation density and the saturation energy per nucleon respectively. Eq.(1)can be differentiated with respect to \\(\\rho\\) to yield equation
\\[\\frac{\\partial\\epsilon}{\\partial\\rho}=[\\frac{\\hbar^{2}k_{F}^{2}}{5m\\rho}]+\\frac{J_ {v00}C}{2}[1-(n+1)\\beta\\rho^{n}]-\\alpha J_{00}C[1-\\beta\\rho^{n}][\\frac{\\hbar^{2 }k_{F}^{2}}{10m}]. \\tag{3}\\]
The equilibrium density of the cold SNM is determined from the saturation condition. Then Eq.(1) and Eq.(3) with the saturation condition \\(\\frac{\\partial\\epsilon}{\\partial\\rho}=0\\) can be solved simultaneously for fixed values of the saturation energy per nucleon \\(\\epsilon_{0}\\) and the saturation density \\(\\rho_{0}\\) of the cold SNM to obtain the values of \\(\\beta\\) and \\(C\\). The constants of density dependence \\(\\beta\\) and \\(C\\), thus obtained, are given by
\\[\\beta=\\frac{[(1-p)+(q-\\frac{3q}{p})]\\rho_{0}^{-n}}{[(3n+1)-(n+1)p+(q-\\frac{3q}{ p})]} \\tag{4}\\]
\\[\\mbox{where}\\ \\ \\ \\ p=\\frac{[10m\\epsilon_{0}]}{[\\hbar^{2}k_{F_{0}}^{2}]},\\ q= \\frac{2\\alpha\\epsilon_{0}J_{00}}{J_{v00}^{0}} \\tag{5}\\]
where \\(J_{v00}^{0}=J_{v00}(\\epsilon_{0}^{kin})\\) which means \\(J_{v00}\\) at \\(\\epsilon^{kin}=\\epsilon_{0}^{kin}\\), the kinetic energy part of the saturation energy per nucleon of SNM, \\(k_{F_{0}}=[1.5\\pi^{2}\\rho_{0}]^{1/3}\\) and
\\[C=-\\frac{[2\\hbar^{2}k_{F_{0}}^{2}]}{5mJ_{v00}^{0}\\rho_{0}[1-(n+1)\\beta\\rho_{0} ^{n}-\\frac{q\\hbar^{2}k_{F_{0}}^{2}(1-\\beta\\rho_{0}^{n})}{10m\\epsilon_{0}}]}, \\tag{6}\\]
respectively. It is quite obvious that the constants of density dependence \\(C\\) and \\(\\beta\\) obtained by this method depend on the saturation energy per nucleon \\(\\epsilon_{0}\\), the saturation density \\(\\rho_{0}\\), the index \\(n\\) of the density dependent part and on the strengths of the M3Y interaction through the volume integral \\(J_{v00}^{0}\\).
## 3 The incompressibility of symmetric nuclear matter
The incompressibility or the compression modulus of symmetric nuclear matter, which is a measure of the curvature of an EoS at saturation density and is defined as \(k_{F}^{2}\frac{\partial^{2}\epsilon}{\partial k_{F}^{2}}\mid_{k_{F}=k_{F_{0}}}\), measures the stiffness of an EoS. The second derivative \(\frac{\partial^{2}\epsilon}{\partial\rho^{2}}\) is given by
\\[\\frac{\\partial^{2}\\epsilon}{\\partial\\rho^{2}} = [-\\frac{\\hbar^{2}k_{F}^{2}}{15m\\rho^{2}}]-[\\frac{J_{v00}Cn(n+1) \\beta\\rho^{n-1}}{2}] \\tag{7}\\] \\[-\\alpha J_{00}C[1-(n+1)\\beta\\rho^{n}][\\frac{\\hbar^{2}k_{F}^{2}}{5 m\\rho}]+[\\frac{\\alpha J_{00}C(1-\\beta\\rho^{n})\\hbar^{2}k_{F}^{2}}{30m\\rho}]\\]and therefore the incompressibility \\(K_{0}\\) of the cold SNM which is defined as
\\[K_{0}=k_{F}^{2}\\frac{\\partial^{2}\\epsilon}{\\partial k_{F}^{2}}\\mid_{k_{F}=k_{F_{0} }}=9\\rho^{2}\\frac{\\partial^{2}\\epsilon}{\\partial\\rho^{2}}\\mid_{\\rho=\\rho_{0}} \\tag{8}\\]
can be theoretically obtained as
\\[K_{0} = [-\\frac{3\\hbar^{2}k_{F_{0}}^{2}}{5m}]-[\\frac{9J_{v00}^{0}Cn(n+1) \\beta\\rho_{0}^{n+1}}{2}] \\tag{9}\\] \\[-9\\alpha J_{00}C[1-(n+1)\\beta\\rho_{0}^{n}][\\frac{\\rho_{0}\\hbar^{ 2}k_{F_{0}}^{2}}{5m}]+[\\frac{3\\rho_{0}\\alpha J_{00}C(1-\\beta\\rho_{0}^{n}) \\hbar^{2}k_{F_{0}}^{2}}{10m}].\\]
The calculations are performed using the values of the saturation density \\(\\rho_{0}\\)=0.1533 fm\\({}^{-3}\\)[2] and the saturation energy per nucleon \\(\\epsilon_{0}\\)=-15.26 MeV [27, 28] for the SNM obtained from the co-efficient of the volume term of Bethe-Weizsacker mass formula [29, 30] which is evaluated by fitting the recent experimental and estimated atomic mass excesses from Audi-Wapstra-Thibault atomic mass table [31] by minimizing the mean square deviation incorporating correction for the electronic binding energy [32]. In a similar recent work, including surface symmetry energy term, Wigner term, shell correction and proton form factor correction to Coulomb energy also, \\(a_{v}\\) turns out to be 15.4496 MeV [33] (\\(a_{v}=\\)14.8497 MeV when \\(A^{0}\\) and \\(A^{1/3}\\) terms are also included). Using the usual values of \\(\\alpha\\)=0.005 MeV\\({}^{-1}\\) for the parameter of energy dependence of the zero range potential and \\(n\\)=2/3, the values obtained for the constants of density dependence \\(C\\) and \\(\\beta\\) and the SNM incompressibility \\(K_{0}\\) are 2.2497, 1.5934 fm\\({}^{2}\\) and 274.7 MeV respectively. The saturation energy per nucleon is the volume energy coefficient and the value of -15.26\\(\\pm\\)0.52 MeV covers, more or less, the entire range of values obtained for \\(a_{v}\\) for which now the values of \\(C\\)=2.2497\\(\\pm\\)0.0420, \\(\\beta\\)=1.5934\\(\\pm\\)0.0085 fm\\({}^{2}\\) and the SNM incompressibility \\(K_{0}\\)=274.7\\(\\pm\\)7.4 MeV.
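For transparency, the closed-form chain above (Eqs. (2), (4)-(6) and (9)) is short enough to evaluate directly. The following minimal numerical sketch (assumed here, not the authors' code) reproduces the quoted constants and incompressibility from the stated inputs:

```python
import math

hbarc = 197.327      # MeV fm
m     = 938.91897    # MeV/c^2, nucleon mass as in the text
alpha = 0.005        # MeV^-1, energy dependence of the zero-range term
n     = 2.0 / 3.0    # index of the density dependence
rho0  = 0.1533       # fm^-3, saturation density
eps0  = -15.26       # MeV, saturation energy per nucleon
J00   = -276.0       # MeV fm^3, isoscalar zero-range strength

kF0      = (1.5 * math.pi**2 * rho0) ** (1.0 / 3.0)           # fm^-1
eps0_kin = 0.3 * (hbarc * kF0)**2 / m                          # MeV

# Volume integral of the isoscalar M3Y + zero-range part, Eq. (2)
Jv00_0 = (7999.0 * 4.0 * math.pi / 4.0**3
          - 2134.0 * 4.0 * math.pi / 2.5**3
          + J00 * (1.0 - alpha * eps0_kin))                    # MeV fm^3

# Constants of density dependence, Eqs. (4)-(6)
p = 10.0 * m * eps0 / (hbarc * kF0)**2
q = 2.0 * alpha * eps0 * J00 / Jv00_0
beta = (((1.0 - p) + (q - 3.0 * q / p)) * rho0**(-n)
        / ((3.0 * n + 1.0) - (n + 1.0) * p + (q - 3.0 * q / p)))
C = -2.0 * (hbarc * kF0)**2 / (5.0 * m * Jv00_0 * rho0 *
     (1.0 - (n + 1.0) * beta * rho0**n
      - q * (hbarc * kF0)**2 * (1.0 - beta * rho0**n) / (10.0 * m * eps0)))

# SNM incompressibility, Eq. (9)
K0 = (-0.6 * (hbarc * kF0)**2 / m
      - 4.5 * Jv00_0 * C * n * (n + 1.0) * beta * rho0**(n + 1.0)
      - 9.0 * alpha * J00 * C * (1.0 - (n + 1.0) * beta * rho0**n)
            * rho0 * (hbarc * kF0)**2 / (5.0 * m)
      + 0.3 * rho0 * alpha * J00 * C * (1.0 - beta * rho0**n)
            * (hbarc * kF0)**2 / m)

print(f"C = {C:.4f}, beta = {beta:.4f} fm^2, K0 = {K0:.1f} MeV")
# Expected, up to rounding: C ~ 2.25, beta ~ 1.59 fm^2, K0 ~ 275 MeV
```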
The theoretical estimate \\(K_{0}\\) of the incompressibility of infinite SNM obtained from present approach using DDM3Y is about 270 MeV. The theoretical estimate of \\(K_{0}\\) from the refractive \\(\\alpha\\)-nucleus scattering is about 240-270 MeV [34, 35] and that by infinite nuclear matter model (INM) [36] claims a well defined and stable value of \\(K_{0}=288\\pm 20\\) MeV and present theoretical estimate is in reasonably close agreement with the value obtained by INM which rules out any values lower than 200 MeV. Present estimate for the incompressibility \\(K_{0}\\) of the infinite SNM is in good agreement with the experimental value of \\(K_{0}=300\\pm 25\\) MeV obtained from the giant monopole resonance (GMR) [5] and with the the recent experimental determination of \\(K_{0}\\) based upon the production of hard photons in heavy ion collisions which led to the experimental estimate of \\(K_{0}=290\\pm 50\\) MeV [7]. However, the experimental values of \\(K_{0}\\) extracted from the isoscalar giant dipole resonance (ISGDR) are claimed to be smaller [10]. The present non-relativistic mean field model estimate for the nuclear incompressibility \\(K_{0}\\) for SNM using DDM3Y interaction is rather close to the theoretical estimates obtained using relativistic mean field models and close to the lower limit of the older experimental values [5] and close to the upper limit of the recent values [6] extracted from experiments.
Considering the status of the experimental determination of the SNM incompressibility from data on the compression modes ISGMR and ISGDR of nuclei, it can be inferred [11] that violations of self consistency in HF-RPA calculations of the strength functions of giant resonances result in shifts in the calculated values of the centroid energies which may be larger in magnitude than the current experimental uncertainties. In fact, the predictions of \(K_{0}\) lying in the range of 210-220 MeV were due to the use of not fully self-consistent Skyrme calculations [11]. Correcting for this drawback, Skyrme parametrizations of the SLy4 type predict \(K_{0}\) values in the range of 230-240 MeV [11]. Moreover, it is possible to build bona fide Skyrme forces so that the SNM incompressibility is close to the relativistic value, namely 250-270 MeV. Therefore, from the ISGMR experimental data the conclusion can be drawn that \(K_{0}\approx\) 240 \(\pm\) 20 MeV. The ISGDR data tend to point to lower values [8, 9, 10] for \(K_{0}\). However, there is consensus that the extraction of \(K_{0}\) is in this case more problematic for various reasons. In particular, the maximum cross-section for the ISGDR decreases very strongly at high excitation energy and may drop below the current experimental sensitivity for excitation energies [11] above 30 and 26 MeV for \({}^{116}\)Sn and \({}^{208}\)Pb, respectively. The present value of 274.7\(\pm\)7.4 MeV for the incompressibility \(K_{0}\) of SNM obtained using the DDM3Y interaction is, therefore, an excellent theoretical result.
The constant of density dependence \\(\\beta\\)=1.5934\\(\\pm\\)0.0085 fm\\({}^{2}\\), which has the dimension of cross section for \\(n\\)=2/3, can be interpreted as the isospin averaged effective nucleon-nucleon interaction cross section in ground state symmetric nuclear medium. For a nucleon in ground state nuclear matter \\(k_{F}\\approx\\) 1.3 fm\\({}^{-1}\\) and \\(q_{0}\\sim\\hbar k_{F}c\\approx\\) 260 MeV and the present result for the 'in medium' effective cross section is reasonably close to the value obtained from a rigorous Dirac-Brueckner-Hartree-Fock [37] calculations corresponding to such \\(k_{F}\\) and \\(q_{0}\\) values which is \\(\\approx\\) 12 mb. Using the value of the constant of density dependence \\(\\beta\\)=1.5934\\(\\pm\\)0.0085 fm\\({}^{2}\\) corresponding to the standard value of the parameter \\(n\\)=2/3 along with the nucleonic density of 0.1533 fm\\({}^{-3}\\), the value obtained for the nuclear mean free path \\(\\lambda\\) is about 4 fm which is in excellent agreement [38] with that obtained using another method.
## 4 The nuclear equation of state for asymmetric nuclear matter
The EoS for asymmetric nuclear matter is calculated by adding to the isoscalar part the isovector component [39] of the M3Y interaction [40], which does not contribute to the EoS of SNM. The EoS for SNM and pure neutron matter (PNM) are then used to calculate the NSE. In this section, implications of the density dependence of this NSE for neutron stars are discussed, and possible constraints on the density dependence obtained from finite nuclei are also compared.
Assuming an interacting Fermi gas of neutrons and protons, with isospin asymmetry \(X=\frac{\rho_{n}-\rho_{p}}{\rho_{n}+\rho_{p}}\), \(\rho=\rho_{n}+\rho_{p}\), where \(\rho_{n}\), \(\rho_{p}\) and \(\rho\) are the neutron, proton and nucleonic densities respectively, the energy per nucleon for isospin asymmetric nuclear matter can be derived as
\\[\\epsilon(\\rho,X)=[\\frac{3\\hbar^{2}k_{F}^{2}}{10m}]F(X)+(\\frac{ \\rho J_{v}C}{2})(1-\\beta\\rho^{n})\\] \\[=[\\frac{3\\hbar^{2}k_{F}^{2}}{10m}]F(X)-[\\frac{\\hbar^{2}k_{F_{0}}^ {2}}{5m}][\\frac{\\rho}{\\rho_{0}}][\\frac{J_{v}}{J_{v00}^{0}}][\\frac{(1-\\beta \\rho^{n})}{1-(n+1)\\beta\\rho_{0}^{n}-\\frac{q\\hbar^{2}k_{F_{0}}^{2}(1-\\beta\\rho _{0}^{n})}{10m\\epsilon_{0}}}] \\tag{10}\\]
where \\(k_{F}=(1.5\\pi^{2}\\rho)^{\\frac{1}{3}}\\) which is equal to Fermi momentum in case of SNM, the kinetic energy per nucleon \\(\\epsilon^{kin}=[\\frac{3\\hbar^{2}k_{F}^{2}}{10m}]F(X)\\) with \\(F(X)=[\\frac{(1+X)^{5/3}+(1-X)^{5/3}}{2}]\\) and \\(J_{v}=J_{v00}+X^{2}J_{v01}\\) and \\(J_{v01}\\) represents the volume integral of the isovector part of the M3Y interaction supplemented by the zero-range potential having the form
\\[J_{v01}=J_{v01}(\\epsilon^{kin})=\\int\\int\\int t_{01}^{M3Y}(s, \\epsilon)d^{3}s\\] \\[= -4886\\frac{4\\pi}{4^{3}}+1176\\frac{4\\pi}{2.5^{3}}+J_{01}(1-\\alpha \\epsilon^{kin})\\ {\\rm where}\\ J_{01}=228\\ {\\rm MeV}\\ {\\rm fm}^{3}, \\tag{11}\\] \\[\\frac{\\partial\\epsilon}{\\partial\\rho}=[\\frac{\\hbar^{2}k_{F}^{2}} {5m\\rho}]F(X)+\\frac{J_{v}C}{2}[1-(n+1)\\beta\\rho^{n}]-\\alpha JC[1-\\beta\\rho^{n} ][\\frac{\\hbar^{2}k_{F}^{2}}{10m}]F(X) \\tag{12}\\]
where \\(J=J_{00}+X^{2}J_{01}\\) and
\\[\\frac{\\partial^{2}\\epsilon}{\\partial\\rho^{2}}=[-\\frac{\\hbar^{2}k_ {F}^{2}}{15m\\rho^{2}}]F(X)-[\\frac{J_{v}Cn(n+1)\\beta\\rho^{n-1}}{2}]\\] \\[-\\alpha JC[1-(n+1)\\beta\\rho^{n}][\\frac{\\hbar^{2}k_{F}^{2}}{5m\\rho }]F(X)+[\\frac{\\alpha JC(1-\\beta\\rho^{n})\\hbar^{2}k_{F}^{2}}{30m\\rho}]F(X). \\tag{13}\\]The pressure \\(P\\) and the energy density \\(\\varepsilon\\) of nuclear matter with isospin asymmetry \\(X\\) are given by
\\[P= \\rho^{2}\\frac{\\partial\\epsilon}{\\partial\\rho}=[\\rho\\frac{\\hbar^{2}k _{F}^{2}}{5m}]F(X)+\\rho^{2}\\frac{J_{v}C}{2}[1-(n+1)\\beta\\rho^{n}] \\tag{14}\\] \\[-\\rho^{2}\\alpha JC[1-\\beta\\rho^{n}][\\frac{\\hbar^{2}k_{F}^{2}}{10m }]F(X),\\]
and
\\[\\varepsilon=\\rho(\\epsilon+mc^{2})=\\rho[(\\frac{3\\hbar^{2}k_{F}^{2}}{10m})F(X)+ (\\frac{\\rho J_{v}C}{2})(1-\\beta\\rho^{n})+mc^{2}], \\tag{15}\\]
respectively, and thus the velocity of sound \\(v_{s}\\) in nuclear matter with isospin asymmetry \\(X\\) is given by
\\[\\frac{v_{s}}{c}=\\sqrt{\\frac{\\partial P}{\\partial\\varepsilon}}=\\sqrt{\\frac{[2 \\rho\\frac{\\partial\\epsilon}{\\partial\\rho}+\\rho^{2}\\frac{\\partial^{2} \\epsilon}{\\partial\\rho^{2}}]}{[\\epsilon+mc^{2}+\\rho\\frac{\\partial\\epsilon}{ \\partial\\rho}]}}. \\tag{16}\\]
The incompressibilities for isospin asymmetric nuclear matter are evaluated at saturation densities \\(\\rho_{s}\\) with the condition \\(\\frac{\\partial\\epsilon}{\\partial\\rho}=0\\) which corresponds to vanishing pressure. The incompressibility \\(K_{s}\\) for isospin asymmetric nuclear matter is therefore expressed as
\\[K_{s}=-\\frac{3\\hbar^{2}k_{F_{s}}^{2}}{5m}F(X)-\\frac{9J_{v}^{s}Cn( n+1)\\beta\\rho_{s}^{n+1}}{2} \\tag{17}\\] \\[-9\\alpha JC[1-(n+1)\\beta\\rho_{s}^{n}][\\frac{\\rho_{s}\\hbar^{2}k_{ F_{s}}^{2}}{5m}]F(X)+[\\frac{3\\rho_{s}\\alpha JC(1-\\beta\\rho_{s}^{n})\\hbar^{2}k_{ F_{s}}^{2}}{10m}]F(X).\\]
Here \\(k_{F_{s}}\\) means that the \\(k_{F}\\) is evaluated at saturation density \\(\\rho_{s}\\). \\(J_{v}^{s}=J_{v00}^{s}+X^{2}J_{v01}^{s}\\) is the \\(J_{v}\\) at \\(\\epsilon^{kin}=\\epsilon_{s}^{kin}\\) which is the kinetic energy part of the saturation energy per nucleon \\(\\epsilon_{s}\\) and \\(J=J_{00}+X^{2}J_{01}\\).
In Table-1, the incompressibility of isospin asymmetric nuclear matter \(K_{s}\) is provided as a function of the isospin asymmetry parameter \(X\), using the usual value of \(n\)=2/3 and energy dependence parameter \(\alpha\)=0.005 MeV\({}^{-1}\). The magnitude of the incompressibility \(K_{s}\) decreases with the isospin asymmetry \(X\) due to the lowering of the saturation densities \(\rho_{s}\) with \(X\) as well as the decrease in the EoS curvature. At high isospin asymmetry \(X\), the energy per nucleon of isospin asymmetric nuclear matter does not have a minimum, signifying that such matter can never be bound by itself by the nuclear interaction. However, \(\beta\) equilibrated nuclear matter, which is highly neutron rich asymmetric nuclear matter, exists in the cores of neutron stars since its E/A is lower than that of SNM at high densities; it is unbound by the nuclear force but can be bound by the high gravitational field realizable inside neutron stars.
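As a consistency sketch (again an assumption, not the authors' code), the entries of Table 1 can be regenerated by solving the saturation condition \(\frac{\partial\epsilon}{\partial\rho}=0\) of Eq. (12) by bisection and evaluating \(K_{s}=9\rho_{s}^{2}\,\partial^{2}\epsilon/\partial\rho^{2}\mid_{\rho_{s}}\), which is equivalent to Eq. (17) since the first derivative vanishes at \(\rho_{s}\):

```python
import math

hbarc, m = 197.327, 938.91897
alpha, n = 0.005, 2.0 / 3.0
C, beta  = 2.2497, 1.5934
J00, J01 = -276.0, 228.0

def deps_drho(rho, X):
    """First density derivative of the energy per nucleon, Eq. (12)."""
    kF   = (1.5 * math.pi**2 * rho) ** (1.0 / 3.0)
    FX   = 0.5 * ((1.0 + X)**(5.0/3.0) + (1.0 - X)**(5.0/3.0))
    ekin = 0.3 * (hbarc * kF)**2 / m * FX
    Jv   = (7999*4*math.pi/4**3 - 2134*4*math.pi/2.5**3 + J00*(1 - alpha*ekin)
            + X*X*(-4886*4*math.pi/4**3 + 1176*4*math.pi/2.5**3
                   + J01*(1 - alpha*ekin)))
    J    = J00 + X*X*J01
    return ((hbarc*kF)**2 / (5.0*m*rho) * FX
            + 0.5*Jv*C*(1.0 - (n + 1.0)*beta*rho**n)
            - alpha*J*C*(1.0 - beta*rho**n)*(hbarc*kF)**2/(10.0*m)*FX)

def saturation(X, lo=0.05, hi=0.25):
    """rho_s from deps/drho = 0 (bisection) and K_s = 9 rho_s^2 d2eps/drho2."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if deps_drho(lo, X) * deps_drho(mid, X) <= 0.0:
            hi = mid
        else:
            lo = mid
    rs = 0.5 * (lo + hi)
    h  = 1.0e-4
    d2 = (deps_drho(rs + h, X) - deps_drho(rs - h, X)) / (2.0 * h)
    return rs, 9.0 * rs * rs * d2

for X in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):   # the asymmetries of Table 1
    rs, Ks = saturation(X)
    print(f"X={X:.1f}: rho_s={rs:.4f} fm^-3, K_s={Ks:.1f} MeV")
```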
The theoretical estimates of the pressure \\(P\\) and velocity of sound \\(v_{s}\\) of SNM and isospin asymmetric nuclear matter including PNM are calculated as functions of nucleonic density \\(\\rho\\) and energy density \\(\\varepsilon\\) using the usual value of 0.005 MeV\\({}^{-1}\\) for the parameter \\(\\alpha\\) of energy dependence of the zero range potential and also the standard value of the parameter \\(n\\)=2/3. Unlike other non-relativistic EoS, present EoS does not suffer from superluminosity at all for SNM and for PNM problem of super luminosity occurs only at very high densities (\\(\\rho>12\\rho_{0}\\)), higher than those encountered at the centres of neutron stars. In our earlier calculations for the nuclear EoS where the energy dependence of the zero range potential was treated as fixed at a value corresponding to the equilibrium energy per nucleon \\(\\epsilon_{0}\\)[28] and assumed to vary negligibly with \\(\\epsilon\\) inside nuclear fluid caused superluminosity problems for both SNM and PNM and at much lower densities [25] like the EoS obtained using the \\(v_{14}+TNI\\) interaction [41]. In the present calculations the energy variation of the zero range potential is treated more accurately allowing it to vary freely but only with the kinetic energy part \\(\\epsilon^{kin}\\) of the energy per nucleon \\(\\epsilon\\) over the entire range of \\(\\epsilon\\).
In Fig.-1 the energy per nucleon \\(\\epsilon\\) of SNM and PNM are plotted as functions of \\(\\rho\\). The continuous lines represent the curves for the present calculations using saturation energy per nucleon of -15.26 MeV whereas the dotted lines represent the same using \\(v_{14}+TNI\\) interaction [41] and the dash-dotted lines represent the same for the A18 model using variational chain summation (VCS) [42] for the SNM and PNM. The minimum of the energy per nucleon equaling the saturation energy -15.26 MeV for the present calculations occurs precisely at the saturation density \\(\\rho_{0}\\)=0.1533 fm\\({}^{-3}\\) since equilibrium density \\(\\rho_{0}\\) of the cold SNM is determined from the saturation condition \\(\\frac{\\partial\\epsilon}{\\partial\\rho}\\)=0 at \\(\\rho\\)=\\(\\rho_{0}\\) and \\(\\epsilon\\)=\\(\\epsilon_{0}\\)
\\begin{table}
\\begin{tabular}{c c c} \\hline \\hline \\(X\\) & \\(\\rho_{s}\\) & \\(K_{s}\\) \\\\ \\hline & fm\\({}^{-3}\\) & MeV \\\\ \\hline
0.0 & 0.1533 & 274.7 \\\\
0.1 & 0.1525 & 270.4 \\\\
0.2 & 0.1500 & 257.7 \\\\
0.3 & 0.1457 & 236.6 \\\\
0.4 & 0.1392 & 207.6 \\\\
0.5 & 0.1300 & 171.2 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Incompressibility of isospin asymmetric nuclear matter using the usual value of \\(n\\)=2/3 and energy dependence parameter \\(\\alpha\\)=0.005 MeV\\({}^{-1}\\).
Fig.-2 presents the plots of the energy per nucleon \\(\\epsilon\\) of nuclear matter with different isospin asymmetry X as functions of \\(\\rho/\\rho_{0}\\) for the present calculations. The pressure \\(P\\) of SNM and PNM are plotted in Fig.-3 and Fig.-4 respectively as functions of \\(\\rho/\\rho_{0}\\). The continuous lines represent the present calculations whereas the dotted lines represent the same using the A18 model using variational chain summation (VCS) of Akmal et al. [42] for the SNM and PNM. The dash-dotted line of Fig.-3 represents plot of \\(P\\) versus \\(\\rho/\\rho_{0}\\) for SNM for RMF using NL3 parameter set [43] and the area enclosed by the coninuous line corresponds to the region of pressures consistent with the experimental flow data [44]. It is interesting to note that the RMF-NL3 incompressibility for SNM is 271.76 MeV [45] which about the same as 274.7\\(\\pm\\)7.4 MeV obtained for the present calculation. The areas enclosed by the continuous and the dashed lines in Fig.-4 correspond to the pressure regions for neutron matter consistent with the experimental flow data after inclusion of the pressures from asymmetry terms with weak (soft NM) and strong (stiff NM) density dependences, respectively [44]. In Fig.-5 the velocity of sound \\(v_{s}\\) in SNM and PNM and the energy density \\(\\varepsilon\\) of SNM and PNM for the present calculations are plotted as functions of \\(\\rho/\\rho_{0}\\). The continuous lines represent the velocity of sound in units of \\(10^{-2}\\)c whereas the dotted lines represent energy density in MeV fm\\({}^{-3}\\).
## 5 The nuclear symmetry energy
The nuclear symmetry energy \\(E_{sym}(\\rho)\\) represents a penalty levied on the system as it departs from the symmetric limit of equal number of protons and neutrons and can be defined as the energy required per nucleon to change the SNM to pure neutron matter (PNM) [46]
\\[E_{sym}(\\rho)=\\epsilon(\\rho,1)-\\epsilon(\\rho,0) \\tag{18}\\]
and therefore using Eq.(10) for \\(X=1\\) and Eq.(1), the NSE can be given by
\\[E_{sym}(\\rho)=(2^{2/3}-1)\\frac{3}{5}E_{F}^{0}(\\frac{\\rho}{\\rho_{0}})^{2/3}+ \\frac{C}{2}\\rho(1-\\beta\\rho^{n})J_{v01} \\tag{19}\\]
where the Fermi energy \\(E_{F}^{0}=\\frac{\\hbar^{2}k_{F_{0}}^{2}}{2m}\\) for the SNM at ground state.
### Nuclear symmetry energy at high baryonic density
The first term of the right hand side is the kinetic energy contribution with density dependence of \\(a_{0}\\rho^{2/3}\\) whereas the second term arising due to nuclear interaction has a density dependence of the form of \\(a_{4}\\rho^{n+5/3}\\) since \\(J_{\
u 01}\\) is a function of \\(\\epsilon^{kin}\\) which varies as \\(\\rho^{2/3}\\) and \\(a_{0},a_{1},a_{2},a_{3}\\) and \\(a_{4}\\) are constants with respect to the nucleonic density \\(\\rho\\) or the parameter \\(n\\). If one uses an alternative definition [47] of \\(E_{sym}(\\rho)=\\frac{1}{2}\\frac{\\partial^{2}\\epsilon(\\rho,X)}{\\partial X^{2}} \\mid_{X=0}\\) for the nuclear symmetry energy, only the term \\((2^{2/3}-1)\\) of the above equation gets replaced by 5/9 [which is about five percent less than \\((2^{2/3}-1)\\)] and reduces only the kinetic energy contribution.
In Fig.-6 plots of E/A for SNM, PNM and NSE as functions of \\(\\rho/\\rho_{0}\\) are shown for \\(n\\)=2/3. The density dependence of the NSE at subnormal density from isospin diffusion [17] in heavy-ion collisions at intermediate energies has an approximate form of \\(31.6[\\frac{\\rho}{\\rho_{0}}]^{1.05}\\) MeV. This low energy behaviour of NSE \\(\\approx 31.6[\\frac{\\rho}{\\rho_{0}}]^{1.05}\\) MeV is close to that obtained using Eq.(19) at subnormal densities. At higher densities the present NSE using DDM3Y interaction peaks at \\(\\rho\\approx 1.95\\rho_{0}\\) and becomes negative at \\(\\rho\\approx 4.7\\rho_{0}\\). A negative NSE at high densities implies that the pure neutron matter becomes the most stable state. Consequently, pure neutron matter exists near the core of the neutron stars.
### Nuclear symmetry energy at low baryonic density
The volume symmetry energy coefficient \\(S_{v}\\) extracted from the masses of finite nuclei provides a constraint on the nuclear symmetry energy at nuclear density \\(E_{sym}(\\rho_{0})\\). The value of \\(S_{v}=30.048\\pm 0.004\\) MeV recently extracted [48] from the measured atomic mass excesses of 2228 nuclei is reasonably close to the theoretical estimate of the value of NSE at the saturation density \\(E_{sym}(\\rho_{0})\\)=30.71\\(\\pm\\)0.26 MeV obtained from the present calculations using DDM3Y interaction. Instead of Eq.(18), if an alternative definition [47]\\(E_{sym}(\\rho)=\\frac{1}{2}\\frac{\\partial^{2}\\epsilon(\\rho,X)}{\\partial X^{2}} \\mid_{X=0}\\) of the nuclear symmetry energy is used, then its value is 30.03\\(\\pm\\)0.26 MeV. In ref. [49] it is between 29.10 MeV to 32.67 MeV and that obtained by the liquid droplet model calculation of ref. [50] is 27.3 MeV whereas in ref. [51] it is 28.0 MeV. It should be mentioned that the value of the volume symmetry parameter \\(S_{v}\\) in some advanced mass description [52] is close to the present value which with their \\(-\\kappa_{vol.}b_{vol}=S_{v}\\) equals 29.3 MeV. The value of NSE at nuclear saturation density \\(\\approx 30\\) MeV, therefore, seems well established empirically. Theoretically different parametrizations of the relativistic mean-field (RMF) models, which fit obseravables for isospin symmetric nuclei well, lead to a relatively wide range of predictions 24-40 MeV for \\(E_{sym}(\\rho_{0})\\). The present result of 30.71\\(\\pm\\)0.26 MeV of the mean field calculation is close to the results of the calculation using Skyrme interaction SkMP (29.9 MeV) [53], Av18+\\(\\delta v\\)+UIX\\({}^{*}\\) variational calculation (30.1 MeV) [42] and field theoretical calculation DD-F (31.6 MeV) [46].
### Compact star constraints
The knowledge of the density dependence of nuclear symmetry energy is important for understanding not only the stucture of radioactive nuclei but also many important issues in nuclear astrophysics, such as nucleosynthesis during presupernova evolution of massive stars and the cooling of protoneutron stars. A neutron star without neutrino trappings can be considered as a \\(n,p,e\\) matter consisting of neutrons (n), protons (p) and electrons (e). The neutrinos do not accumulate in neutron stars because of its very small interaction probability and correspondingly very high mean free path [54, 55]. The \\(\\beta\\) equilibrium proton fraction \\(x_{\\beta}\\) [\\(=\\rho_{p}/\\rho\\)] is determined by [56]
\\[\\hbar c(3\\pi^{2}\\rho x_{\\beta})^{1/3}=4E_{sym}(\\rho)(1-2x_{\\beta}). \\tag{20}\\]
The equilibrium proton fraction is therefore entirely determined by the NSE. The \\(\\beta\\) equilibrium proton fraction calculated using the present NSE is plotted as function of \\(\\rho/\\rho_{0}\\) in Fig.-7. The maximum of \\(x_{\\beta}\\approx 0.044\\) occurs at \\(\\rho\\approx 1.35\\rho_{0}\\) and goes to zero at \\(\\rho\\approx 4.5\\rho_{0}\\) for \\(n\\)=2/3. The NSE extracted from the isospin diffusion in the intermediate energy heavy-ion collisions, having the approximate form of 31.6[\\(\\frac{\\rho}{\\rho_{0}}\\)]\\({}^{1.05}\\) MeV, provides a monotonically increasing \\(\\beta\\) equilibrium proton fraction and therefore can not be extended beyond normal nuclear matter densities. Present calculation, using NSE given by Eq.(19), of the \\(\\beta\\) equilibrium proton fraction forbids the direct URCA process since the equilibrium proton fraction is always less than 1/9 [56] which is consistent with the fact that there are no strong indications that fast cooling occurs. Moreover, recently it has been concluded theoretically that an acceptable EoS of asymmetric nuclear matter (such as \\(\\beta\\) equilibrated neutron matter) shall not allow the direct URCA process to occur in neutron stars with masses below 1.5 solar masses [46]. Although a recent experimental observation suggests high heat conductivity and enhanced core cooling process indicating the enhanced level of neutrino emission but that can be via the direct URCA process or Cooper-pairing [57]. Also observations of massive compact stars in the mass range of 2.1\\(\\pm\\)0.2 solar mass to a 1\\(\\sigma\\) confidence level (and 2.1\\({}^{+0.4}_{-0.5}\\) solar mass to a 2\\(\\sigma\\) confidence level) and 2.0\\(\\pm\\)0.1 solar mass and the lower bound for the mass-radius relation of isolated pulsar RX J1856 imply a rather'stiff' nuclear EoS [46]. The present NSE is'soft' because it increases initially with nucleonic density up to about two times the normal nuclear density and then decreases monotonically at higher densities. It is interesting to observe that although the SNM incompressibility is slightly on the higher side and the present EoS is'stiff', yet the present calculations provide a rather'soft' nuclear symmetry energy and thus satisfy the astrophysical constraints.
\\begin{table}
\\begin{tabular}{l l l l l l l l l} \\hline \\hline Parent & \\(l\\) & \\(Q\\) & \\(1^{st}\\) tpt & \\(2^{nd}\\) tpt & \\(3^{rd}\\) tpt & Expt. & This work & UFM \\\\ \\({}^{A}Z\\) & \\(\\hbar\\) & MeV & \\(R_{1}\\)[fm] & \\(R_{a}\\)[fm] & \\(R_{b}\\)[fm] & \\(log_{10}T(s)\\) & \\(log_{10}T(s)\\) & \\(log_{10}T(s)\\) \\\\ \\hline \\({}^{105}Sb\\) & 2 & 0.491(15) & 1.43 & 6.69 & 134.30 & 2.049\\({}^{+0.058}_{-0.067}\\) & 1.90(45) & 2.085 \\\\ \\({}^{145}Tm\\) & 5 & 1.753(10) & 3.20 & 6.63 & 56.27 & -5.409\\({}^{+0.109}_{-0.146}\\) & -5.28(7) & -5.170 \\\\ \\({}^{147}Tm\\) & 5 & 1.071(3) & 3.18 & 6.63 & 88.65 & 0.591\\({}^{+0.125}_{-0.175}\\) & 0.83(4) & 1.095 \\\\ \\({}^{147}Tm^{*}\\) & 2 & 1.139(5) & 1.44 & 7.28 & 78.97 & -3.444\\({}^{+0.046}_{-0.051}\\) & -3.46(6) & -3.199 \\\\ \\({}^{150}Lu\\) & 5 & 1.283(4) & 3.21 & 6.67 & 78.23 & -1.180\\({}^{+0.055}_{-0.064}\\) & -0.74(4) & -0.859 \\\\ \\({}^{150}Lu^{*}\\) & 2 & 1.317(15) & 1.45 & 7.33 & 71.79 & -4.523\\({}^{+0.620}_{-0.301}\\) & -4.46(15) & -4.556 \\\\ \\({}^{151}Lu\\) & 5 & 1.255(3) & 3.21 & 6.69 & 78.41 & -0.896\\({}^{+0.011}_{-0.012}\\) & -0.82(4) & -0.573 \\\\ \\({}^{151}Lu^{*}\\) & 2 & 1.332(10) & 1.46 & 7.35 & 69.63 & -4.796\\({}^{+0.026}_{-0.027}\\) & -4.96(10) & -4.715 \\\\ \\({}^{155}Ta\\) & 5 & 1.791(10) & 3.21 & 6.78 & 57.83 & -4.921\\({}^{+0.125}_{-0.125}\\) & -4.80(7) & -4.637 \\\\ \\({}^{156}Ta\\) & 2 & 1.028(5) & 1.47 & 7.37 & 94.18 & -0.620\\({}^{+0.082}_{-0.101}\\) & -0.47(8) & -0.461 \\\\ \\({}^{156}Ta^{*}\\) & 5 & 1.130(8) & 3.21 & 6.76 & 90.30 & 0.949\\({}^{+0.100}_{-0.129}\\) & 1.50(10) & 1.446 \\\\ \\({}^{157}Ta\\) & 0 & 0.947(7) & 0.00 & 7.55 & 98.95 & -0.523\\({}^{+0.135}_{-0.198}\\) & -0.51(12) & -0.126 \\\\ \\({}^{160}Re\\) & 2 & 1.284(6) & 1.45 & 7.43 & 77.67 & -3.046\\({}^{+0.075}_{-0.056}\\) & -3.08(7) & -3.109 \\\\ \\({}^{161}Re\\) & 0 & 1.214(6) & 0.00 & 7.62 & 79.33 & -3.432\\({}^{+0.045}_{-0.049}\\) & -3.53(7) & -3.231 \\\\ \\({}^{161}Re^{*}\\) & 5 & 1.338(7) & 3.22 & 6.84 & 77.47 & -0.488\\({}^{+0.056}_{-0.065}\\) & -0.75(8) & -0.458 \\\\ \\({}^{164}Ir\\) & 5 & 1.844(9) & 3.20 & 6.91 & 59.97 & -3.959\\({}^{+0.190}_{-0.139}\\) & -4.08(6) & -4.193 \\\\ \\({}^{165}Ir^{*}\\) & 5 & 1.733(7) & 3.21 & 6.93 & 62.35 & -3.469\\({}^{+0.082}_{-0.100}\\) & -3.67(5) & -3.428 \\\\ \\({}^{166}Ir\\) & 2 & 1.168(8) & 1.47 & 7.49 & 87.51 & -0.824\\({}^{+0.166}_{-0.273}\\) & -1.19(10) & -1.160 \\\\ \\({}^{166}Ir^{*}\\) & 5 & 1.340(8) & 3.22 & 6.91 & 80.67 & -0.076\\({}^{+0.125}_{-0.176}\\) & 0.06(9) & 0.021 \\\\ \\({}^{167}Ir\\) & 0 & 1.086(6) & 0.00 & 7.68 & 91.08 & -0.959\\({}^{+0.024}_{-0.025}\\) & -1.35(8) & -0.943 \\\\ \\({}^{167}Ir^{*}\\) & 5 & 1.261(7) & 3.22 & 6.92 & 83.82 & 0.875\\({}^{+0.098}_{-0.127}\\) & 0.54(8) & 0.890 \\\\ \\({}^{171}Au\\) & 0 & 1.469(17) & 0.00 & 7.74 & 69.09 & -4.770\\({}^{+0.185}_{-0.151}\\) & -5.10(16) & -4.794 \\\\ \\({}^{171}Au^{*}\\) & 5 & 1.718(6) & 3.21 & 7.01 & 64.25 & -2.654\\({}^{+0.054}_{-0.060}\\) & -3.19(5) & -2.917 \\\\ \\({}^{177}Tl\\) & 0 & 1.180(20) & 0.00 & 7.76 & 88.25 & -1.174\\({}^{+0.191}_{-0.349}\\) & -1.44(26) & -0.993 \\\\ \\({}^{177}Tl^{*}\\) & 5 & 1.986(10) & 3.22 & 7.10 & 57.43 & -3.347\\({}^{+0.095}_{-0.122}\\) & -4.64(6) & -4.379 \\\\ \\({}^{185}Bi\\) & 0 & 1.624(16) & 0.00 & 7.91 & 65.71 & -4.229\\({}^{+0.068}_{-0.081}\\) & -5.53(14) & -5.184 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 2: Comparison between experimentally measured and theoretically calculated half-lives of spherical proton emitters. The asterisk symbol (*) in the experimental \\(Q\\) values denotes the isomeric state. The experimental \\(Q\\) values, half lives and \\(l\\) values are from ref. [58]. The results of the present calculations using the isoscalar and isovector components of DDM3Y folded potentials are compared with the experimental values and with the results of UFM estimates [59]. Experimental errors in \\(Q\\)[58] values and corresponding errors in calculated half-lives are inside parentheses.
Folding model analyses using effective interaction whose density dependence determined from nuclear matter calculation
Microscopic proton-nucleus interaction potentials are obtained by single folding the density of the nucleus with M3Y effective interaction supplemented by a zero-range pseudo-potential for exchange along with the density dependence. Parameters of the density dependence, \\(C\\)=2.2497 and \\(\\beta\\)=1.5934 fm\\({}^{2}\\), obtained here from the nuclear matter calculations assuming kinetic energy dependence of zero range potential, are used.
The half lives of the decays of spherical nuclei away from proton drip line by proton emissions are estimated theoretically. The half life of a parent nucleus decaying via proton emission is calculated using the WKB barrier penetration probability. The WKB method is found quite satisfactory and even better than the S-matrix method for calculating half widths of the \\(\\alpha\\) decay of superheavy elements [60]. For the present calculations, the zero point vibration energies used here are given by eqn.(5) of ref. [61] extended to protons and the experimental \\(Q\\) values [58] are used. Spherical charge distributions are used for Coulomb interaction potentials. The same set of data of ref. [59] has been used for the present calculations using \\(C\\)=2.2497 and \\(\\beta\\)=1.5934 fm\\({}^{2}\\) and presented in Table-2. The agreement of the present calculations with a wide range of experimental data for the proton radioactivity lifetimes are reasonable.
Since the density dependence of the effective projectile-nucleon interaction was found to be fairly independent of the projectile [62], as long as the projectile-nucleus interaction was amenable to a single-folding prescription, the density dependent effects on the nucleon-nucleon interaction were factorized into a target term times a projectile term and used successfully in case of \\(\\alpha\\) radioctivity of nuclei [63] including superheavies [64, 65, 66] and the cluster radioactivity [63, 67]. The calculations were performed for elastic and inelastic scattering of protons from nuclei \\({}^{18}Ne\\), \\({}^{18}O\\), \\({}^{20}O\\), \\({}^{22}O\\) using \\(C\\)=2.07 and \\(\\beta\\)=1.624 fm\\({}^{2}\\)[68, 69]. It is needless to say that the present value of \\(\\beta\\)=1.5934\\(\\pm\\)0.0085 fm\\({}^{2}\\), obtained by treating the energy variation of the zero range potential properly by allowing it to vary freely with the kinetic energy, which changes by about one percent neither changes the shape of the potential significantly nor the quality of fit or the values extracted for the nuclear deformations. However, since the value of \\(C\\), which acts as the overall normalisation constant for the nuclear potentials, changes by about six percent, causes changes but only to the renormalizations required for the potentials. Therefore, it provides reasonable description for elastic and inelastic scattering of protons and the deformation parameters extracted from these analyses are in good agreement with the quadrupole deformations obtained from the available experimental \\(B(E2)\\) values [70].
Summary and conclusion
A mean field calculation is carried out to obtain the equation of state of nuclear matter from a density dependent M3Y interaction (DDM3Y). The microscopic nuclear potentials are obtained by folding the DDM3Y effective interaction with the densities of interacting nuclei. The energy per nucleon is minimized to obtain ground state of the symmetric nuclear matter (SNM). The constants of density dependence of the effective interaction are obtained by reproducing the saturation energy per nucleon and the saturation density of SNM. The EoS of asymmetric nuclear matter is calculated by adding to the isoscalar part, the isovector component of M3Y interaction. The SNM and pure neutron matter EoS are used to calculate the nuclear symmetry energy which is found to be consistent with that extracted from the isospin diffusion in heavy-ion collisions at intermediate energies. The microscopic proton-nucleus interaction potential is obtained by folding the density of the nucleus with DDM3Y effective interaction whose density dependence is determined completely from the nuclear matter calculations.
In this work the energy variation of the exchange potential is treated properly in the negative energy domain of nuclear matter. The EoS of SNM, thus obtained, is free from the superluminosity problem encountered in some previous prescriptions. Moreover, the result of the present calculation for the compression modulus for the infinite symmetric nucler matter is in better agreement with that extracted from experiments. The calculated \\(\\beta\\) equilibrium proton fraction forbids direct URCA process which is consistent with the fact that there are no strong indications that fast cooling occurs. The results of the present calculations using single folded microscopic potentials for the proton-radioactivity lifetimes are in good agreement over a wide range of experimental data. We find that it also provides reasonable description for the elastic and inelastic scattering of protons and the deformation parameters extracted from the analyses are in good agreement with the available results. The results of the present calculations using microscopic potentials for half life calculations of \\(\\alpha\\) decays are found to be in excellent agreement with experimental data. These calculations also provide reliable estimates for the observed \\(\\alpha\\) decay lifetimes of the newly synthesized superheavy elements. It is, therefore, pertinent to conclude that a unified description of the symmetric and asymmetric nuclear matter, elastic and inelastic scattering, and cluster, \\(\\alpha\\) and proton radioactivities is achieved. With the energies and interaction rates foreseen at FAIR, the compressed baryonic matter (CBM) will create highest baryon densities in nucleus-nucleus collisions to explore the properties of superdense baryonic matter and the in-medium modifications of hadrons.
## References
* [1] J.P. Blaizot, Phys. Rep. **65**, 171 (1980).
* [2] C. Samanta, D. Bandyopadhyay and J.N. De, Phys. Lett. **B 217**, 381 (1989).
* [3] D. Bandyopadhyay, C. Samanta, S. K. Samaddar and J. N. De, Nucl. Phys. **A 511**, 1 (1990).
* [4] G.F. Bertsch and S. Das Gupta, Phys. Rep. **160**, 189 (1988).
* [5] M.M. Sharma, W.T.A. Borghols, S. Brandenburg, S. Crona, A. van der Woude and M.N. Harakeh, Phys. Rev. **C 38**, 2562 (1988).
* [6] Y.W.Lui, D.H. Youngblood, H.L. Clark, Y. Tokimoto and B. John, Acta Phys. Pol. **B 36**, 1107 (2005).
* [7] Y. Schutz et al., Nucl. Phys. **A 599**, 97c (1996).
* [8] Y.W. Lui, D.H. Youngblood, Y. Tokimoto, H.L. Clark and B. John, Phys. Rev. **C 69**, 034611 (2004).
* [9] D.H. Youngblood, Y.W. Lui, B. John, Y. Tokimoto, H.L. Clark and X. Chen, Phys. Rev. **C 69**, 054312 (2004).
* [10] U. Garg, Nucl. Phys. **A 731**, 3 (2004).
* [11] S. Shlomo, V.M. Kolomietz and G. Colo, Eur. Phys. Jour. **A 30**, 23 (2006).
* [12] G. Colo, N. Van Giai, J. Meyer, K. Bennaceur and P. Bonche, Phys. Rev. **C 70**, 024307 (2004).
* [13] R. Brockmann and R. Machleidt, Phys. Rev. **C 42**, 1965 (1990).
* [14] G. Colo and N. Van Giai, Nucl. Phys. **A 731**, 15 (2004).
* [15] B.A. Li, C.M. Ko and W. Bauer, Int. J. Mod. Phys. **E 7**, 147 (1998).
* [16] Bao-An Li, Phys. Rev. Lett. **88**, 192701 (2002).
* [17] Lie-Wen Chen, Che Ming Ko and Bao-an Li, Phys. Rev. Lett. **94**, 032701 (2005).
* [18] Bao-an Li, Nucl. Phys. **A 708**, 365 (2002).
* [19]_Isospin Physics in Heavy Ion Collisions at Intermediate Energies_, Edited by Bao-an Li and W. Schroder (Nova Science, New York, 2001).
* [20] H. Grigorian, D. Blaschke and T. Klahn, arXiv:astro-ph/0611595.
* [21] Gordon Baym, AIP Conf. Proc. **892**, 8 (2007).
* [22] G.F. Burgio, Jour. Phys. **G 35**, 014048 (2008).
* [23] G.Bertsch, J.Borysowicz, H.McManus, W.G.Love, Nucl. Phys. **A 284**, 399 (1977).
* [24] G.R. Satchler and W.G. Love, Phys. Reports **55**, 183 (1979).
* [25] D.N. Basu, P. Roy Chowdhury and C. Samanta, Acta Phys. Pol. **B 37**, 2869 (2006).
* [26] D.N. Basu, P. Roy Chowdhury and C. Samanta, Phys. Rev. **C 72**, 051601(R) (2005).
* [27] P. Roy Chowdhury, C. Samanta, D.N. Basu, Mod. Phys. Letts. **A 21**, 1605 (2005).
* [28] P. Roy Chowdhury and D.N. Basu, Acta Phys. Pol. **B 37**, 1833 (2006).
* [29] C.F. Weizsacker, Z.Physik **96**, 431 (1935).
* [30] H.A. Bethe, R.F. Bacher, Rev. Mod. Phys. **8**, 82 (1936).
* [31] G. Audi, A.H. Wapstra and C. Thibault, Nucl. Phys. **A 729**, 337 (2003).
* [32] D. Lunney, J.M. Pearson and C. Thibault, Rev. Mod. Phys. **75**, 1021 (2003).
* [33] G.Royer and C.Gautier, Phys. Rev. **C 73**, 067302 (2006).
* [34] Dao T. Khoa, G.R. Satchler and W. von Oertzen, Phys. Rev. **C 56**, 954 (1997).
* [35] Dao T. Khoa, W. von Oertzen, H.G. Bohlen and S. Ohkubo, Jour. Phys. **G 34**, R111 (2007).
* [36] L. Satpathy, V.S. Uma Maheswari and R.C. Nayak, Phys. Rep. **319**, 85 (1999).
* [37] F. Sammarruca and P. Krastev, Phys. Rev. **C 73**, 014001 (2006).
* [38] B. Sinha, Phys. Rev. Lett. **50**, 91 (1983).
* [39] A.M. Lane, Nucl. Phys. **35**, 676 (1962).
* [40] G.R. Satchler, _Int. series of monographs on Physics_, Oxford University Press, _Direct Nuclear reactions_, 470 (1983).
* [41] B. Friedman and V.R. Pandharipande, Nucl. Phys. **A 361**, 502 (1981).
* [42] A. Akmal, V.R. Pandharipande and D.G. Ravenhall, Phys. Rev. **C 58**, 1804 (1998).
* [43] G.A. Lalazissis, J. Konig, P. Ring, Phys. Rev. **C 55**, 540 (1997).
* [44] P. Danielewicz, R. Lacey and W.G. Lynch, Science **298**, 1592 (2002).
* [45] G. A. Lalazissis, S. Raman, and P. Ring, At. Data and Nucl. Data Tables **71**, 1 (1999).
* [46] T. Klahn et al., Phys. Rev. **C 74**, 035802 (2006).
* [47] R.B. Wiringa, V. Fiks and A. Fabrocini, Phys. Rev. **C 38**, 1010 (1988).
* [48] T. Mukhopadhyay and D.N. Basu, Acta Phys. Pol. **B 38**, 3225 (2007).
* [49] P. Danielewicz, Nucl. Phys. **A 727**, 233 (2003).
* [50] A.W. Steiner, M. Prakash, J.M. Lattimer and P.J. Ellis, Phys. Rep. **411**, 325 (2005).
* [51] A.E.L. Dieperink and D van Neck, Jour. Phys. Conf. Series **20**, 160 (2005).
* [52] K. Pomorski and J. Dudek, Phys. Rev. **C 67**, 044316 (2003).
* [53] L. Bennour et al., Phys. Rev. C **40**, 2834 (1989).
* [54] C. Shen, U. Lombardo, N. Van Giai and W. Zuo, Phys. Rev. **C 68**, 055802 (2003).
* [55] J. Margueron, I. Vidana and I. Bombaci, Phys. Rev. **C 68**, 055806 (2003).
* [56] J.M. Lattimer, C.J. Pethick, M. Prakash and P. Haensel, Phys. Rev. Lett. **66**, 2701 (1991).
* [57] E.M. Cackett et al., Mon. Not. Roy. Astron. Soc. **372**, 479 (2006).
* [58] A.A. Sonzogni, Nucl. Data Sheets **95**, 1 (2002).
* [59] M. Balasubramaniam and N. Arunachalam, Phys. Rev. **C 71**, 014603 (2005).
* [60] S. Mahadevan, P. Prema, C.S. Shastry and Y.K. Gambhir, Phys. Rev. **C 74**, 057601 (2006).
* [61] D.N. Poenaru, W. Greiner, M. Ivascu, D. Mazilu and I.H. Plonski, Z. Phys. **A 325**, 435 (1986).
* [62] D.K. Srivastava, D.N. Basu and N.K. Ganguly, Phys. Lett. **124 B** (1983) 6.
* [63] D.N. Basu, Phys. Lett. **B 566**, 90 (2003).
* [64] P. Roy Chowdhury, C. Samanta and D.N. Basu, Phys. Rev. **C 73**, 014612 (2006).
* [65] P. Roy Chowdhury, D.N. Basu and C. Samanta, Phys. Rev. **C 75**, 047306 (2007).
* [66] C. Samanta, P. Roy Chowdhury and D.N. Basu, Nucl. Phys. **A789**, 142 (2007).
* [67] D.N. Basu, Phys. Rev. **C 66**, 027601 (2002).
* [68] D. Gupta and D.N. Basu, Nucl. Phys. **A 748**, 402 (2005).
* [69] D. Gupta, E. Khan and Y. Blumenfeld, Nucl. Phys. **A 773**, 230 (2006).
* [70] S. Raman et al., Atomic Data and Nuclear Data Tables **36**, 1 (1987).
Figure 1: The energy per nucleon \\(\\epsilon\\) = E/A of SNM (spin and isospin symmetric nuclear matter) and PNM (pure neutron matter) as functions of \\(\\rho\\). The continuous lines represent curves for the present calculations using DDM3Y interaction, the dotted lines represent the same using \\(v_{14}+TNI\\) interaction [41] and the dash-dotted lines represent the same for A18 model using variational chain summation (VCS) [42].
Figure 2: The energy per nucleon \\(\\epsilon\\)=E/A of nuclear matter with different isospin asymmetry X as functions of \\(\\rho/\\rho_{0}\\) for the present calculations using DDM3Y interaction.
Figure 3: The pressure \\(P\\) of SNM (spin and isospin symmetric nuclear matter) as a function of \\(\\rho/\\rho_{0}\\). The continuous lines represent the present calculations using \\(\\epsilon_{0}\\) = -15.26\\(\\pm\\)0.52 MeV. The dotted line represents the same using the A18 model using variational chain summation (VCS) of Akmal et al. [42], the dash-dotted line represents the RMF calculations using NL3 parameter set [43] whereas the area enclosed by the continuous line corresponds to the region of pressures consistent with the experimental flow data [44] for SNM.
Figure 4: The pressure \\(P\\) of PNM (pure neutron matter) as a function of \\(\\rho/\\rho_{0}\\). The continuous lines represent the present calculations using \\(\\epsilon_{0}\\) = -15.26\\(\\pm\\)0.52 MeV. The dotted line represents the same using the A18 model using variational chain summation (VCS) of Akmal et al. [42] whereas the areas enclosed by the continuous and the dashed lines correspond to the pressure regions for neutron matter consistent with the experimental flow data after inclusion of the pressures from asymmetry terms with weak (soft NM) and strong (stiff NM) density dependences, respectively [44].
Figure 5: The velocity of sound \\(v_{s}\\) in SNM (spin and isospin symmetric nuclear matter) and PNM (pure neutron matter) and the energy density \\(\\varepsilon\\) of SNM and PNM as functions of \\(\\rho/\\rho_{0}\\) for the present calculations using DDM3Y interaction. The continuous lines represent the velocity of sound in units of \\(10^{-2}c\\) whereas the dotted lines represent energy density in MeV fm\\({}^{-3}\\).
Figure 6: The energy per nucleon \\(\\epsilon\\)=E/A of SNM (spin and isospin symmetric nuclear matter), PNM (pure neutron matter) and NSE (nuclear symmetry energy \\(E_{sym}\\)) are plotted as functions of \\(\\rho/\\rho_{0}\\) for the present calculations using DDM3Y interaction.
Figure 7: The \\(\\beta\\) equilibrium proton fraction calculated with NSE (nuclear symmetry energy) obtained using DDM3Y interaction is plotted as a function of \\(\\rho/\\rho_{0}\\). | A mean field calculation is carried out to obtain the equation of state (EoS) of nuclear matter from a density dependent M3Y interaction (DDM3Y). The energy per nucleon is minimized to obtain ground state of the symmetric nuclear matter (SNM). The constants of density dependence of the effective interaction are obtained by reproducing the saturation energy per nucleon and the saturation density of SNM. The energy variation of the exchange potential is treated properly in the negative energy domain of nuclear matter. The EoS of SNM, thus obtained, is not only free from the superluminosity problem but also provides excellent estimate of nuclear incompressibility. The EoS of asymmetric nuclear matter is calculated by adding to the isoscalar part, the isovector component of M3Y interaction. The SNM and pure neutron matter EoS are used to calculate the nuclear symmetry energy which is found to be consistent with that extracted from the isospin diffusion in heavy-ion collisions at intermediate energies. The \\(\\beta\\) equilibrium proton fraction calculated from the symmetry energy and related theoretical findings are consistent with the constraints derived from the observations on compact stars. | Provide a brief summary of the text. |
Beata Faller
Department of Mathematics, University of California, Berkeley, CA 94720 [email protected]
Fabio Pardi
Department of Mathematics, University of California, Berkeley, CA 94720 [email protected]
Mike Steel
Department of Mathematics, University of California, Berkeley, CA 94720 [email protected]
3 August 2007
## 1. Introduction
The current rapid rate of extinction of many diverse species has focused attention on predicting the loss of future biodiversity. There are numerous ways to measure the 'biodiversity' of a group of species, and one which recognises the evolutionary linkages between taxa (for example, species) is phylogenetic diversity ([3], [4], [8]). Briefly, given a subset of taxa, the PD (phylogenetic diversity) score of that subset is the sum of the lengths of the edges of the evolutionary tree that connects this subset (formal definitions are given shortly). Here the 'length' of an edge may refer to the amount of genetic change on that edge, its temporal duration or perhaps other features (such as morphological diversity).
Under the simplest models of speciation, each taxon has the same probability of surviving until some future time, and the survival of taxa are treated as independent events; this is a simple type of 'field of bullets' model ([9], [12], [14]). This model is quite restrictive ([11]) and a more realistic extension allows each species to have its own survival probability - this is the model we study in this paper. Under this model, we would like to be able to predict the PD score of the set of taxa that survive. This 'future PD' is a random variable with a well-defined distribution, but to date, most attention has focused on just its mean (that is, the expected PD score of the species that survive). For example, the 'Noah's Ark problem' ([6, 15, 10]) attempts to maximize expected future PD by allocating resources that increase the survival probabilities in a constrained way. Clearly, one could consider other properties of the distribution of future PD - for example the probability (let us call it the \\(PL_{0}\\) value) that future PD is less than some critical lower limit (\\(L_{0}\\)). Given different conservation strategies, we may wish to maximize expected PD or minimize the \\(PL_{0}\\) value. A natural question is how are these two quantities related?
To address these sorts of questions, we need to know the full distribution of future PD. In this paper, we show that for large trees, future PD is (asymptotically) normally distributed. Given the increasing trend in biology of constructing and analysing phylogenetic trees that contain large numbers of species (\\(10^{2}-10^{3}\\)), we see this result as timely. Our work was also motivated by the suggestive form of distributions obtained by simulating future PD by sampling 12-leaf subtrees randomly from 64-leaf trees from Nee and May ([9], see also [14]). To formally prove the normal limit law requires some care, as future PD is not a sum of independent random variables (even though the survival events for the taxa at the leaves are treated independently); consequently, the usual central limit theory does not immediately apply.
This limit law has some useful consequences for applications. For example, it means that for a large tree, the \\(PL_{0}\\) value can be estimated by the area under a normal curve to the left of \\(\\frac{L_{0}-\\mathbb{E}[PD]}{\\sqrt{\\mathrm{Var}[PD]}}\\). In particular, we see that the relation between the \\(PL_{0}\\) value and expected future PD (\\(\\mathbb{E}[PD]\\)) involves scaling by the standard deviation of future PD (so strategies that aim to maximize expected future PD may not necessarily minimize the \\(PL_{0}\\) value).
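To illustrate the practical payoff, once \\(\\mathbb{E}[PD]\\) and \\(\\text{Var}[PD]\\) have been computed for a given tree (see Equation (3) and Lemma 1.1 below), estimating the \\(PL_{0}\\) value under the normal approximation takes only a few lines of code. The following is a minimal Python sketch; the numerical inputs are hypothetical placeholders standing in for tree-specific values.

```python
import math

def pl0_normal_approx(L0, mean_pd, var_pd):
    """Approximate P[future PD < L0] by the area under a normal curve
    with the given mean and variance (valid for large trees)."""
    z = (L0 - mean_pd) / math.sqrt(var_pd)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF at z

# Hypothetical values, for illustration only:
print(pl0_normal_approx(L0=80.0, mean_pd=100.0, var_pd=144.0))  # approx 0.048
```

Here a value of about 0.048 indicates that, under these hypothetical numbers, there is roughly a 5% chance that future PD falls below \\(L_{0}\\).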
Our normal distribution result is asymptotic - that is, it holds for large trees. However, it is also useful to have techniques for calculating the exact PD distribution on any given tree. In Section 3, we show how this may be achieved by a polynomial time algorithm under the mild assumption that each edge length is an integer multiple of some fixed length. In Section 4, we show how our results can be easily modified to handle an 'unrooted' form of PD that has also been considered in the literature.
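To anticipate the kind of computation involved, the following sketch (ours, in Python) computes the exact distribution of future PD by one natural dynamic-programming scheme: process the tree bottom-up, convolving the independent PD contributions of sibling subtrees. The tree encoding, edge lengths and survival probabilities are hypothetical, every edge length is assumed to be a strictly positive integer, and this sketch is not necessarily identical to the algorithm developed in Section 3.

```python
# Toy tree with integer edge lengths: child -> (parent, length); root "r".
# All names, lengths and probabilities below are hypothetical.
parent = {"v": ("r", 2), "a": ("v", 1), "b": ("v", 1), "c": ("r", 3)}
p = {"a": 0.5, "b": 0.9, "c": 0.2}

def convolve(d1, d2):
    """Distribution of the sum of two independent integer-valued variables."""
    out = {}
    for x, q in d1.items():
        for y, r in d2.items():
            out[x + y] = out.get(x + y, 0.0) + q * r
    return out

def dist_below(e):
    """Distribution (value -> probability) of the PD contributed by edge e
    and all edges below it; mass at 0 means 'no survivor below e', which is
    unambiguous because every length is assumed strictly positive."""
    kids = [u for u in parent if parent[u][0] == e]
    length = parent[e][1]
    if not kids:                                    # e is a pendant edge
        return {0: 1.0 - p[e], length: p[e]}
    conv = {0: 1.0}
    for f in kids:
        conv = convolve(conv, dist_below(f))
    dead = conv.pop(0, 0.0)                         # no survivor below e
    out = {x + length: q for x, q in conv.items()}  # edge e gets counted
    out[0] = dead
    return out

dist = {0: 1.0}
for f in (u for u in parent if parent[u][0] == "r"):
    dist = convolve(dist, dist_below(f))
print(sorted(dist.items()))
# approximately: [(0, 0.04), (3, 0.41), (4, 0.36), (6, 0.1), (7, 0.09)]
```

The support of each intermediate distribution is bounded by the total (integer) edge length of the corresponding subtree, so the whole computation is polynomial in the number of leaves and the total tree length.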
### Definitions and preliminaries
Throughout this paper \\(X\\) will denote a set of _taxa_ (for example, different species, different genera or populations of the same species) and \\(X^{\\prime}\\) will denote a subset of \\(X\\). A _rooted phylogenetic \\(X\\)-tree_ is a rooted tree in which (i) all edges are oriented away from the root, (ii) \\(X\\) is the set of leaves (vertices of the tree with no outgoing edges) and (iii) every vertex except the leaves (and also possibly the root) has at least two out-going edges (allowing the root to have just one outgoing arc will be useful later). In systematic biology, these trees are used to represent evolutionary development of the set \\(X\\) of taxa from their common ancestor (the root of the tree), and the orientation of the edges corresponds to temporal ordering. Given a rooted phylogenetic \\(X\\)-tree \\(\\mathcal{T}\\), we let \\(E(\\mathcal{T})\\) denote the set of edges, and \\(E_{P}(\\mathcal{T})\\) denote the set of pendant edges (edges that are incident with a leaf).
Suppose we have a rooted phylogenetic \\(X\\)-tree \\(\\mathcal{T}\\) and a map \\(\\lambda\\) that assigns a non-negative real-valued length \\(\\lambda_{e}\\) to each edge \\(e\\) of \\(\\mathcal{T}\\). Given the pair \\((\\mathcal{T},\\lambda)\\) and a subset \\(X^{\\prime}\\) of \\(X\\), the _phylogenetic diversity_ of \\(X^{\\prime}\\), denoted \\(PD_{(\\mathcal{T},\\lambda)}(X^{\\prime})\\) - or, more briefly, \\(PD(X^{\\prime})\\) - is the sum of the \\(\\lambda_{e}\\) values of all edges that lie on at least one path between an element of \\(X^{\\prime}\\) and the root of \\(\\mathcal{T}\\).
In the _(generalized) field of bullets model_ (g-FOB), we have a triple \\((\\mathcal{T},\\lambda,p)\\) where \\(\\mathcal{T}\\) is a rooted phylogenetic \\(X\\)-tree, \\(\\lambda\\) is an edge length assignment map, and \\(p\\) is a map that assigns to each leaf \\(i\\in X\\) a probability \\(p_{i}\\). Construct a random set \\(X^{\\prime}\\) by assigning each element \\(i\\) of \\(X\\) to \\(X^{\\prime}\\) independently with probability \\(p_{i}\\). In biodiversity conservation we regard \\(X^{\\prime}\\) as the set of taxa that will still exist (that is, not be extinct) at some time \\(t\\) in the future; accordingly, we call \\(p_{i}\\) the _survival probability_ of \\(i\\).
Considering the random variable \\(\\varphi=\\varphi_{\\mathcal{T}}=PD_{(\\mathcal{T},\\lambda)}(X^{\\prime})\\), which is the phylogenetic diversity of the random subset \\(X^{\\prime}\\) of \\(X\\) (consisting of those taxa that'survive')
Figure 1. If only the taxa marked * in the tree on the left survive then the future phylogenetic diversity is the sum of the lengths of the solid edges in the tree on the right.
according to the process just described, we call \\(\\varphi\\)_future phylogenetic diversity_. An example of this process is shown in Fig. 1.
Note that in the g-FOB model, we can write
\\[\\varphi=\\sum_{e}\\lambda_{e}Y_{e}, \\tag{1}\\]
where \\(Y_{e}\\) is the binary random variable which takes the value \\(1\\) if \\(e\\) lies on at least one path between an element of \\(X^{\\prime}\\) and the root of \\(\\mathcal{T}\\), and which is \\(0\\) otherwise. Moreover,
\\[\\mathbb{P}[Y_{e}=1]=1-\\prod_{i\\in C(e)}(1-p_{i}), \\tag{2}\\]
where \\(C(e)\\) is the set of elements of \\(X\\) that are separated from the root of \\(\\mathcal{T}\\) by \\(e\\). Consequently, if we let
\\[P_{e}:=\\mathbb{P}[Y_{e}=1]=1-\\prod_{i\\in C(e)}(1-p_{i}),\\]
then
\\[\\mathbb{E}[\\varphi]=\\sum_{e}\\lambda_{e}P_{e}. \\tag{3}\\]
Equation (1) suggests that for large trees, \\(\\varphi\\) might be normally distributed, as it will be a sum of many random variables (a normal distribution is also suggested by simulations described in [9, 14]). However, the random variables (\\(\\lambda_{e}Y_{e}\\)) are not identically distributed and, more importantly, they are not independent. Therefore a straightforward application of the (usual) central limit theorem seems problematic. We show that under two mild restrictions, a normal law can be established for large trees. Moreover, neither of these two mild restrictions can be lifted (we exhibit a counter-example to a normal law in both cases).
Since a normal distribution is determined once we know both its mean and variance, it is useful to have equations for calculating both these quantities. Equation (3) provides a simple expression for the mean, and we now present an expression for the variance that is also easy to compute. Given two distinct edges of \\(\\mathcal{T}\\), we write \\(e<_{\\mathcal{T}}f\\) if the path from the root of \\(\\mathcal{T}\\) to \\(f\\) includes edge \\(e\\) (or, equivalently, \\(C(f)\\subset C(e)\\)).
**Lemma 1.1**.: \\[\\text{Var}[\\varphi]=\\sum_{e}\\lambda_{e}^{2}P_{e}(1-P_{e})+2\\sum_{(e,f):e<_{ \\mathcal{T}}f}\\lambda_{e}\\lambda_{f}P_{f}(1-P_{e}).\\]
Proof.: From Equation (1) we have:
\\[\\text{Var}[\\varphi]=\\text{Cov}[\\varphi,\\varphi]=\\sum_{e,f}\\lambda_{e}\\lambda_ {f}\\,\\text{Cov}[Y_{e},Y_{f}].\\]
The covariance of \\(Y_{e}\\) and \\(Y_{f}\\) is
\\[\\text{Cov}[Y_{e},Y_{f}]=\\mathbb{E}[Y_{e}Y_{f}]-\\mathbb{E}[Y_{e}]\\mathbb{E}[Y_ {f}]=\\mathbb{P}[Y_{e}=1,Y_{f}=1]-\\mathbb{P}[Y_{e}=1]\\mathbb{P}[Y_{f}=1].\\]
Now, we have the following cases:

1. \\(e\\neq f\\) and neither \\(e<_{\\mathcal{T}}f\\) nor \\(f<_{\\mathcal{T}}e\\). In this case, the subtree of \\(\\mathcal{T}\\) with root edge \\(e\\) and the subtree of \\(\\mathcal{T}\\) with root edge \\(f\\) do not have any leaves in common, and so \\(Y_{e}\\) and \\(Y_{f}\\) are independent. Thus, \\(\\operatorname{Cov}[Y_{e},Y_{f}]=0\\).
2. \\(e<_{\\mathcal{T}}f\\). In this case, \\(C(f)\\subset C(e)\\) and so the survival of any taxon in \\(C(f)\\) implies the survival of a taxon in \\(C(e)\\); that is, \\(Y_{f}=1\\) implies \\(Y_{e}=1\\) and we have \\(\\operatorname{Cov}[Y_{e},Y_{f}]=\\mathbb{P}[Y_{f}=1]-\\mathbb{P}[Y_{e}=1] \\mathbb{P}[Y_{f}=1]=P_{f}(1-P_{e})\\).
3. \\(f<_{\\mathcal{T}}e\\). This is analogous to case (2) (and, together with case (1), explains the factor of 2 in the expression on the right-hand side of our formula for \\(\\operatorname{Var}[\\varphi]\\)).
4. \\(e=f\\). This case gives \\(\\operatorname{Cov}[Y_{e},Y_{f}]=\\mathbb{P}[Y_{e}=1](1-\\mathbb{P}[Y_{e}=1])=P_{ e}(1-P_{e})\\) (and corresponds to the first term on the right hand-side of our formula for \\(\\operatorname{Var}[\\varphi]\\)).
By considering these cases for \\(\\operatorname{Cov}[Y_{e},Y_{f}]\\), we obtain the result claimed.
A consequence of this lemma is the following lower bound on the variance of future \\(PD\\) which will be useful later.
**Corollary 1.2**.: _Consider the g-FOB model on \\((\\mathcal{T},\\lambda,p)\\). Then,_
\\[\\text{Var}[\\varphi]\\geq\\sum_{e\\in E_{P}(\\mathcal{T})}\\lambda_{e}^{2}P_{e}(1-P _{e}).\\]
Proof.: Notice that all the terms in the summation expression for \\(\\operatorname{Var}[\\varphi]\\) in Lemma 1.1 are non-negative, and so a lower bound on \\(\\operatorname{Var}[\\varphi]\\) is obtained by summing over those pairs \\((e,f)\\) for which \\(e=f\\) is a pendant edge of \\(\\mathcal{T}\\). This gives the claimed bound.
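Both Equation (3) and the formula of Lemma 1.1 are straightforward to evaluate on a given tree. The following sketch does so for a hypothetical toy tree (the same encoding as in the earlier sketch), and checks the exact values by simulating the g-FOB model.

```python
import math, random

# Toy tree (hypothetical): an edge is named by its child endpoint.
parent = {"v": ("r", 1.0), "a": ("v", 0.5), "b": ("v", 0.7), "c": ("r", 2.0)}
p = {"a": 0.5, "b": 0.9, "c": 0.2}   # hypothetical survival probabilities
edges = list(parent)

def clade(e):
    """C(e): the set of leaves separated from the root by edge e."""
    kids = [u for u in parent if parent[u][0] == e]
    return {e} if not kids else set().union(*(clade(u) for u in kids))

def below(e, f):
    """True iff e <_T f, i.e. edge e lies on the path from the root to f."""
    v = parent[f][0]
    while v in parent:
        if v == e:
            return True
        v = parent[v][0]
    return False

P = {e: 1.0 - math.prod(1.0 - p[i] for i in clade(e)) for e in edges}
lam = {e: parent[e][1] for e in edges}

mean_phi = sum(lam[e] * P[e] for e in edges)                    # Equation (3)
var_phi = (sum(lam[e] ** 2 * P[e] * (1 - P[e]) for e in edges)  # Lemma 1.1
           + 2 * sum(lam[e] * lam[f] * P[f] * (1 - P[e])
                     for e in edges for f in edges if below(e, f)))
print(mean_phi, var_phi)    # exact values: 2.23 and 0.8821

# Monte Carlo check under the g-FOB model:
xs = []
for _ in range(50000):
    surv = {i for i in p if random.random() < p[i]}
    xs.append(sum(lam[e] for e in edges if clade(e) & surv))
m = sum(xs) / len(xs)
print(m, sum((x - m) ** 2 for x in xs) / len(xs))
```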
## 2. Asymptotic normality of future phylogenetic diversity under the g-FOB model
Consider a sequence of such rooted phylogenetic trees:
\\[\\mathcal{T}_{1},\\mathcal{T}_{2},\\ldots,\\mathcal{T}_{n},\\ldots\\]
where \\(\\mathcal{T}_{n}\\) has a leaf label set \\(X=\\{1,\\ldots,n\\}\\). Furthermore, suppose that for each tree we have an associated edge length function \\(\\lambda=\\lambda^{(n)}\\) and a survival probability function \\(p=p^{(n)}\\). For the sequence of g-FOB models \\((\\mathcal{T}_{n},\\lambda^{(n)},p^{(n)})\\), we impose the following conditions (where \\(E_{P}(\\mathcal{T}_{n})\\) is the set of pendant edges of \\(\\mathcal{T}_{n}\\)):
(C1) For some \\(\\epsilon>0\\) and for each \\(n\\), we have: \\[\\epsilon\\leq p_{i}^{(n)}\\leq 1-\\epsilon,\\] for all \\(i\\in\\{1,\\ldots,n\\}\\) except for at most \\(An^{\\alpha}\\) values of \\(i\\), where \\(A,\\alpha\\geq 0\\) are constants, with \\(\\alpha<\\frac{1}{2}\\).

(C2) Let \\(L(n)=\\max\\{\\lambda_{e}^{(n)}:e\\in E(\\mathcal{T}_{n})\\}\\). Then, for each \\(n\\), we have: \\[\\sum_{e\\in E_{P}(\\mathcal{T}_{n})}\\left(\\lambda_{e}^{(n)}\\right)^{2}\\geq Bn^{\\beta}L(n)^{2},\\] for some constants \\(B>0,\\beta>2\\alpha\\).
**Remarks concerning conditions (C1), (C2).**
Condition (C1) simply says that the survival of most taxa is neither (arbitrarily close to) certain nor impossible. The term \\(An^{\\alpha}\\) provides the flexibility to allow for some of the taxa to have a survival probability that is very close to, or even equal to, \\(0\\) or \\(1\\).
Condition (C2) says, roughly speaking, that the pendant edges are, on average, not too short in relation to the longest edge in the tree. This is relevant for evolutionary biology, as it follows that for trees generated by a constant speciation rate 'pure birth' model (see, for example, [2]) condition (C2) holds in expectation (for any \\(\\alpha\\in(0,\\frac{1}{2})\\)). A more formal statement of this claim, and its proof, is given in the Appendix.
Note that if condition (C2) holds for a value \\(\\beta>0\\), then \\(\\beta\\) is at most \\(1\\), since each term in the summation in (C2) is at most \\(L(n)^{2}\\) and there are \\(O(n)\\) of them.
\\(\\Box\\)
Next, we state our main theorem, which describes the asymptotic normality of future phylogenetic diversity \\(\\varphi_{n}=\\varphi_{\\mathcal{T}_{n}}\\). Since phylogenetic trees often contain a large number of taxa, the result allows one to approximate the distribution of future phylogenetic diversity with a normal distribution.
**Theorem 2.1**.: _Under conditions (C1) and (C2), \\((\\varphi_{n}-\\mathbb{E}[\\varphi_{n}])/\\sqrt{\\text{Var}[\\varphi_{n}]}\\) converges in distribution to \\(N(0,1)\\) as \\(n\\to\\infty\\), where \\(N(0,1)\\) denotes a standard normally distributed random variable._
We pause to note that one cannot drop either condition (C1) or (C2) in Theorem 2.1. It is clear that dropping (C1) is problematic (for example, set \\(p_{i}^{(n)}\\in\\{0,1\\}\\) for all \\(i\\) which leads to a degenerate distribution); as for (C2) the following example shows that we require \\(\\beta\\) to be strictly positive.
**Example: Condition (C2) cannot be removed**
Consider a tree \\(\\mathcal{T}_{n}\\) with \\(n\\) leaves. Leaves \\(1,\\ldots,n-1\\) have incident edges that each have length \\(\\frac{1}{\\sqrt{n-1}}\\) and all these edges are incident with a vertex that is adjacent to the root by an edge of length \\(1\\). Leaf \\(n\\) is attached directly to the root by an edge of length \\(1\\) (see Fig. 2). Consider a sequence of g-FOB models with \\(p_{i}^{(n)}=s\\) for all \\(i,n\\), where \\(s\\) is any number strictly between \\(0\\) and \\(1\\). Then \\(\\varphi_{n}=\\frac{1}{\\sqrt{n-1}}A_{n}+B_{n}+C_{n}\\), where \\(\\frac{1}{\\sqrt{n-1}}A_{n}\\) is the contribution to \\(\\varphi_{n}\\) of the \\(n-1\\) edges that are incident with leaves \\(1,\\ldots,n-1\\); \\(B_{n}\\) is the contribution to \\(\\varphi_{n}\\) of the edge that connects these \\(n-1\\) edges to the root of \\(\\mathcal{T}_{n}\\); and \\(C_{n}\\) is the contribution to \\(\\varphi_{n}\\) of the edge incident with leaf \\(n\\). Notice that \\(A_{n}\\) is a sum of \\(n-1\\) i.i.d. binary \\((0,1)\\) random variables, each of which takes the value \\(1\\) with probability \\(s\\), and \\(C_{n}\\) is a binary random variable which takes the value \\(1\\) with probability \\(s\\). Consequently, the variance of \\(\\frac{1}{\\sqrt{n-1}}A_{n}\\) equals \\(s(1-s)\\), the same as the variance of \\(C_{n}\\). Moreover, \\(B_{n}\\) converges in probability to \\(1\\), and \\(C_{n}\\) is independent of \\(A_{n}\\) and \\(B_{n}\\). Consequently, \\(\\operatorname{Var}[\\varphi_{n}]\\to 2s(1-s)\\) as \\(n\\to\\infty\\). Furthermore, by the standard central limit theorem, \\(\\frac{\\frac{1}{\\sqrt{n-1}}A_{n}-\\mathbb{E}[\\frac{1}{\\sqrt{n-1}}A_{n}]}{\\sqrt{2s(1-s)}}\\) converges in distribution to \\(N(0,\\frac{1}{2})\\) (a normal random variable with mean \\(0\\) and variance \\(\\frac{1}{2}\\)). Thus, \\((\\varphi_{n}-\\mathbb{E}[\\varphi_{n}])/\\sqrt{\\operatorname{Var}[\\varphi_{n}]}\\) converges in distribution to the random variable \\(N(0,\\frac{1}{2})+W\\), where \\(W\\) is independent of \\(N(0,\\frac{1}{2})\\) and takes the value \\(\\frac{1-s}{\\sqrt{2s(1-s)}}\\) with probability \\(s\\) and the value \\(\\frac{-s}{\\sqrt{2s(1-s)}}\\) with probability \\(1-s\\). In particular, \\((\\varphi_{n}-\\mathbb{E}[\\varphi_{n}])/\\sqrt{\\operatorname{Var}[\\varphi_{n}]}\\) does not converge in distribution to \\(N(0,1)\\). Notice that in this example, (C1) is satisfied, but (C2) fails since \\(\\sum_{e\\in E_{P}(\\mathcal{T}_{n})}(\\lambda_{e}^{(n)})^{2}=2L(n)^{2}\\).
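The failure of normality in this example is easy to observe numerically. The following sketch (ours) simulates the standardized \\(\\varphi_{n}\\) for this tree with \\(s=1/2\\) and estimates its fourth moment; for the limit \\(N(0,\\frac{1}{2})+W\\) this moment equals \\(5/2\\), whereas a standard normal has fourth moment \\(3\\).

```python
import math, random

def standardized_phi_samples(n, s=0.5, reps=20000):
    """Simulate (phi_n - mean) / sd for the tree of Fig. 2: n-1 pendant
    edges of length 1/sqrt(n-1) under a common edge of length 1, plus one
    pendant edge of length 1 attached to the root; each leaf survives
    independently with probability s."""
    vals = []
    for _ in range(reps):
        k = sum(random.random() < s for _ in range(n - 1))  # survivors among 1..n-1
        phi = k / math.sqrt(n - 1)                          # (1/sqrt(n-1)) * A_n
        phi += 1.0 if k > 0 else 0.0                        # B_n
        phi += 1.0 if random.random() < s else 0.0          # C_n (leaf n)
        vals.append(phi)
    m = sum(vals) / reps
    sd = math.sqrt(sum((x - m) ** 2 for x in vals) / reps)
    return [(x - m) / sd for x in vals]

z = standardized_phi_samples(n=401)
print(sum(x ** 4 for x in z) / len(z))  # approaches 2.5, not the normal value 3
```

With \\(n=401\\) and 20000 replicates the printed value is close to \\(5/2\\) (up to a Monte Carlo error of roughly \\(\\pm 0.1\\)), well separated from \\(3\\).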
We now provide a brief, informal outline of the approach we use to prove Theorem 2.1. The main idea is to decompose \\(\\mathcal{T}_{n}\\) into a 'central core' and a large number of'moderately small' pendant subtrees. Each edge in the central core separates the root from enough leaves so that we can be very sure that at least one of these leaves survives - consequently the combined PD-contribution of this central core converges in probability to a fixed (non-random) function of \\(n\\). Regarding the pendant subtrees, their contributions to the PD score are independent and although they are not identically distributed random variables, their combined variance grows sufficiently quickly that we can establish a normal law for their sum by a standard central limit theorem.
Proof of Theorem 2.1.: We first note that it is sufficient to establish Theorem 2.1 under (C1) and the seemingly stronger condition:
(C2\\({}^{*}\\)) \\(L(n)=1\\), and \\(\\sum_{e\\in E_{P}(\\mathcal{T}_{n})}(\\lambda_{e}^{(n)})^{2}\\geq Bn^{\\beta}\\) for constants \\(B>0,\\beta>2\\alpha\\).
To see why, suppose we have established Theorem 2.1 under (C1), (C2\\({}^{*}\\)). For a sequence \\(\\mathcal{T}_{n}\\) (with associated maps \\(\\lambda^{(n)}\\), \\(p^{(n)}\\)) satisfying (C1), (C2), let \\(\\mu_{e}^{(n)}=L(n)^{-1}\\lambda_{e}^{(n)}\\) for each edge \\(e\\) of \\(\\mathcal{T}_{n}\\) and each \\(n\\). Note that, by Equation (1), the normalized \\(\\varphi\\) score (namely \\((\\varphi_{n}-\\mathbb{E}[\\varphi_{n}])/\\sqrt{\\operatorname{Var}[\\varphi_{n}]}\\)) for \\((\\mathcal{T}_{n},\\mu^{(n)},p^{(n)})\\) equals the normalized \\(\\varphi\\) score for \\((\\mathcal{T}_{n},\\lambda^{(n)},p^{(n)})\\) and that \\((\\mathcal{T}_{n},\\mu^{(n)},p^{(n)})\\) satisfies (C2\\({}^{*}\\)). Thus we will henceforth assume conditions (C1) and (C2\\({}^{*}\\)).

Figure 2. A tree for which future phylogenetic diversity does not become normally distributed as \\(n\\) grows.
Next, we make a notational simplification: for the remainder of the proof, we will write \\(\\lambda_{e}^{(n)}\\) as \\(\\lambda_{e}\\) and \\(p_{i}^{(n)}\\) as \\(p_{i}\\) (but respecting in the proof that these quantities depend on \\(n\\)). Also, for a sequence of random variables \\((Y_{n})\\), we write \\(Y_{n}\\xrightarrow{P}a\\) to denote that \\(Y_{n}\\) converges in probability to a constant \\(a\\), and \\(Y_{n}\\xrightarrow{D}Y\\) to denote that \\(Y_{n}\\) converges in distribution to a random variable \\(Y\\).
Since \\(\\beta>2\\alpha\\), we may select a value \\(\\gamma\\) with \\(\\alpha<\\gamma<\\beta/2\\), and set \\(f(n):=n^{\\gamma}\\). We partition the edges of \\(\\mathcal{T}_{n}\\) into two classes \\(E_{1}^{n}\\) and \\(E_{2}^{n}\\) and we define a third class \\(E_{12}^{n}\\subseteq E_{1}^{n}\\) as follows: Let \\(n_{e}\\) denote the number of leaves of \\(\\mathcal{T}_{n}\\) that are separated from the root by \\(e\\). Then set:
* \\(E_{1}^{n}\\): edges \\(e\\) of \\(\\mathcal{T}_{n}\\) with \\(n_{e}\\leq f(n)\\);
* \\(E_{2}^{n}\\): edges \\(e\\) of \\(\\mathcal{T}_{n}\\) with \\(n_{e}>f(n)\\);
* \\(E_{12}^{n}\\): edges \\(e\\in E_{1}^{n}\\) such that \\(e\\) is adjacent to an edge \\(f\\in E_{2}^{n}\\).
For an edge \\(e\\in E_{12}^{n}\\) of \\(\\mathcal{T}_{n}\\), we make the following definitions:
* \\(t_{e}\\) denotes the subtree of \\(\\mathcal{T}_{n}\\) consisting of edge \\(e\\) and all other edges of \\(\\mathcal{T}_{n}\\) that are separated from the root by \\(e\\).
* \\(\\varphi_{e}^{n}\\) denotes the future phylogenetic diversity of \\(t_{e}\\), under the probabilistic model described above.
See Fig. 3 for a schematic summary of these concepts.
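To make this decomposition concrete, the following sketch (ours) computes \\(n_{e}\\) for each edge of a small hypothetical tree and reads off the classes \\(E_{1}^{n}\\), \\(E_{2}^{n}\\) and \\(E_{12}^{n}\\); the threshold value stands in for \\(f(n)=n^{\\gamma}\\), and edges are named by their child endpoints.

```python
# Toy tree (hypothetical): child vertex -> parent vertex; the root is "r".
parent = {"u": "r", "v": "u", "a": "v", "b": "v", "c": "u", "d": "r"}
leaves = {"a", "b", "c", "d"}

def n_below(e):
    """n_e: the number of leaves separated from the root by edge e."""
    if e in leaves:
        return 1
    return sum(n_below(x) for x in parent if parent[x] == e)

f_n = 1.5   # stands in for the threshold f(n) = n**gamma
E1 = {e for e in parent if n_below(e) <= f_n}
E2 = set(parent) - E1

def adjacent_to_E2(e):
    """True iff edge e shares a vertex with some edge of E2.  Since n_e can
    only grow towards the root, it suffices to inspect the edge above e or,
    for an edge hanging off the root, its sibling root edges."""
    q = parent[e]
    if q != "r":
        return q in E2           # the edge above e is the one named by q
    return any(parent[x] == "r" and x in E2 for x in parent)

E12 = {e for e in E1 if adjacent_to_E2(e)}
print(sorted(E1), sorted(E2), sorted(E12))
# ['a', 'b', 'c', 'd'] ['u', 'v'] ['a', 'b', 'c', 'd']
```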
For \\(\\varphi_{n}\\), Equation (1) gives
\\[\\varphi_{n}=\\sum_{e\\in E_{1}^{n}}\\lambda_{e}Y_{e}+\\sum_{e\\in E_{2}^{n}}\\lambda _{e}Y_{e}=\\sum_{e\\in E_{12}^{n}}\\varphi_{e}^{n}+\\sum_{e\\in E_{2}^{n}}\\lambda_ {e}Y_{e}. \\tag{4}\\]
Figure 3. A representation of the decomposition of \\(\\mathcal{T}_{n}\\) in the proof of Theorem 2.1.
Let
\\[\\lambda_{n}=\\sum_{e\\in E_{2}^{n}}\\lambda_{e},\\quad Z_{n}=\\sum_{e\\in E_{12}^{n}}\\varphi_{e}^{n},\\quad\\text{and}\\quad R_{n}=\\sum_{e\\in E_{2}^{n}}\\lambda_{e}(1-Y_{e}).\\]
With this notation, we can re-write (4) as
\\[\\varphi_{n}=\\lambda_{n}+Z_{n}-R_{n}. \\tag{5}\\]
**Lemma 2.2**.: \\(R_{n}\\xrightarrow{P}0\\)_._
Proof.: Since \\(\\operatorname{Var}[R_{n}]=\\mathbb{E}[R_{n}^{2}]-\\mathbb{E}[R_{n}]^{2}\\) and \\(\\mathbb{E}[R_{n}^{2}]\\geq\\mathbb{E}[R_{n}]^{2}\\), it is sufficient to show that \\(\\mathbb{E}[R_{n}^{2}]\\to 0\\) (the claim that \\(R_{n}\\xrightarrow{P}0\\) then follows by Chebyshev's inequality). We have \\(R_{n}=\\sum_{e\\in E_{2}^{n}}\\lambda_{e}(1-Y_{e})\\) and so
\\[R_{n}^{2}=\\sum_{e,f\\in E_{2}^{n}}\\lambda_{e}\\lambda_{f}(1-Y_{e})(1-Y_{f})\\leq |E_{2}^{n}|\\sum_{e\\in E_{2}^{n}}(1-Y_{e}),\\]
since \\(\\lambda_{e},\\lambda_{f}\\leq 1\\) by (C2\\({}^{*}\\)), and \\((1-Y_{f})\\leq 1\\) for all \\(f\\in E_{2}^{n}\\). Thus,
\\[\\mathbb{E}[R_{n}^{2}]\\leq|E_{2}^{n}|^{2}\\cdot\\max\\{\\mathbb{P}[Y_{e}=0]:e\\in E_ {2}^{n}\\}. \\tag{6}\\]
Now, for any edge \\(e\\in E_{2}^{n}\\) there are at least \\(n^{\\gamma}-An^{\\alpha}\\) elements \\(i\\) of \\(C(e)\\) for which \\(p_{i}\\geq\\epsilon\\) (by (C1)) and thus
\\[\\mathbb{P}[Y_{e}=0]\\leq(1-\\epsilon)^{n^{\\gamma}-An^{\\alpha}}.\\]
Since \\(|E_{2}^{n}|<2n\\), Equation (6) and the inequality \\(\\alpha<\\gamma\\) give
\\[\\mathbb{E}[R_{n}^{2}]\\leq 4n^{2}\\cdot(1-\\epsilon)^{n^{\\gamma}-An^{\\alpha}}\\to 0 \\text{ as }n\\rightarrow\\infty,\\]
as required.
**Lemma 2.3**.: _Under conditions (C1) and (C2\\({}^{*}\\)), we have_
\\[\\sum_{e\\in E_{P}(\\mathcal{T}_{n})}(\\lambda_{e}^{(n)})^{2}P_{e}(1-P_{e})\\geq B \\epsilon^{2}(1+o(1))n^{\\beta},\\]
_where \\(o(1)\\) denotes a term that tends to \\(0\\) as \\(n\\rightarrow\\infty\\)._
Proof.: Let \\(U_{n}\\) be the set of those pendant edges \\(e\\) of \\(\\mathcal{T}_{n}\\) for which the leaf incident with \\(e\\) has its survival probability in the interval \\([\\epsilon,1-\\epsilon]\\), and let \\(V_{n}\\) denote the set of the remaining pendant edges of \\(\\mathcal{T}_{n}\\). Clearly,
\\[\\sum_{e\\in E_{P}(\\mathcal{T}_{n})}(\\lambda_{e}^{(n)})^{2}P_{e}(1-P_{e})\\geq \\epsilon^{2}\\sum_{e\\in U_{n}}(\\lambda_{e}^{(n)})^{2}, \\tag{7}\\]
and by (C2\\({}^{*}\\)) we have
\\[Bn^{\\beta}\\leq\\sum_{e\\in E_{P}(\\mathcal{T}_{n})}(\\lambda_{e}^{(n)})^{2}\\leq\\sum_{e\\in U_{n}}(\\lambda_{e}^{(n)})^{2}+|V_{n}| \\tag{8}\\]
where the last term \\((|V_{n}|)\\) is an upper bound
\\[\\sum_{e\\in E_{P}(\\mathcal{T}_{n})}(\\lambda_{e}^{(n)})^{2}P_{e}(1-P_{e})\\geq \\epsilon^{2}(Bn^{\\beta}-An^{\\alpha})=B\\epsilon^{2}(1+o(1))n^{\\beta}.\\]
**Lemma 2.4**.: _The random variable \\(\\psi_{n}=(Z_{n}-\\mathbb{E}[Z_{n}])/\\sqrt{\\text{Var}[Z_{n}]}\\xrightarrow{D}N(0,1)\\)._
Proof.: We can apply a version of the central limit theorem for double arrays of random variables. The required theorem can be found in [13] and states the following. For each \\(n\\), let \\(X_{n1},\\ldots,X_{nr}\\) be \\(r=r(n)\\) independent random variables with finite \\(p\\)th moments for some \\(p>2\\). Let
\\[A_{n}=\\sum_{j}\\mathbb{E}[X_{nj}];\\quad B_{n}=\\sum_{j}\\text{Var}[X_{nj}].\\]
If
\\[B_{n}^{-p/2}\\sum_{j}\\mathbb{E}[|X_{nj}-\\mathbb{E}[X_{nj}]|^{p}]\\to 0\\text{ as }n \\rightarrow\\infty, \\tag{9}\\]
then \\(W_{n}=(\\sum_{j}X_{nj}-A_{n})/\\sqrt{B_{n}}\\xrightarrow{D}N(0,1)\\). We apply this theorem by taking \\(\\{X_{n1},\\ldots,X_{nr}\\}=\\{\\varphi_{e}^{n}:e\\in E_{12}^{n}\\}\\), since the random variables \\(\\{\\varphi_{e}^{n}:e\\in E_{12}^{n}\\}\\) are clearly independent. With our notation \\(Z_{n}=\\sum_{e\\in E_{12}^{n}}\\varphi_{e}^{n}\\), we have \\(A_{n}=\\mathbb{E}[Z_{n}]\\), \\(B_{n}=\\text{Var}[Z_{n}]\\) and \\(W_{n}=\\psi_{n}\\). Thus, we only need to verify condition (9) in order to establish Lemma 2.4.
By Corollary 1.2, we have:
\\[\\text{Var}[\\varphi_{e}^{n}]\\geq\\sum_{f\\in E_{P}(t_{e})}\\lambda_{f}^{2}P_{f}(1- P_{f}).\\]
This lower bound, together with the independence of \\(\\{\\varphi_{e}^{n}:e\\in E_{12}^{n}\\}\\), implies:
\\[B_{n}=\\text{Var}[Z_{n}]=\\sum_{e\\in E_{12}^{n}}\\text{Var}[\\varphi_{e}^{n}]\\geq \\sum_{e\\in E_{12}^{n}}\\sum_{f\\in E_{P}(t_{e})}\\lambda_{f}^{2}P_{f}(1-P_{f})\\]
Consequently, by Lemma 2.3 and the fact that every pendant edge occurs in \(E_{P}(t_{e})\) for some \(e\in E_{12}^{n}\), we obtain
\\[B_{n}\\geq B\\epsilon^{2}(1+o(1))n^{\\beta}. \\tag{10}\\]
Consider now the absolute central moments in (9). We have
\\[\\mathbb{E}[|X_{nj}-\\mathbb{E}[X_{nj}]|^{p}]=\\mathbb{E}[|\\varphi_{e}^{n}- \\mathbb{E}[\\varphi_{e}^{n}]|^{p}]\\leq L_{e}^{p},\\]
where \\(L_{e}\\) is the sum of the lengths of the edges of \\(t_{e}\\). Since \\(t_{e}\\) has less than \\(2n_{e}\\) edges, and the edge lengths are bounded from above by \\(1\\) (under (C2\\({}^{*}\\))) and \\(e\\in E_{12}^{n}\\) implies \\(n_{e}\\leq f(n)\\), we obtain \\(L_{e}\\leq 2n_{e}\\leq 2f(n)\\). Now we have
\\[\\mathbb{E}[|\\varphi_{e}^{n}-\\mathbb{E}[\\varphi_{e}^{n}]|^{p}]\\leq 2^{p}f(n)^{p}. \\tag{11}\\]Combining the bounds (10) and (11), and noting that \\(|E_{12}^{n}|\\leq 2n\\) and \\(f(n)=n^{\\gamma}\\) we obtain:
\\[B_{n}^{-p/2}\\sum_{e\\in E_{12}^{n}}\\mathbb{E}[|\\varphi_{e}^{n}- \\mathbb{E}[\\varphi_{e}^{n}]|^{p}] \\leq\\frac{|E_{12}^{n}|^{2p}f(n)^{p}}{(B\\epsilon^{2}(1+o(1)))^{p/2 }n^{\\beta p/2}}\\] \\[\\leq C(p)n^{1+p(\\gamma-\\beta/2)},\\]
for some constant \(C(p)>0\) independent of \(n\). Now, since \(\gamma<\beta/2\), the exponent of \(n\) in this upper bound is negative for any \(p>(\beta/2-\gamma)^{-1}\). Since there exist values of \(p>2\) satisfying this inequality, and consequently satisfying condition (9), the proof of Lemma 2.4 is complete.
We return to the proof of Theorem 2.1. Using Equation (5) and the notation of Lemma 2.4, we get
\\[\\frac{\\varphi_{n}-\\mathbb{E}[\\varphi_{n}]}{\\sqrt{\\operatorname{ Var}[\\varphi_{n}]}} =\\frac{\\lambda_{n}+Z_{n}-R_{n}-(\\lambda_{n}+\\mathbb{E}[Z_{n}]- \\mathbb{E}[R_{n}])}{\\sqrt{\\operatorname{Var}[\\varphi_{n}]}}\\] \\[=C_{n}\\psi_{n}+D_{n}\\]
where
\\[C_{n}=\\frac{\\sqrt{\\operatorname{Var}[Z_{n}]}}{\\sqrt{\\operatorname{Var}[ \\varphi_{n}]}}\\text{ and }D_{n}=-\\frac{R_{n}-\\mathbb{E}[R_{n}]}{\\sqrt{ \\operatorname{Var}[\\varphi_{n}]}}.\\]
By Lemma 2.2 and the fact that \(\operatorname{Var}[\varphi_{n}]\) does not converge to \(0\) (by Corollary 1.2, Lemma 2.3, and condition (C2\({}^{*}\))), we have:
\\[D_{n}\\xrightarrow{P}0. \\tag{12}\\]
Moreover, by (5), \\(\\operatorname{Var}[\\varphi_{n}]=\\operatorname{Var}[Z_{n}]+\\operatorname{Var}[ R_{n}]-2\\operatorname{Cov}[Z_{n},R_{n}]\\), so that
\\[C_{n}^{-2}-1=\\frac{\\operatorname{Var}[R_{n}]}{\\operatorname{Var}[Z_{n}]}-2 \\rho\\frac{\\sqrt{\\operatorname{Var}[R_{n}]}}{\\sqrt{\\operatorname{Var}[Z_{n}]}},\\]
where \\(\\rho\\) is the correlation coefficient of \\(R_{n}\\) and \\(Z_{n}\\). Now, by Lemma 2.2 we have \\(\\lim_{n\\to\\infty}\\operatorname{Var}[R_{n}]=0\\). Thus, since \\(\\operatorname{Var}[Z_{n}]\\) is bounded away from \\(0\\) (by (10)), and \\(\\rho\\in[-1,1]\\) we have:
\\[\\lim_{n\\to\\infty}C_{n}=1. \\tag{13}\\]
To complete the proof of Theorem 2.1, we apply Slutsky's Theorem [1] which states that if \\(X_{n},Y_{n},W_{n}\\) are sequences of random variables, and \\(X_{n}\\xrightarrow{P}a\\), \\(Y_{n}\\xrightarrow{P}b\\), (where \\(a,b\\) are constants) and \\(W_{n}\\xrightarrow{D}W\\) (for some random variable \\(W\\)) then \\(X_{n}W_{n}+Y_{n}\\xrightarrow{D}aW+b\\). In our setting, we will take \\(X_{n}=C_{n},Y_{n}=D_{n},W_{n}=\\psi_{n}\\), and \\(W=N(0,1)\\) (the standard normal random variable). The condition that \\(\\psi_{n}\\xrightarrow{D}N(0,1)\\) was established in Lemma 2.4, and the conditions \\(C_{n}\\xrightarrow{P}1\\), \\(D_{n}\\xrightarrow{P}0\\) were established in (13) and (12) (note that the convergence of a sequence of real numbers in (13) is just a special case of convergence in probability). Thus,
\\[(\\varphi_{n}-\\mathbb{E}[\\varphi_{n}])/\\sqrt{\\operatorname{Var}[\\varphi_{n}]} =C_{n}\\psi_{n}+D_{n}\\xrightarrow{D}N(0,1),\\]which completes the proof of Theorem 2.1.
## 3. Computing the PD distribution
In this section we describe an algorithm to calculate the distribution of \\(\\varphi_{\\mathcal{T}}\\) efficiently under the g-FOB model. An approximate distribution could also be obtained by simulation, but the approach we present here allows us to derive the _exact_ distribution of \\(\\varphi_{\\mathcal{T}}\\). Note that we do not require conditions (C1) or (C2) in this section. We make the simplifying assumption that the edge lengths are non-negative integer-valued, which implies that \\(\\varphi_{\\mathcal{T}}\\) can only have values in the set \\(\\{0,1,\\ldots,L\\}\\), where \\(L=PD(X)=\\sum_{e}\\lambda_{e}\\). This assumption is not problematic in practice, as we can rescale all the edge lengths so that they are (arbitrarily close to) integer multiples of some small value.
We also assume that the input tree is such that the root has one outgoing edge and all other non-leaf vertices have exactly two outgoing edges. This assumption does not affect the generality of our method, as any tree can be modified to satisfy it, without changing the distribution for \\(\\varphi_{\\mathcal{T}}\\): one can resolve multifurcations arbitrarily and possibly insert an edge below the root, always assigning length \\(0\\) to the newly introduced edges.
Consistent with the notation used before, \\(\\varphi_{e}\\) denotes the contribution to \\(\\varphi_{\\mathcal{T}}\\) that comes from \\(e\\) and the edges separated from the root by \\(e\\). Then, for any edge \\(e\\) and integer \\(x\\), define
\\[f_{e}(x):=\\mathbb{P}[\\varphi_{e}=x,\\,Y_{e}=1].\\]
Also recall that \\(P_{e}=\\mathbb{P}[Y_{e}=1]\\).
Clearly, if \\(e\\) is the only edge attached to the root of \\(\\mathcal{T}\\), then \\(f_{e}\\) and \\(P_{e}\\) are all that is needed to derive the distribution of \\(\\varphi_{\\mathcal{T}}\\): simply observe that
\\[\\mathbb{P}[\\varphi_{\\mathcal{T}}=x]=\\mathbb{P}[\\varphi_{e}=x,\\,Y_{e}=1]+ \\mathbb{P}[\\varphi_{e}=x,\\,Y_{e}=0]=f_{e}(x)+(1-P_{e})\\cdot I_{x=0},\\]
where \\(I_{p}\\) equals \\(0\\) or \\(1\\) depending on proposition \\(p\\) being false or true, respectively.
The algorithm then consists of a depth-first (bottom-up) traversal of all the edges, so that each time an edge \(e\) is visited, the values of \(P_{e}\) and \(f_{e}(x)\), for all \(x\in\{\lambda_{e},\lambda_{e}+1,\ldots,L\}\), are calculated using the following recursions (a schematic implementation is sketched after the recursions below). We may then use the \(P_{e}\) and \(f_{e}(x)\) values of the root edge to calculate the distribution of \(\varphi_{\mathcal{T}}\).
### Recursion for \(f_{e}(x)\)
* If \\(e\\) leads into leaf \\(i\\), then \\[f_{e}(x)\\;=\\;\\mathbb{P}[\\varphi_{e}=\\lambda_{e},\\,Y_{e}=1]\\cdot I_{x=\\lambda_{ e}}\\;=\\;p_{i}\\cdot I_{x=\\lambda_{e}}.\\]* If \\(e\\) leads into the tail of edges \\(c\\) and \\(d\\), then
* \\(f_{e}(x)=\\sum_{i=\\lambda_{e}}^{x-\\lambda_{e}-\\lambda_{d}}f_{c}(i) \\cdot f_{d}(x-\\lambda_{e}-i)+(1-P_{d})\\cdot f_{c}(x-\\lambda_{e})+(1-P_{c})\\cdot f _{d}(x-\\lambda_{e})\\).
Note that whenever the term \(f_{c}(x-\lambda_{e})\) with \(x-\lambda_{e}<\lambda_{c}\) or the term \(f_{d}(x-\lambda_{e})\) with \(x-\lambda_{e}<\lambda_{d}\) is used in Equation (14), the algorithm takes its value to be \(0\); therefore there is no need to calculate and store \(f_{e}(x)\) for \(x\) outside the range \(\{\lambda_{e},\lambda_{e}+1,\ldots,L\}\).
Equation (14) is easily proved; we have
\\[f_{e}(x) = \\mathbb{P}[\\varphi_{e}=x,\\,Y_{c}=1,\\,Y_{d}=1]+\\mathbb{P}[\\varphi _{e}=x,\\,Y_{c}=1,\\,Y_{d}=0]+\\mathbb{P}[\\varphi_{e}=x,\\,Y_{c}=0,\\,Y_{d}=1]\\] \\[= \\mathbb{P}[\\varphi_{c}+\\varphi_{d}=x-\\lambda_{e},\\,Y_{c}=1,\\,Y_{ d}=1]\\] \\[+\\,\\mathbb{P}[\\varphi_{c}=x-\\lambda_{e},\\,Y_{c}=1,\\,Y_{d}=0]+ \\mathbb{P}[\\varphi_{d}=x-\\lambda_{e},\\,Y_{c}=0,\\,Y_{d}=1]\\]
where the second equality is obtained by restating event \\(\\varphi_{e}=x\\) in terms of \\(\\varphi_{c}\\) and \\(\\varphi_{d}\\), which is possible once we make assumptions on \\(Y_{c}\\) and \\(Y_{d}\\). Thus,
\\[f_{e}(x) = \\sum_{i=0}^{x-\\lambda_{e}}\\mathbb{P}\\left[\\varphi_{c}=i,\\,Y_{c}= 1\\right]\\cdot\\mathbb{P}\\left[\\varphi_{d}=x-\\lambda_{e}-i,\\,Y_{d}=1\\right]+\\] \\[\\mathbb{P}[\\varphi_{c}=x-\\lambda_{e},\\,Y_{c}=1]\\cdot\\mathbb{P}[Y _{d}=0]+\\mathbb{P}[\\varphi_{d}=x-\\lambda_{e},\\,Y_{d}=1]\\cdot\\mathbb{P}[Y_{c}=0]\\] \\[= \\sum_{i=\\lambda_{e}}^{x-\\lambda_{e}-\\lambda_{d}}f_{c}(i)\\cdot f_{ d}(x-\\lambda_{e}-i)+(1-P_{d})\\cdot f_{c}(x-\\lambda_{e})+(1-P_{c})\\cdot f_{d}(x- \\lambda_{e}).\\]
where the first equality is obtained by using the independence between the survival events in \\(C(c)\\) and \\(C(d)\\). Note that in the first expression in the second equality, the range of the sum has been reduced, as \\(f_{c}(i)=0\\) for \\(i<\\lambda_{c}\\) and \\(f_{d}(x-\\lambda_{e}-i)=0\\) for \\(x-\\lambda_{e}-i<\\lambda_{d}\\).
### Recursion for \\(P_{e}\\)
* If \\(e\\) leads into leaf \\(i\\), then \\(P_{e}=p_{i}\\).
* If \\(e\\) leads into the tail of edges \\(c\\) and \\(d\\), then \\(P_{e}=P_{c}+P_{d}-P_{c}P_{d}\\).
### Computational complexity
For any given \\(e\\), the calculation of \\(P_{e}\\) is done in \\(O(1)\\) time, whereas that of each of the \\(f_{e}(x)\\) values requires \\(O(x)=O(L)\\) time (see recursion (14)), giving a total of \\(O(L^{2})\\). Calling \\(n\\) the number of leaves in \\(\\mathcal{T}\\), there are \\(2n-1\\) edges in \\(\\mathcal{T}\\) and the entire procedure takes \\(O(nL^{2})\\) time.
A more efficient version of the algorithm can be obtained by restricting the calculation of \\(f_{e}(x)\\) to the values of \\(x\\in\\{\\lambda_{e},\\lambda_{e}+1,\\ldots,L_{e}\\}\\), where \\(L_{e}\\) is the maximum value that \\(\\varphi_{e}\\) can attain (namely the sum of the lengths of all the edges separated from the root by \\(e\\), including \\(e\\) itself). Note that the sum in (14) can then be further restricted to the values of \\(i\\) such that \\(i\\leq L_{c}\\) and \\(x-\\lambda_{e}-i\\leq L_{d}\\). Using this more efficient algorithm, it is easy to see that the calculation of all the \\(f_{e}(x)\\) values for a given internal edge \\(e\\) takes \\(O(L_{c}L_{d}+L_{e})\\) time, where \\(c\\) and \\(d\\) are the edges that \\(e\\) leads into. Noting that the sum of all the \\(L_{c}L_{d}\\) terms, for all sister edges \\(c\\) and \\(d\\), is bounded above by \\(L^{2}\\), this shows that the running time of the entire procedure is \\(O(L^{2}+nL)\\). Since typically every pair of taxa in the tree is separated by at least one edge of positive length, we have that \\(n=O(L)\\) and therefore the running time above is equivalent to \\(O(L^{2})\\).
Regarding memory requirements, note that each time we calculate the information relative to \\(e\\) (namely \\(P_{e}\\) and \\(f_{e}(x)\\)), the information relative to the edges it leads to (if any) can be deleted, as it will never be used again. So, at any given moment the information of at most \\(n\\) 'active' edges needs to be stored. If we use the range restriction just described, the sizes of the \\(f_{e}(x)\\) vectors for all the active edges sum to a number bounded above by \\(n+L\\), and therefore the algorithm requires \\(O(n+L)\\) space, equivalent to \\(O(L)\\) if \\(n=O(L)\\).
## 4. Extension to unrooted PD
There is a simple modification of the definition of phylogenetic diversity that is also relevant in biology ([5], [10]). Given a subset \\(X^{\\prime}\\) of \\(X\\), we can evaluate the sum of the lengths of the edges in the minimum subtree connecting (only) the leaves in \\(X^{\\prime}\\). This score - which we will denote by \\(uPD(X^{\\prime})\\) and refer to as the 'unrooted PD' score of \\(X^{\\prime}\\) - is equivalent to \\(PD(X^{\\prime})\\) if the path connecting two leaves in \\(X^{\\prime}\\) traverses the root of \\(\\mathcal{T}\\). However, in general, \\(uPD(X^{\\prime})\\leq PD(X^{\\prime})\\) (Fig. 4 shows an example where \\(uPD(X^{\\prime})<PD(X^{\\prime})\\)).
This alternative concept of phylogenetic diversity has the advantage that it can be defined on either rooted or unrooted phylogenetic trees. Of course, the g-FOB model is also defined naturally on unrooted trees, and so it makes sense to consider the distribution of \(uPD\) under the g-FOB model in this more general setting. A natural question is whether Theorem 2.1 is still valid (that is, is the future uPD of rooted or unrooted trees also asymptotically normal under conditions (C1) and (C2)?). We now answer this question (affirmatively) and also show how to extend the computation of the exact future PD distribution to unrooted trees.
Figure 4. If only the taxa marked * in the tree on the left survive then future 'unrooted' phylogenetic diversity is the sum of the lengths of the solid edges in the tree on the right. Notice that the \(uPD\) value in this example is less than the \(PD\) value (_c.f._ Fig. 1).
Let the random variable \(\varphi^{\prime}=\varphi^{\prime}_{\mathcal{T}}\) denote the uPD score of the random subset \(X^{\prime}\) of \(X\) (consisting of those taxa that will still exist at some time \(t\) in the future). We call \(\varphi^{\prime}\) the _future unrooted phylogenetic diversity_. In this model, we have
\\[\\varphi^{\\prime}=\\sum_{e}\\lambda_{e}Y^{\\prime}_{e}, \\tag{15}\\]
where \\(Y^{\\prime}_{e}\\) is the binary random variable which takes the value \\(1\\) if \\(e\\) lies on at least one path between some pair of taxa in \\(X^{\\prime}\\), and which is \\(0\\) otherwise. Moreover,
\\[\\mathbb{P}[Y^{\\prime}_{e}=1]=(1-\\prod_{i\\in X_{1}(e)}(1-p_{i}))(1-\\prod_{j\\in X _{2}(e)}(1-p_{j})), \\tag{16}\\]
where \\(X_{1}(e)\\) and \\(X_{2}(e)\\) are the bipartition of \\(X\\) consisting of the two subsets of \\(X\\) that are separated by edge \\(e\\). Thus if we let \\(P_{i}(e)\\) denote the probability that at least one taxon in \\(X_{i}(e)\\) survives (for \\(i\\in\\{1,2\\}\\)), then the expected value of \\(\\varphi^{\\prime}\\) (analogous to (3)) is
\\[\\mathbb{E}[\\varphi^{\\prime}]=\\sum_{e}\\lambda_{e}P_{1}(e)P_{2}(e). \\tag{17}\\]
Regarding \\(\\operatorname{Var}[\\varphi^{\\prime}]\\) there is an analogous formula to that given in Lemma 1.1.
Consider now a sequence \\(\\mathcal{T}_{1},\\mathcal{T}_{2},\\ldots,\\mathcal{T}_{n},\\ldots\\) of (rooted or unrooted) phylogenetic trees where \\(\\mathcal{T}_{n}\\) has \\(n\\) leaves, and assume that this sequence satisfies conditions (C1) and (C2) when each \\(\\mathcal{T}_{n}\\) has associated edge length and leaf survival probability functions. It can be shown that Theorem 2.1 is still valid for uPD; that is, under the same conditions, \\((\\varphi^{\\prime}_{n}-\\mathbb{E}[\\varphi^{\\prime}_{n}])/\\sqrt{\\operatorname{ Var}[\\varphi^{\\prime}_{n}]}\\) converges in distribution to \\(N(0,1)\\) as \\(n\\to\\infty\\).
To establish this asymptotic normality of \\(\\varphi^{\\prime}_{n}\\) under conditions (C1) and (C2\\({}^{*}\\)) (and thereby (C1) and (C2)) requires slight modifications to the proof of Theorem 2.1, and we now provide an outline of the argument. The main difference is that now each edge \\(e\\) induces a bipartition \\(X=X_{1}(e)\\cup X_{2}(e)\\) of the taxon set and so we decompose \\(\\mathcal{T}_{n}\\) in a slightly different way. For simplicity, assume that \\(|X_{1}(e)|\\leq|X_{2}(e)|\\) and consider the following edge sets (the definition of the function \\(f(n)\\) is as in the rooted case):
\\(E_{1}^{n}\\): edges \\(e\\) of \\(\\mathcal{T}_{n}\\) with \\(|X_{1}(e)|\\leq f(n)\\).
\\(E_{2}^{n}\\): edges \\(e\\) of \\(\\mathcal{T}_{n}\\) with \\(|X_{1}(e)|>f(n)\\).
\\(E_{12}^{n}\\): edges \\(e\\in E_{1}^{n}\\) such that \\(e\\) is adjacent to an edge \\(f\\in E_{2}^{n}\\).
For \\(\\varphi^{\\prime}_{n}\\) we obtain the following equation:
\\[\\varphi^{\\prime}_{n}=\\sum_{e\\in E_{1}^{n}}\\lambda_{e}Y^{\\prime}_{e}+\\sum_{e \\in E_{2}^{n}}\\lambda_{e}Y^{\\prime}_{e}=\\sum_{e\\in E_{1}^{n}}\\lambda_{e}Y^{ \\prime}_{e}+\\lambda_{n}-R^{\\prime}_{n}, \\tag{18}\\]
where \\(\\lambda_{n}=\\sum_{e\\in E_{2}^{n}}\\lambda_{e}\\) and \\(R^{\\prime}_{n}=\\sum_{e\\in E_{2}^{n}}\\lambda_{e}(1-Y^{\\prime}_{e})\\). For an edge \\(e\\in E_{12}^{n}\\), let \\(t_{e}\\) denote the subtree with root edge \\(e\\) and with leaf set \\(X_{1}(e)\\). Let \\((\\varphi^{n}_{e})^{\\prime}\\) denote the contribution to \\(\\varphi^{\\prime}_{n}\\) by the edges in \\(t_{e}\\). Furthermore, let \\(\\varphi^{n}_{e}\\) be the rooted future phylogenetic diversity of \\(t_{e}\\), \\(Z_{n}=\\sum_{e\\in E_{12}^{n}}\\varphi^{n}_{e}\\) as in the rooted case, \\(W_{e}=\\varphi^{n}_{e}-(\\varphi^{n}_{e})^{\\prime}\\)and \\(V_{n}=\\sum_{e\\in E_{12}^{n}}W_{e}\\). With this notation, we get
\\[\\varphi_{n}^{\\prime}=\\sum_{e\\in E_{12}^{n}}(\\varphi_{e}^{n})^{\\prime}+\\lambda_{n }-R_{n}^{\\prime}=\\sum_{e\\in E_{12}^{n}}\\varphi_{e}^{n}-\\sum_{e\\in E_{12}^{n}}W_{ e}+\\lambda_{n}-R_{n}^{\\prime}=Z_{n}-V_{n}+\\lambda_{n}-R_{n}^{\\prime}. \\tag{19}\\]
Now we can apply Lemma 2.4 and Slutsky's Theorem to complete the proof.
### Computing the uPD distribution
Finally we show how the algorithm described in Section 3 for computing the PD distribution can be modified to calculate the distribution of unrooted PD. As before, we assume the edge lengths are non-negative integers and we preprocess \\(\\mathcal{T}\\) (possibly rooting it in an arbitrary vertex) so that the number of outgoing edges is \\(1\\) for the root and \\(2\\) for all the other non-leaf vertices. Since \\(\\mathcal{T}\\) is now rooted, \\(C(e)\\) and the random variables \\(Y_{e}\\) are well defined. We also define \\(\\varphi_{e}^{\\prime}\\) as the uPD of the surviving taxa in \\(C(e)\\). Then, for any integer \\(x\\), define
\\[f_{e}^{\\prime}(x):=\\mathbb{P}[\\varphi_{e}^{\\prime}=x,\\,Y_{e}=1].\\]
As before, if \\(e\\) is the root edge of \\(\\mathcal{T}\\), then \\(f_{e}^{\\prime}\\) and \\(P_{e}\\) are sufficient to derive the distribution of \\(\\varphi_{\\mathcal{T}}^{\\prime}\\):
\\[\\mathbb{P}[\\varphi_{\\mathcal{T}}^{\\prime}=x]=f_{e}^{\\prime}(x)+(1-P_{e})\\cdot I _{x=0}.\\]
An algorithm to calculate the distribution of \\(\\varphi_{\\mathcal{T}}^{\\prime}\\) can be obtained with a simple modification of the algorithm for \\(\\varphi_{\\mathcal{T}}\\): for each edge \\(e\\), in addition to calculating \\(P_{e}\\) and \\(f_{e}(x)\\), also calculate \\(f_{e}^{\\prime}(x)\\), for all \\(x\\in\\{0,1,\\ldots,L\\}\\). For this purpose, the following recursion is used (note that the \\(f_{e}^{\\prime}\\) values may depend on \\(f_{c}\\) and \\(f_{d}\\) as well as on \\(f_{c}^{\\prime}\\) and \\(f_{d}^{\\prime}\\), which is why we retain the calculation of the \\(f_{e}\\) values even though they are not directly implicated in determining \\(\\mathbb{P}[\\varphi_{\\mathcal{T}}^{\\prime}=x]\\)).
### Recursion for \\(f_{e}^{\\prime}(x)\\)
* If \\(e\\) leads into leaf \\(i\\), then \\[f_{e}^{\\prime}(x)\\;=\\;p_{i}\\cdot I_{x=0}.\\]
* If \\(e\\) leads into the tail of edges \\(c\\) and \\(d\\), then (20) \\[f_{e}^{\\prime}(x)=\\sum_{i=\\lambda_{c}}^{x-\\lambda_{d}}f_{c}(i)\\cdot f_{d}(x-i )+(1-P_{d})\\cdot f_{c}^{\\prime}(x)+(1-P_{c})\\cdot f_{d}^{\\prime}(x),\\]
which is proved in a way similar to (14):
\\[f_{e}^{\\prime}(x) = \\mathbb{P}[\\varphi_{e}^{\\prime}=x,\\,Y_{c}=1,\\,Y_{d}=1]+\\mathbb{P} [\\varphi_{e}^{\\prime}=x,\\,Y_{c}=1,\\,Y_{d}=0]+\\mathbb{P}[\\varphi_{e}^{\\prime}=x,\\,Y_{c}=0,\\,Y_{d}=1]\\] \\[= \\mathbb{P}[\\varphi_{c}+\\varphi_{d}=x,\\,Y_{c}=1,\\,Y_{d}=1]+\\mathbb{ P}[\\varphi_{c}^{\\prime}=x,\\,Y_{c}=1,\\,Y_{d}=0]+\\mathbb{P}[\\varphi_{d}^{\\prime}=x,\\,Y_{c}=0,\\,Y_{d}=1]\\] \\[= \\sum_{i=\\lambda_{c}}^{x-\\lambda_{d}}f_{c}(i)\\cdot f_{d}(x-i)+(1-P _{d})\\cdot f_{c}^{\\prime}(x)+(1-P_{c})\\cdot f_{d}^{\\prime}(x).\\]
## 5. Concluding remarks
The main result of this paper (Theorem 2.1) has been to establish a limiting normal distribution for future PD on large phylogenetic trees. This theorem assumes an underlying generalized 'field of bullets' model, and imposes two further mild conditions (conditions (C1) and (C2)). In this setting Theorem 2.1 reduces the problem of computing the distribution of future PD to that of determining just two parameters - its mean and variance - and these can be readily computed by Equation (3) and Lemma 1.1. Using the resulting normal distribution one can easily compute the probability under the model that future PD will fall below any given critical value. This may also be helpful in designing strategies to minimize this probability (analogous to the 'Noah's Ark problem' which tries to maximize expected future PD).
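As a minimal worked example of this use of the limiting distribution, the snippet below computes the probability that future PD falls below a critical value from a mean and variance obtained via Equation (3) and Lemma 1.1; the three numbers are invented purely for illustration.

```python
from math import erf, sqrt

def prob_pd_below(critical, mean, var):
    z = (critical - mean) / sqrt(var)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF at z

print(prob_pd_below(critical=80.0, mean=100.0, var=225.0))  # ~0.091
```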
In practice, the use of a normal distribution based on Theorem 2.1 requires that the number of taxa is moderately large (\(>50\)), that the survival probabilities are not too extreme (condition (C1)), and that the average length of the pendant edges is not too small in relation to the largest edge length in the tree (condition (C2)). If these conditions are violated, it would be prudent to use the exact algorithm we have described in the paper, as this requires neither a large number of taxa nor condition (C1) or (C2). Applying this algorithm may involve some small adjustment to the edge lengths to make them integral multiples of some common value.
Regarding the 'mild conditions' for Theorem 2.1 (namely (C1) and (C2)), we showed that neither can be dropped completely from the statement of the theorem. However, it is likely that both conditions could be weakened somewhat, though at the risk of complicating the description of the conditions and the proof of Theorem 2.1.
It would be interesting to explore other extinction models that weaken the strong assumption in the g-FOB model that taxon extinction events are independent. One such model would regard extinction as a continuous-time Markov process in which the extinction rate of a taxon \(i\) at any given time \(t\) is the product of an intrinsic extinction rate \(r_{i}\) and a factor that depends on the set of species in the tree that are extant at time \(t\). In general, such processes could be very complex, so a first step would be to identify a simple model that nevertheless captures more biological realism than the g-FOB model.
## 6. Appendix
_Condition (C2) is satisfied in expectation for trees generated under a continuous-time pure-birth model._
Consider a model where each lineage persists for a random period of time before speciating, and these persistence times are i.i.d. random variables with exponential distribution with mean \(s>0\) (this model is sometimes called the 'Yule model' in phylogenetics). Now suppose we sample the process at some time during which the tree has \(n\) leaves. Let \(\mathcal{T}_{n}\) denote this tree (with its associated edge lengths) and let \(\mu_{P}(n)\) denote the average length of the pendant edges of \(\mathcal{T}_{n}\).
**Proposition 6.1**.: _Under a constant birth model with mean speciation time \(s\) and \(\beta\in(0,1)\) there is a constant \(B>0\) for which the expected value of_
\\[\\sum_{e\\in E_{P}(\\mathcal{T}_{n})}\\left(\\lambda_{e}^{(n)}\\right)^{2}-Bn^{\\beta }L(n)^{2}\\]
_is strictly positive for all \\(n\\geq 3\\)._
Proof.: By using the inequality \(\sum_{i=1}^{n}x_{i}^{2}\geq\frac{1}{n}(\sum_{i=1}^{n}x_{i})^{2}\), together with Jensen's inequality \(\mathbb{E}[X]^{2}\leq\mathbb{E}[X^{2}]\), we have
\\[\\mathbb{E}[\\sum_{e\\in E_{P}(\\mathcal{T}_{n})}\\left(\\lambda_{e}^{(n)}\\right)^{2 }-Bn^{\\beta}L(n)^{2}]\\geq n\\mathbb{E}[\\mu_{P}(n)]^{2}-Bn^{\\beta}\\mathbb{E}[L(n )^{2}]\\]
and the proposition follows (by choice of a sufficiently small value of \\(B>0\\)) once we establish the following two results.
1. \\(\\mathbb{E}[\\mu_{P}(n)]\\geq s/6\\) for all \\(n\\geq 3\\).
2. \\(n^{-\\eta}\\mathbb{E}[L(n)^{2}]\\to 0\\) as \\(n\\to\\infty\\), for any \\(\\eta>0\\).
To establish result (i), let \\(S_{n}\\) denote the sum of the lengths of the pendant edges of \\(\\mathcal{T}_{n}\\) up till the point when the number of species first changes from \\(n-1\\) to \\(n\\), and excluding the (length of the) pendant edge on which this speciation event occurs. Thus \\(S_{n}\\) is a sum of lengths of \\(n-2\\) pendant edges. [For example, \\(S_{3}\\) has an exponential distribution with mean \\(s/2\\), as it is the length of the edge that does not first speciate, up until the time when one of the two edges in the tree first speciates]. Since we are observing the tree \\(\\mathcal{T}_{n}\\) at some later time (but while it still has \\(n\\) leaves) then we clearly have:
\\[\\mu_{P}(n)\\geq\\frac{1}{n}S_{n}. \\tag{21}\\]
We will derive a recursion for the sequence \\((\\mathbb{E}[S_{n}],n=3,4,\\ldots)\\). Let \\(\\theta_{n}\\) be an exponentially-distributed random variable with mean \\(s/n\\). Now, the random variable \\(S_{n+1}\\) takes the value \\(S_{n}+(n-1)\\theta_{n}\\), with probability \\(2/n\\) (this is the case where the next speciation event occurs on one of the two edges that develop from the last speciation event). Otherwise (and so with probability \\(1-2/n\\)), \\(S_{n+1}\\) takes the value \\(S_{n}+(n-1)\\theta_{n}-\\lambda_{e}\\), where \\(\\lambda_{e}\\) is the length of one of the \\(n-2\\) pendant edges that contribute to \\(S_{n}\\) (selected uniformly at random from this set of edges).
Consequently,
\\[\\mathbb{E}[S_{n+1}] = \\frac{2}{n}(\\mathbb{E}[S_{n}]+(n-1)\\frac{s}{n})+(1-\\frac{2}{n})( \\mathbb{E}[S_{n}](1-\\frac{1}{n-2})+(n-1)\\frac{s}{n})\\] \\[= \\frac{n-1}{n}(\\mathbb{E}[S_{n}]+s).\\]
By using the initial condition \\(\\mathbb{E}[S_{3}]=s/2\\), and this recursion, we have that \\(\\mathbb{E}[S_{n}]=ns/2-s\\) for all \\(n\\geq 3\\). Taking expectations on both sides of inequality (21) gives
\\[\\mathbb{E}[\\mu_{P}(n)]\\geq\\frac{1}{n}(ns/2-s)=\\frac{s}{2}-\\frac{s}{n}\\geq\\frac {s}{6},\\]for all \\(n\\geq 3\\), thus proving (i).
To establish result (ii), observe that the length of the longest edge in \(\mathcal{T}_{n}\) (namely \(L(n)\)) is bounded above by the length of the longest edge in the tree obtained from \(\mathcal{T}_{n}\) by allowing each leaf to evolve until it next speciates. Now, the lengths of the edges in this resulting tree form a set of \(|E(\mathcal{T}_{n})|\) independent random variables, each having an exponential distribution with mean \(s\) (here \(|E(\mathcal{T}_{n})|\) is the number of edges of \(\mathcal{T}_{n}\), which is at most \(2n-1\)). Thus, if we let \(Y_{n}\) be the maximum of \(2n-1\) i.i.d. exponentially-distributed random variables, each with mean \(s\), then \(L(n)\leq Y_{n}\). Moreover, for any \(x>0\) we have:
\\[\\mathbb{E}[Y_{n}^{2}]=\\int_{0}^{\\infty}\\mathbb{P}[Y_{n}^{2}>y]dy\\leq x^{2}+ \\int_{x^{2}}^{\\infty}\\mathbb{P}[Y_{n}^{2}>y]dy, \\tag{22}\\]
where the first equality in (22) is a standard identity in probability theory for any non-negative random variable \\(Y_{n}^{2}\\). Now, by Boole's inequality,
\\[\\mathbb{P}[Y_{n}^{2}>y]=\\mathbb{P}[Y_{n}>\\sqrt{y}]\\leq(2n-1)\\exp(-\\sqrt{y}/s).\\]
Making the substitution \\(y=x^{2}+t^{2}\\), and applying the inequality \\(\\sqrt{x^{2}+t^{2}}\\geq\\frac{x+t}{\\sqrt{2}}\\) we obtain
\\[\\int_{x^{2}}^{\\infty}\\mathbb{P}[Y_{n}^{2}>y]dy\\leq 2(2n-1)\\exp(-\\frac{x}{s \\sqrt{2}})\\int_{t=0}^{\\infty}t\\exp(-\\frac{t}{s\\sqrt{2}})dt.\\]
Thus, taking \\(x=cs\\log(n)\\), for any \\(c>\\sqrt{2}\\), in (22) we obtain
\\[\\mathbb{E}[L(n)^{2}]\\leq\\mathbb{E}[Y_{n}^{2}]\\leq c^{2}s^{2}(\\log(n))^{2}+o(1),\\]
(where \\(o(1)\\) is a term that tends to \\(0\\) as \\(n\\to\\infty\\)) and result (ii) now follows.
## References
* [1] Durrett, R., 1991. Probability: Theory and Examples, Wadsworth and Brooks/Cole, Belmont, California.
* [2] Edwards, A.W.F., 1970. Estimation of the branch points of a branching diffusion process. (With discussion.). J. Roy. Statist. Soc. Ser. B., 32, 155-174.
* [3] Faith, D.P., 1992. Conservation evaluation and phylogenetic diversity. Biol. Conserv. 61, 1-10.
* [4] Faith, D.P., 2006. The role of the phylogenetic diversity measure, PD, in bio-informatics: Getting the definition right. Evol. Bioinf. Online.
* [5] Faith, D.P., Baker, A.M., 2006. Phylogenetic diversity (PD) and biodiversity conservation: some bioinformatics challenges. Evol. Bioinf. Online.
* [6] Hartmann, K., Steel, M., 2006. Maximizing phylogenetic diversity in biodiversity conservation: greedy solutions to the Noah's Ark problem. Syst. Biol. 55(4), 644-651.
* [7] Hartmann, K., Steel, M., 2007. Phylogenetic diversity: From combinatorics to ecology. In: Reconstructing evolution: New mathematical and computational approaches (eds. O. Gascuel and M. Steel), Oxford University Press, Oxford, UK.
* [8] Mooers, A.O., Heard, S.B., Chrostowski, E., 2005. Evolutionary heritage as a metric for conservation. Pages 120-138 in Phylogeny and conservation (A. Purvis, T. Brooks, and J. Gittleman, eds.). Cambridge University Press, Cambridge, UK.
* [9] Nee, S., May, R. M., 1997. Extinction and the loss of evolutionary history. Science, 278(5338), 692-694.
* [10] Pardi, F., Goldman, N., 2007. Resource-aware taxon selection for maximizing phylogenetic diversity, Syst. Biol. 56(3), 431-444.
* [11] Purvis, A., Agapow, P-M., Gittleman, J.L., Mace, G.M., 2000. Nonrandom extinction and the loss of evolutionary history. Science 288, 328-330.
* [12] Raup, D.M., 1993. Extinction: bad genes or bad luck? Oxford Univ. Press, Oxford.
* [13] Serfling, R.J., 1980. Approximation theorems of mathematical statistics, Wiley, New York.
* [14] Vazquez, D.P., Gittleman, J.L., 1998. Biodiversity conservation: Does phylogeny matter? Current Biol. 8, 379-381.
* [15] Weitzman, M.L., 1998. The Noah's ark problem. Econometrica, 66(6), 1279-1298.
**Abstract.** Phylogenetic diversity is a measure for describing how much of an evolutionary tree is spanned by a subset of species. If one applies this to the (unknown) subset of current species that will still be present at some future time, then this 'future phylogenetic diversity' provides a measure of the impact of various extinction scenarios in biodiversity conservation. In this paper we study the distribution of future phylogenetic diversity under a simple model of extinction (a generalized 'field of bullets' model). We show that the distribution of future phylogenetic diversity converges to a normal distribution as the number of species grows (under mild conditions, which are necessary). We also describe an algorithm to compute the distribution efficiently, provided the edge lengths are integral, and briefly outline the significance of our findings for biodiversity conservation.
Key words and phrases: biodiversity conservation, phylogenetic tree, central limit theorem, field of bullets model of extinction.
1991 Mathematics Subject Classification: 05C05; 92D15.
We thank the NZ Marsden Fund (06-UOC-02) for supporting this research.
Talk given at the \"International Symposium on Exotic States of Nuclear Matter\"
Catania (Italy), June 11-15, 2007
F. Sammarruca
[email protected] Physics Department, University of Idaho,
Moscow, IDAHO 83844-0903, U.S.A.
E-mail: [email protected]
## 1 Introduction
The properties of hadronic interactions in a dense environment is a problem of fundamental relevance in nuclear physics. Such properties are typically expressed in terms of the nuclear equation of state (EoS), a relation between energy and density (and possibly other thermodynamic quantities) in infinite nuclear matter. The infinite geometry of nuclear matter eliminates surface effects and makes this system a clean \"laboratory\" for developing and testing models of the nuclear force in the medium.
The project I review in this talk is broad-scoped. We have examined several EoS-sensitive systems/phenomena on a consistent footing with the purpose of gaining a broad overview of various aspects of the EoS. We hope this will help identify patterns and common problems.
Our approach is microscopic, with the starting point being realistic free-space interactions. In particular, we apply the Bonn B[1] potential in the Dirac-Brueckner-Hartree-Fock (DBHF) approach to asymmetric nuclear matter as done earlier by the Oslo group.[2] The details of the calculations have been described previously.[3] As it has been known for a long time, the DBHF approximation allows a more realistic description of the saturation properties of symmetric nuclear matter as compared with the conventional Brueckner scheme. The leading relativistic effect characteristic of the DBHF model turns out to be a very efficient saturation mechanism. We recall that such a correction has been shown to simulate a many-body effect (the "Z-diagrams") through mixing of positive- and negative-energy Dirac spinors by the scalar interaction [4].
## 2 Isospin-asymmetric nuclear matter
### Seeking laboratory constraints to the symmetry potential
Reactions induced by neutron-rich nuclei can probe the isospin dependence of the EoS. Through careful analyses of heavy-ion collision (HIC) dynamics one can identify observables that are sensitive to the asymmetric part of the EOS. Among those, for instance, is the neutron-to-proton ratio in semiperipheral collisions of asymmetric nuclei at Fermi energies [5].
In transport models of heavy-ion collisions, particles drift in the presence of an average potential while undergoing two-body collisions. Isospin-dependent dynamics is typically included through the _symmetry potential_ and isospin-dependent effective cross sections. Effective cross sections (ECS) will not be discussed in this talk. However, it is worth mentioning that they play an outstanding role for the determination of quantities such as the nucleon mean-free path in nuclear matter, the nuclear transparency function, and, eventually, the size of exotic nuclei.
The symmetry potential is closely related to the single-neutron/proton potentials in asymmetric matter, which we obtain self-consistently with the effective interaction. Those are shown in Fig. 1 as a function of the asymmetry parameter \\(\\alpha=\\frac{\\rho_{n}-\\rho_{p}}{\\rho}\\). The approximate linear behavior, consistent with a quadratic dependence on \\(\\alpha\\) of the _average_ potential energy, is apparent. Clearly, the isospin splitting of the single-particle potential will be effective in separating the collision dynamics of neutrons and protons. The symmetry potential is shown in Fig. 2 where it is compared with empirical information on the isovector part of the optical potential (the early Lane potential [6]). The decreasing strength with increasing energy is in agreement with optical potential data.
### Summary of EOS results and comparison with recent constraints
In Table 1 we summarize the main properties of the symmetric and asymmetric parts of our EoS. Those include saturation energy and density, incompressibility \\(K\\), the skewness parameter \\(K^{\\prime}\\), the symmetry energy, and the symmetry pressure, \\(L\\). The EoS for symmetric and neutron matter and the symmetry energy are displayed in Figs. 3-4.
A recent analysis to constrain the EoS using compact star phenomenology and
Figure 1: The single-neutron (upper panel) and single-proton (lower panel) potentials as a function of the asymmetry parameter for fixed average density (\\(k_{F}=1.4fm^{-1}\\)) and momentum (\\(k=k_{F}\\)).
Figure 2: The symmetry potential as a function of the nucleon kinetic energy at \\(k_{F}=1.4fm^{-1}\\). The shaded area represents empirical information from optical potential data.
HIC data can be found in Ref.[7]. While the saturation energy is not dramatically different between models, the incompressibility values spread over a wider range. Major model dependence is found for the \\(K^{\\prime}\\) parameter, where a negative value indicates a very stiff EoS at high density. That is the case for models with parameters fitted to the properties of finite nuclei, whereas flow data require a soft EoS at the higher densities and thus a larger \\(K^{\\prime}\\). The \\(L\\) parameter also spreads considerably, unlike the symmetry energy which tends to be similar in most models. For \\(L\\), a combination of experimental information on neutron skin thickness in nuclei and isospin diffusion data sets the constraint 62 MeV \\(<L<\\) 107 MeV.
Overall, our EoS parameters compare reasonably well with most of those constraints. They also compare well with those from other DBHF calculations reported in Ref.[7]
The parameters in Table 1 are defined by an expansion of the energy written as
\\[e=e_{s}+\\frac{K}{18}\\epsilon^{2}-\\frac{K^{\\prime}}{162}\\epsilon^{3}+ + \\alpha^{2}(e_{sym}+\\frac{L}{3}\\epsilon+ ), \\tag{1}\\]
with \\(\\epsilon=(\\rho-\\rho_{s})/\\rho_{s}\\), and \\(\\rho_{s}\\) is the saturation density.
Figure 4: Symmetry energy as a function of density (in fm\\({}^{-3}\\)).
Figure 3: Energy/particle in symmetric (solid) and neutron (dashed) matter, as a function of density (in fm\\({}^{-3}\\)).
## 3 Spin-polarized neutron matter
In this section we move on to a another issue presently discussed in the literature with regard to exotic states of nuclear/neutron matter, namely the aspect of spin asymmetry and (possible) spin instabilities.
The problem of spin-polarized neutron/nuclear matter has a long history. Extensive work on the this topic has been done by Vidana, Polls, Ramos, Bombaci, Muther, and more (see bibliography of Ref. [8] for a more complete list). The major driving force behind these efforts is the search for ferromagnetic instabilities, namely the existence of a polarized state with lower energy than the unpolarized, which naturally would lead to a spontaneous transition. Presently, conclusions differ widely concerning the existence of such transition and its onset density.
A coupled self-consistency scheme similar to the one described in Ref. [3] was developed to calculate the EoS of polarized neutron matter. The details are described Ref. [8]. As done previously for the case of isospin asymmetry, the single-particle potential (for upward and downward polarized neutrons), is obtained self-consistently with the effective interaction. Schematically
\\[U_{i}=\\int G_{ij}+\\int G_{ii} \\tag{2}\\]
with \\(i,j\\)=\\(u,d\\). The nearly linear dependence of the single-particle potentials on the spin-asymmetry parameter [8]
\\[U_{u/d}(\\rho,\\beta)\\approx U_{0}(\\rho,0)\\pm U_{s}(\\rho)\\beta, \\tag{3}\\]
with \\(\\beta\\) the spin-asymmetry parameter, is reminescent of the analogous case for isospin asymmetry and may be suggestive of a possible way to seek constraints on \\(U_{s}\\), the \"spin Lane potential\", similarly to what was discussed above for the isovector optical potential. Namely, one can write, for a nucleus,
\\[U\\approx U_{0}+U_{\\sigma}({\\bf s}\\cdot{\\bf\\Sigma})/A, \\tag{4}\\]
with \\({\\bf s}\\) and \\({\\bf\\Sigma}\\) the spins of the projectile nucleon and the target nucleus, respectively, and extract an obvious relation with the previous equation. (In practice, the situation for an actual scattering experiment on a polarized nucleus would require a more
\\begin{table}
\\begin{tabular}{c c} \\hline Parameter & Predicted Value \\\\ \\hline \\(e_{s}\\) & -16.14 MeV \\\\ \\(\\rho_{s}\\) & 0.185 fm\\({}^{-3}\\) \\\\ \\(K\\) & 259 MeV \\\\ \\(K^{\\prime}\\) & 506 MeV \\\\ \\(e_{sym}(\\rho_{s})\\) & 33.7 MeV \\\\ \\(L(\\rho_{s})\\) & 70.1 MeV \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: An overview of our predicted properties for the EOS of symmetric matter and neutron matter.
complicated parametrization than the one above, as normally a spin-unsaturated nucleus is also isospin-unsaturated.)
As already implied by the linear dependence of the single-particle potential displayed in Eq. (3), the dependence of the average energy on the asymmetry parameter is approximately quadratic,[8]
\\[e(\\rho,\\beta)\\approx e(\\rho,0)+S(\\rho)\\beta^{2}, \\tag{5}\\]
where \\(S(\\rho)\\) is the spin symmetry energy, shown in Fig. 5. The spin symmetry energy can be related to the magnetic susceptibility through
\\[\\chi=\\frac{\\rho\\mu^{2}}{2S(\\rho)}. \\tag{6}\\]
The rise of \\(S(\\rho)\\) with density shows a tendency to slow down, a mechanism that we attribute to increased repulsion in large even-singlet waves (which contribute only to the energy of the unpolarized state). This could be interpreted as a precursor of spin instability. In Table 2 we show predicted values of the ratio \\(\\chi_{F}/\\chi\\), where \\(\\chi_{F}\\) is the susceptibility of a free Fermi gas.
Concerning the possibility of laboratory constraints which may help shed light on these issues, magnetic properties are of course closely related to the strength of the effective interaction in the spin-spin channel, which suggests looking into the \(G_{0}\) Landau parameter. With simple arguments, the latter can be related to the susceptibility ratio and the effective mass as
\\[\\frac{\\chi}{\\chi_{F}}=\\frac{m^{*}/m}{1+G_{0}} \\tag{7}\\]
Thus, \\(G_{0}\\)\\(\\leq\\) -1 signifies spin instability. (Notice the analogy with the formally similar relation between the incompressibility ratio \\(K/K_{F}\\) and the parameter \\(F_{0}\\), where \\(F_{0}\\)\\(\\leq\\) -1 signifies that nuclear matter is unstable.) At this time, no reliable constraints on \\(G_{0}\\) are available, due to the fact that spin collective modes have not been observed with sufficient strength.
Figure 5: Spin symmetry energy as a function of the neutron density (in fm\\({}^{-3}\\)).
In closing this section, we note that we found similar considerations concerning the trend of the magnetic susceptibility to hold for symmetric nuclear matter as well as for neutron matter.
## 4 Work in progress: non-nucleonic degrees of freedom
There are important motivations for considering strange baryons in nuclear matter. The presence of hyperons in stellar matter tends to soften the EoS, with the consequence that the predicted neutron star maximum masses become considerably smaller. With recent constraints allowing maximum masses larger than previously accepted limits, accurate calculations which include strangeness become important and timely. On the other hand, remaining within terrestrial nuclear physics, studies of hyperon energies in nuclear matter naturally complement our knowledge of finite hypernuclei.
The nucleon and the \\(\\Lambda\\) potentials in nuclear matter are the solution of a coupled self-consistency problem, which reads, schematically
\\[U_{N} = \\int_{k<k_{F}^{N}}G_{NN}+\\int_{k<k_{F}^{\\Lambda}}G_{N\\Lambda} \\tag{8}\\] \\[U_{\\Lambda} = \\int_{k<k_{F}^{N}}G_{\\Lambda N}+\\int_{k<k_{F}^{\\Lambda}}G_{ \\Lambda\\Lambda}\\]
To confront the simplest possible scenario, one may consider the case of symmetric nuclear matter at some Fermi momentum \(k_{F}^{N}\) in the presence of a "\(\Lambda\) impurity", namely \(k_{F}^{\Lambda}\approx 0\). Under these conditions, the problem stated above simplifies considerably. Such a calculation was done in Ref. [9] within the Brueckner scheme.
We have done a similar calculation but made use of the latest nucleon-hyperon (NY) potential of Ref. [10], which was provided by the Jülich group. In a first approach, we have taken the single-nucleon potential from a separate calculation of symmetric matter. (Notice that the \(\Lambda\) potential is quite insensitive to the choice of \(U_{N}\), as reported in Ref. [9] and as we have observed as well.) The parameters of the \(\Lambda\) potential, on the other hand, are calculated self-consistently with the \(G_{N\Lambda}\) interaction, which is the solution of the Bethe-Goldstone equation with one-boson exchange nucleon-hyperon potentials. In the Brueckner calculation, angle-averaged Pauli blocking and dispersive effects are included. Once the single-particle potential is obtained, the value of \(-U_{\Lambda}(p)\) at \(p\)=0 provides the \(\Lambda\) binding energy in nuclear matter, \(B_{\Lambda}\).
As shown and discussed extensively in Ref. [10], there are several remarkable differences between this model and the older \\(NY\\) Julich potential [11], and those seem to have a large impact on nuclear matter results. The main new feature of this model is a microscopic model of correlated \\(\\pi\\pi\\) and \\(K\\bar{K}\\) exchange to constrain both the \\(\\sigma\\) and \\(\\rho\\) contributions [10]. With the new model, we obtain considerably more attraction than Reuber _et al._[9], about 49 MeV at \\(k_{F}^{N}\\)=1.35 fm\\({}^{-1}\\) for \\(B_{\\Lambda}\\). We have also incorporated the DBHF effect in this calculation (which amounts to involving the \\(\\Lambda\\) single-particle Dirac wave function in the self-consistent calculation through the effective mass) and find a moderate reduction of \\(B_{\\Lambda}\\) by 3-4 MeV. A detailed report of this project is forthcoming.
The natural extension of this preliminary calculation will be a DBHF self-consistent calculation of \\(U_{N}\\), \\(U_{\\Lambda}\\), and \\(U_{\\Sigma}\\) for diverse \\(\\Lambda\\) and \\(\\Sigma\\) concentrations.
## 5 Summary and conclusions
I have presented a summary of recent results from my group as well as on-going work. Our scope is broad and involves several aspects of nuclear matter, the common denominator being the behavior of the nuclear force, including its isospin and spin dependence, in the medium. I have stressed the importance of seeking and exploiting laboratory constraints. In the future, coherent effort from theory, experiment, and observations will be the key to improving our knowledge of nuclear matter and its exotic states.
## Acknowledgments
Support from the U.S. Department of Energy under Grant No. DE-FG02-03ER41270 is acknowledged. I am grateful to Johann Haidenbauer for providing the nucleon-hyperon potential code and for useful communications.
## References
* [1] R. Machleidt, Adv. Nucl. Phys. **19**, 189 (1989).
* [2] G. Bao, L. Engvik, M. Hjorth-Jensen, E. Osnes, and E. Ostgaard, Nucl. Phys. **A575**, 707 (1994).
* [3] D. Alonso and F. Sammarruca, Phys. Rev. C **67**, 054301 (2003).
* [4] G. E. Brown, W. Weise, G. Baym, and J. Speth, _Relativistic Effects in Nuclear Physics_, Comments Nucl. Part. Phys. 1987, Vol.17, No.1, pp.39-62.
* [5] V. Baran, M. Colonna, M. Di Toro, M. Zielinska-Pfabe, and H.H. Wolter, Phys. Rev. C **72**, 064620 (2005).
* [6] A. M. Lane, Nucl. Phys. **35**, 676 (1962).
* [7] T. Klähn _et al._, Phys. Rev. C **74**, 035802 (2006).
* [8] F. Sammarruca and P. Krastev, Phys. Rev. C **75**, 034315 (2007).
* [9] A. Reuber, K. Holinde, and J. Speth, Nucl. Phys. **A570**, 543 (1994).
* [10] J. Haidenbauer and Ulf-G. Meissner, Phys. Rev. C **72**, 044005 (2005).
* [11] B. Holzenkamp, K. Holinde, and J. Speth, Nucl. Phys. **A500**, 485 (1989).
**Abstract.** Microscopic studies of nuclear matter under diverse conditions of density and asymmetry are of great contemporary interest. Concerning terrestrial applications, they relate to future experimental facilities that will make it possible to study systems with extreme neutron-to-proton ratio. In this talk, I will review recent efforts of my group aimed at exploring nuclear interactions in the medium through the nuclear equation of state. The approach we take is microscopic and relativistic, with the predicted EoS properties derived from realistic nucleon-nucleon potentials. I will also discuss work in progress. Most recently, we completed a DBHF calculation of the \(\Lambda\) hyperon binding energy in nuclear matter.
Keywords: Nuclear Matter; Equation of State; in-Medium Hadronic Interactions.
Selfconsistent Equation of State for Supernova and Neutron Star Models
W G Newton\\({}^{1}\\)
J R Stone\\({}^{1,2,3}\\)
A Mezzacappa\\({}^{2}\\)
\\({}^{1}\\) Department of Physics, University of Oxford, Oxford OX1 3PU, United Kingdom \\({}^{2}\\) Physics Division, Oak Ridge National Laboratory,P.O. Box 2008, Oak Ridge, TN 37831, USA \\({}^{3}\\) Department of Chemistry and Biochemistry, University of Maryland, College Park, MD 20742, USA [email protected], [email protected], [email protected]
## 1 Introduction
Observational properties of neutron stars and supernovae serve as powerful constraints of the nuclear EoS. There is a large variety of EoS models in the literature and it is imperative to investigate the connection of the physical processes expected in stars with the features of individual EoS. The models used to construct nuclear EoS range from empirical to those based on non-relativistic effective and realistic nucleon-nucleon potentials and relativistic field theories (for a recent reviews see e.g. [1, 2]). It is unclear at present which of these EoS is closest to reality. All the EoS are required to reflect physics occurring in a wide region of particle number densities. In core-collapse supernovae these densities span from the subnuclear density of about 4\\(\\times 10^{-8}\\) to \\(\\sim\\)0.1 fm\\({}^{-3}\\) (inhomogeneous matter) to the high density phase (uniform matter) between \\(\\sim\\)0.1 fm\\({}^{-3}\\) - 0.6 fm\\({}^{-3}\\). Neutron star models involve an even wider density range starting from \\(\\sim\\)6\\(\\times 10^{-15}\\) fm\\({}^{-3}\\) (estimated density at the surface of neutron stars) to about 0.6-1.0 fm\\({}^{-3}\\) (expected in the center of neutron stars). Most of the currently used EoSs in both the subnuclear and supernuclear density do not cover the whole range but are composed of several EoSs reflecting the evolution of the character of matter with changing density in smaller intervals. One of the most interesting density regions covers the transition betweenuniform and inhomogenous matter, known is the 'pasta' phase. In this region superheavy neutron rich nuclei beyond the neutron drip line gradually dissolve into nucleon + lepton matter of uniform density. The proton and neutron density distribution is determined by a delicate balance between the surface tension of nuclei and the Coulomb repulsion of protons. Previous models of the 'pasta' phase of matter, assuming spherical symmetry, predicted the existence of a series of exotic nuclear shapes - rods, slabs, tubes and bubbles, immersed in a free neutron and electron gas, corresponding to minimal energy of the matter as a function of increasing density, until the uniform distribution becomes energetically favorable. The 'pasta' phase of stellar matter, although occurring in a relatively small region of density, has a significant influence on the neighboring higher and lower density regions due to the requirement of continuity and thermodynamical consistency of the energy per particle and related quantities throughout the whole density and temperature range.
The focus of this work is on the EoS that serves as an input to core-collapse supernova models and non-equilibrium young neutron stars. However, only a slight modification, i.e. the inclusion of chemical equilibrium at supernuclear densities, is required to use this EoS in old neutron stars. The most widely used EoS in core-collapse supernova simulations so far have been the non-relativistic EoS by Lattimer-Swesty [3] and relativistic mean-field model by Shen et al [4]. Both these EoS describe hot stellar matter assuming spherical symmetry and use different models for matter in different density and temperature regions. It is the aim of this work to show that a fully self-consistent non-relativistic EoS in the Hartree-Fock (HF) approximation [5, 6] in three dimensions (removing the constraint of spherical symmetry) can be constructed in the whole density and temperature region of interest. In this way the matter is treated as an ensemble of nucleons that naturally configure to a distribution corresponding to the minimal energy per particle at given density and temperature. The computation method adopted here is an extension of previous work of Bonche and Vautherin [7] and Hillebrant and Wolff [8] who calculated self-consistent HF EoS at finite temperature but only in the spherically symmetrical case and Magierski and Heenen [9] who developed an HF EoS for the general case of three dimensions but considered only zero temperature.
## 2 Computational Procedure
The Equation of State, determining the pressure of a system as a function of density and temperature, is constructed for stellar matter at the densities and temperatures found during core collapse of a massive star pre- and post-bounce. Such matter is composed primarily of neutrons, protons and electrons, with a significant flux of photons, positrons and neutrinos also present during core collapse. There are three main bulk parameters of the matter: the baryon number density \(n_{\rm b}\), the temperature \(T\), and the proton fraction \(y_{\rm p}\), defined as the ratio of the proton number density \(n_{\rm p}\) to the total baryon number density \(n_{\rm b}\). In the present work, the ranges of these parameters are \(0.001<n_{\rm b}<0.16\) fm\({}^{-3}\), \(0<T<10\) MeV, and \(0<y_{\rm p}<0.5\). Furthermore, the EoS depends on a number of microscopic parameters determining the strong force acting between nucleons in the matter. The phenomenological Skyrme SkM\({}^{*}\) force [10] is used here, but it is easy to modify the computer code for any other applicable model of the nucleon-nucleon interaction. Finally, the electric Coulomb force acting between charged particles, protons and electrons, is included. Electrons are treated as forming a degenerate Fermi gas, which should be a valid approximation. Neutrinos are not considered at the present stage of the model.
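To make the tabulation concrete, a minimal sketch of the parameter grid implied by these ranges might look as follows; the grid spacings are our own illustrative choices, not those actually used in the calculation.

```python
import numpy as np

n_b = np.linspace(0.001, 0.16, 60)  # baryon number density (fm^-3)
T = np.linspace(0.0, 10.0, 21)      # temperature (MeV)
y_p = np.linspace(0.0, 0.5, 11)     # proton fraction

grid = [(nb, t, yp) for nb in n_b for t in T for yp in y_p]
print(len(grid), "EoS table entries")  # 13860
```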
The fundamental assumption used here is that nuclear matter has a periodic character and can be modeled as an infinite sequence of cubic unit cells. This notion removes a serious limitation of all previous models based on the consideration of spherical cells, which allow only a spherically symmetrical nucleon distribution in the cell and cannot fully express the periodic character of matter, as the cells make contact only at a limited number of points, leaving the space between them unaccounted for. Each unit cell contains a certain number of neutrons \(N\) and protons \(Z\), making a total baryon number of \(A=N+Z\). Quantum mechanical determination of all energy states and corresponding wave functions of a system of A nucleons in the cell requires exact solution of the A-dimensional equation of motion - the Schroedinger equation - which is not technically feasible at present. However, if it is assumed that there exists an average single-particle potential, created by all nucleons, in which each nucleon moves independently of all the other nucleons present, then it is possible to use the Hartree-Fock approximation to the A-dimensional problem, which reduces it to A one-dimensional problems. A spectrum of discrete energy states, the single-particle states, can be defined in the cell which the individual nucleons occupy (in analogy to a spectrum of standing waves in a box in classical physics). The single-particle wave functions \(\psi_{i}\), associated with these states, are used to construct the total wavefunction \(\Psi\) and to calculate the expectation value of total energy in the state \(\Psi\). Obviously there are many ways the nucleons can be distributed over the available single-particle states, which always considerably outnumber, by a factor of two at least, the total number of nucleons in the cell. Each of these nucleon configurations corresponds to an energy state and a particular spatial distribution of nucleon density in the cell. It turns out that it is possible to find a state \(\Psi_{\rm min}\), constructed of a set of single-particle states, of which the lowest A states are occupied, which corresponds to the minimum energy of the system and is the best approximation to the true A-particle ground state.
Starting from a trial set of single-particle wave functions \\(\\psi_{i}\\), the expectation value of total energy is minimized using the variational principle
\\[\\delta E[\\Psi]=0 \\tag{1}\\]
This condition leads to a system of A non-linear equations for \(\psi_{i}\) that has to be solved iteratively. In this work, three forms of the trial wavefunction have been tested: Gaussian times polynomial functions, harmonic oscillator wave functions, and plane waves. At the beginning the lowest A trial single-particle states are occupied. After each iteration, the resulting states are reordered according to increasing energy and re-occupied. This approach ensures that the final solution is fully independent of the initial choice of trial wavefunction and is not predetermined by this choice. The evolution of the shape of the neutron density distribution during the iteration process is illustrated for A=900 and \(y_{\rm p}\)=0.3; in Figs. 1-2 the 3D density distribution is displayed for \(n_{\rm b}\)=0.08 fm\({}^{-3}\), \(T\)=2.5 MeV, and in Figs. 3-4 for \(n_{\rm b}\)=0.12 fm\({}^{-3}\), \(T\)=5.0 MeV. The change in the distribution after 500 and after several thousand iterations is quite striking. We note that in these figures an increase in density is color-coded from blue to red.
Two iteration schemes have been employed to avoid instabilities in the iteration process: the Imaginary Time Step (ITS) and the Damped Gradient Step (DGS). The ITS is very robust and gives rather rapid initial convergence even when the iteration process is started from trial functions not too similar to the true single-particle wavefunctions. However, when the minimum is approached, it slows down exponentially. The DGS method requires fairly good initial wavefunctions but converges much faster and gives close to linear convergence in the final iterations. In the present work both schemes have been used, starting with the ITS and switching over to the DGS after the first few hundred iterations. After convergence is reached, the total energy density, entropy, pressure and other related observables are calculated and the EoS is constructed in tabular form.
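To make the ITS idea concrete, the following is a minimal one-dimensional sketch, not the 3D Skyrme-Hartree-Fock code used here: each step applies \(1-\delta t\,H\), a first-order approximation to \(e^{-\delta t H}\), to a trial state and renormalizes, which damps the high-energy components. The harmonic test potential, grid and step sizes are illustrative assumptions.

```python
import numpy as np

def hamiltonian(n, dx, v):
    """Finite-difference kinetic term (-0.5 d2/dx2) plus a local potential v."""
    h = np.diag(v) + np.eye(n) / dx**2                    # diagonal part
    h -= 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1)) / dx**2  # nearest neighbors
    return h

def its_ground_state(h, psi, dt=1e-3, n_iter=5000):
    """Relax a trial state toward the ground state by imaginary time steps."""
    for _ in range(n_iter):
        psi = psi - dt * (h @ psi)                        # apply 1 - dt*H
        psi /= np.linalg.norm(psi)                        # re-normalize each step
    return psi, psi @ h @ psi                             # state and its energy

n, dx = 200, 0.1
x = (np.arange(n) - n / 2) * dx
h = hamiltonian(n, dx, 0.5 * x**2)                        # harmonic test potential
psi, e = its_ground_state(h, np.random.rand(n))
print(e)                                                  # ~0.5, the exact oscillator value
```

The exponential slow-down mentioned above is visible in this toy as well: once the low-lying components dominate, each step removes only a factor \(\approx e^{-\delta t(E_{1}-E_{0})}\) of the remaining contamination.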
It is important to realise that it is not known _a priori_ what number of particles in the cell at a given density corresponds to the physical size of the unit cell in nature. For each particle number density the volume of a cell is defined as \(A/n_{\rm b}\), and the energy density and the spatial particle density distribution vary significantly with \(A\), as demonstrated in Figs. 5-8 for \(n_{\rm b}\)=0.08 fm\({}^{-3}\), T=2.5 MeV and \(y_{\rm p}\)=0.3. Each of these results is an example of a possible _excited_ state of the true unit cell (although each is a local ground state for its given set of parameters). These states are rather close in energy, and a series of careful calculations has to be performed to search for the value of \(A\) which gives the absolute minimum energy density for a given set of bulk parameters (i.e. the _minimum minimorum_).
## 3 Results and discussion
One of the main results of the current work is the development of the properties of nuclear matter through extended density and temperature regions. At the lower density limit (\(n_{\rm b}<0.0001\) fm\({}^{-3}\)), the nucleons are arranged as a roughly spherical (but very large) 'nucleus' at the centre of the cell. As the density increases, however, the shape deforms and the nucleon density distribution starts to spread out toward the cell boundaries, assuming a variety of exotic forms made of high and low density regions. At the highest densities, the nucleon density distribution becomes uniform. This behaviour is illustrated in Figs. 9-11, which show the neutron density distribution at three selected densities, T=5 MeV and \(y_{\rm p}\)=0.3, and clearly demonstrate the transition between spherical and homogeneous density distributions.
The entire nucleon configuration within each cell is treated self-consistently as one entity for each set of the macroscopic parameters and evolves naturally within the model as the macroscopic parameters are varied. This is in sharp contrast with previous models, where neutron heavy nuclei at and beyond the particle drip-line were considered as immersed in a sea of unbound free nucleons and the two systems were treated separately. In that approach the transition between the inhomogeneous and homogeneous phase of nuclear matter did not emerge naturally from the calculation but had to be imposed artificially, introducing uncertainty about the threshold density region. Furthermore, important phenomena discussed in more detail elsewhere [11], such as shell effects, the influence of the lattice structure on the Coulomb energy, and the scattering of weakly bound nucleons on inhomogeneities in the matter, are automatically included.
## 4 Summary
The present model provides the first fully self-consistent 3D picture of hot dense nuclear matter. It offers a new concept of hot nuclear matter in the inner crust of neutron stars and in the transitional density region between non-uniform and uniform matter in collapsing stars. Instead of the traditional notion of super-neutron-heavy nuclei immersed in a free neutron gas, it predicts a continuous medium with varying spatial concentration of neutrons and protons. The properties of this medium come out self-consistently from the model, as do the transitions to both higher and lower density phases of the matter. These results may have profound consequences for macroscopic modelling of core-collapse supernovae and neutron stars. In particular, weak interaction processes (neutrino transport and beta-decay) in such a medium will have to be investigated.
## Acknowledgments
Special thanks go to R. J. Toede, Chao Li Wang, Amy Bonsor and Jonathan Edge for development and performing data visualisation. This work was conducted under the auspices of the TeraScale Supernova Initiative, funded by SciDAC grants from the DOE Office of Science High-Energy, Nuclear, and Advanced Scientific Computing Research Programs and partly supported by US DOE grant DE-FG02-94ER40834. Resources of the Center for Computational Sciences at Oak Ridge National Laboratory were used. Oak Ridge National Laboratory is managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.
## References
* [1] Lattimer J M and Prakash M 2000 _Phys. Rep._ **334** 121
* [2] Stone J R 2006 _Open Issues in Understanding of Core-Collapse Supernova Theory_ ed. A Mezzacappa and G M Fuller (World Scientific) p 318
* [3] Lattimer J M and Swesty F D 1991 _Nucl. Phys._ A **535** 331
* [4] Shen H, Toki H, Oyamatsu K and Sumiyoshi K 1998 _Nucl. Phys._ A **637** 435
* [5] Hartree D R 1928 _Proc. Camb. Phil. Soc._ **24** 89
* [6] Fock V A 1930 _Z. Phys._ **61** 126
* [7] Bonche P and Vautherin D 1981 _Nucl. Phys._ A **372** 496
* [8] Hillebrandt W and Wolff R-G 1985 _Nucleosynthesis: Challenges and New Developments_ ed. D Arnett and J W Truran (Univ. Chicago) p 131
* [9] Magierski P and Heenen P-H 2002 _Phys. Rev._ C **65** 045804
* [10] Bartel J, Quentin P, Brack M, Guet C and Hakansson H-B 1982 _Nucl. Phys._ A **386** 79
* [11] Newton W G 2006 _DPhil Thesis_ Oxford University

Figure 1: 3D neutron density distribution after 500 iterations.
Figure 2: 3D neutron density distribution after 2800 iterations.
Figure 3: 3D neutron density distribution after 500 iterations.
Figure 4: 3D neutron density distribution after 6500 iterations.
Figure 5: Neutron density distribution for A=180.
Figure 6: The same as Fig. 5 but for A=460.
Figure 7: The same as Fig. 5 but for A=1400.
Figure 8: The same as Fig. 5 but for A=2200.
Figure 9: 3D neutron density distribution at \(n_{\rm b}\)=0.04 fm\({}^{-3}\).
Figure 10: 3D neutron density distribution at \(n_{\rm b}\)=0.08 fm\({}^{-3}\).
Figure 11: 3D neutron density distribution at \(n_{\rm b}\)=0.12 fm\({}^{-3}\).
Scaling of Anisotropic Flows and Nuclear Equation of State in Intermediate Energy Heavy Ion Collisions
Footnote 1: Supported partially by the National Natural Science Foundation of China under Grant No 10535010 and 10610285, the Shanghai Development Foundation for Science and Technology under Grant Numbers 06JC14082 and 05XD14021, and CAS project KJCX3.SYW.N2
Yan Ting-Zhi
Shanghai Institute of Applied Physics, Chinese Academy of Sciences, P. O. Box 800-204, Shanghai 201800, China Graduate School of the Chinese Academy of Sciences, Beijing 100039, China
Ma Yu-Gang
[email protected] Shanghai Institute of Applied Physics, Chinese Academy of Sciences, P. O. Box 800-204, Shanghai 201800, China
Cai Xiang-Zhou
Shanghai Institute of Applied Physics, Chinese Academy of Sciences, P. O. Box 800-204, Shanghai 201800, China
Fang De-Qing
Shanghai Institute of Applied Physics, Chinese Academy of Sciences, P. O. Box 800-204, Shanghai 201800, China
Guo Wei
Shanghai Institute of Applied Physics, Chinese Academy of Sciences, P. O. Box 800-204, Shanghai 201800, China Graduate School of the Chinese Academy of Sciences, Beijing 100039, China
Ma Chun-Wang
Shanghai Institute of Applied Physics, Chinese Academy of Sciences, P. O. Box 800-204, Shanghai 201800, China Graduate School of the Chinese Academy of Sciences, Beijing 100039, China
Shen Wen-Qing
Shanghai Institute of Applied Physics, Chinese Academy of Sciences, P. O. Box 800-204, Shanghai 201800, China
Tian Wen-Dong
Shanghai Institute of Applied Physics, Chinese Academy of Sciences, P. O. Box 800-204, Shanghai 201800, China
Wang Kun
Shanghai Institute of Applied Physics, Chinese Academy of Sciences, P. O. Box 800-204, Shanghai 201800, China
**Keywords**: Anisotropic flow, number-of-nucleon scaling, EOS, symmetry energy
**PACC**: 2410, 2570, 2587
Footnote †: Corresponding author. Email: [email protected]
describe the time evolution of the colliding system well. When the spatial distance (\(\Delta r\)) between two nucleons is closer than 3.5 fm and their momentum difference (\(\Delta p\)) is smaller than 300 MeV/c, the two nucleons can coalesce into a cluster [16]. With this simple coalescence mechanism, which has been extensively applied in transport theory, clusters of different sizes can be recognized.
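As an illustration only, a minimal sketch of how such a coalescence rule can be applied to a freeze-out configuration is given below; the union-find clustering and the random toy configuration are our own assumptions, not part of the IDQMD code.

```python
import numpy as np

def coalesce(r, p, dr_max=3.5, dp_max=300.0):
    """r: (N,3) positions in fm, p: (N,3) momenta in MeV/c -> list of clusters.
    Clusters are connected components of the (dr, dp) proximity relation."""
    n = len(r)
    parent = list(range(n))

    def find(i):                                  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if (np.linalg.norm(r[i] - r[j]) < dr_max and
                    np.linalg.norm(p[i] - p[j]) < dp_max):
                parent[find(i)] = find(j)         # merge the two clusters

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

rng = np.random.default_rng(0)
r = rng.uniform(-10, 10, (50, 3))                 # toy freeze-out configuration
p = rng.normal(0, 200, (50, 3))
print([len(c) for c in coalesce(r, p)])           # fragment mass numbers A
```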
In the model the nuclear mean-field potential is parameterized as
\[U(\rho,\tau_{z})=\alpha\left(\frac{\rho}{\rho_{0}}\right)+\beta\left(\frac{\rho}{\rho_{0}}\right)^{\gamma}+\frac{1}{2}(1-\tau_{z})V_{c}+C_{sym}\frac{(\rho_{n}-\rho_{p})}{\rho_{0}}\tau_{z}+U^{Yuk} \tag{4}\]
where \(\rho_{0}\) is the normal nuclear matter density (\(0.16\,fm^{-3}\)), and \(\rho_{n}\), \(\rho_{p}\) and \(\rho\) are the neutron, proton and total densities, respectively. \(\tau_{z}\) is the \(z\)-th component of the isospin degree of freedom, which equals 1 for neutrons and -1 for protons. The coefficients \(\alpha\), \(\beta\) and \(\gamma\) are parameters of the nuclear equation of state. \(C_{sym}\) is the symmetry energy strength due to the density difference between neutrons and protons in the nuclear medium, which is important for asymmetric nuclear matter [24; 25; 26] (here \(C_{sym}=32\) MeV is used to include the symmetry energy effect, i.e. an isospin-dependent potential, and \(C_{sym}=0\) for no symmetry energy effect, i.e. an isospin-independent potential). \(V_{c}\) is the Coulomb potential and \(U^{Yuk}\) is the Yukawa (surface) potential. In this work, we take \(\alpha=124\) MeV, \(\beta=70.5\) MeV and \(\gamma=2\), which corresponds to the so-called hard EOS with an incompressibility of \(K=380\) MeV, and \(\alpha=-356\) MeV, \(\beta=303\) MeV and \(\gamma=7/6\), which corresponds to the so-called soft EOS with an incompressibility of \(K=200\) MeV. In the present study, simulations with four combinations of potential parameters, i.e. hard or soft EOS with or without the symmetry energy effect (\(C_{sym}=32\) or 0 MeV), were carried out for the collision system \({}^{86}\)Kr + \({}^{124}\)Sn at 25 MeV/nucleon with impact parameters from 7 fm to 10 fm. The physics results were extracted at the time of 200 fm/c, when the system is already in the freeze-out stage.
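For orientation, the density-dependent part of Eq. (4) is straightforward to evaluate; the sketch below transcribes the hard/soft parameter sets quoted above, while the Coulomb and Yukawa terms are deliberately omitted.

```python
def u_mean_field(rho, rho_n, rho_p, tau_z, eos="hard", c_sym=32.0, rho0=0.16):
    """Density-dependent terms of Eq. (4) in MeV; tau_z = 1 (neutron), -1 (proton).
    Coulomb (V_c) and Yukawa (U^Yuk) contributions are omitted in this sketch."""
    if eos == "hard":                         # K = 380 MeV parameter set
        alpha, beta, gamma = 124.0, 70.5, 2.0
    else:                                     # soft, K = 200 MeV parameter set
        alpha, beta, gamma = -356.0, 303.0, 7.0 / 6.0
    u = alpha * (rho / rho0) + beta * (rho / rho0) ** gamma
    u += c_sym * (rho_n - rho_p) / rho0 * tau_z
    return u

# e.g. a neutron at normal density in neutron-rich matter (illustrative numbers)
print(u_mean_field(0.16, 0.10, 0.06, tau_z=1, eos="soft"))
```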
Figures 1(a), (b), (e) and (f) show the transverse momentum dependence of elliptic flow for mid-rapidity light fragments in four different calculation conditions: (a) for soft EOS with symmetry potential (\(soft\_iso\)); (b) for hard EOS with symmetry potential (\(hard\_iso\)); (e) for soft EOS without symmetry potential (\(soft\_niso\)); and (f) for hard EOS without symmetry potential (\(hard\_niso\)). In all cases, the elliptic flow is positive and increases with increasing \(p_{t}\), which is apparently similar to the RHIC results [12; 13]. Of course, the mechanism is very different: in the intermediate energy domain, collective rotation is one of the main mechanisms inducing positive elliptic flow [2; 3; 27; 28; 29], whereas at RHIC energies it is the strong pressure built up in the initial almond-shaped anisotropy of the geometrical overlap zone between the colliding nuclei that drives the positive elliptic flow [12]. The corresponding nucleon-number scaled elliptic flows are plotted in Fig. 1(c), (d), (g) and (h) as a function of transverse momentum per nucleon. From these panels, it seems that the number-of-nucleon scaling for elliptic flow exists for light fragments at low \(p_{t}/A\) (\(p_{t}/A<0.2\) GeV/c). This behavior is apparently similar to the number-of-constituent-quarks scaling of elliptic flow versus transverse momentum per constituent quark (\(p_{t}/n\)) for different mesons and baryons observed at RHIC [12]. Since all calculations show the similar scaling behavior, this scaling behavior is robust, and it is independent of the details of the EOS and symmetry potential.
Figure 1: (a), (b), (e) and (f): Elliptic flow as a function of transverse momentum (\(p_{t}\)) for the simulations with different parameters of EOS with or without the symmetry energy term. (a) for soft EOS with symmetry potential (\(soft\_iso\)); (b) for hard EOS with symmetry potential (\(hard\_iso\)); (e) for soft EOS without symmetry potential (\(soft\_niso\)); and (f) for hard EOS without symmetry potential (\(hard\_niso\)). Squares represent neutrons, circles protons, triangles fragments of A=2, diamonds A=3 and stars A=4. Panels (c), (d), (g) and (h) present the nucleon-number normalized elliptic flow as a function of transverse momentum per nucleon, corresponding to the cases of (a), (b), (e) and (f), respectively.

To look quantitatively at the differences among the flows in the different calculation conditions, we compare the values of \(v_{2}/A\) for the four simulation conditions (see Fig. 2). The figures show that the difference between the different simulations is big for neutrons and protons but rather small for the fragments of \(A=2\), \(A=3\) and \(A=4\). The reason is that the emitted protons and neutrons feel the role of the mean field (EOS) directly, while the light fragments have a weak sensitivity since they are indirect products of the coalescence mechanism in the present model. At approximately the same \(p_{t}/A\), the elliptic flow is larger for the soft EOS than for the hard EOS, and it is larger for the EOS with symmetry potential than for the case without symmetry potential. Considering that the symmetry potential is basically positive for the studied reaction system (more neutrons than protons), the symmetry potential makes the whole EOS stiffer. In this case, we can say that the stiffer the EOS, the smaller the flow. In other words, the strength of elliptic flow per nucleon is sensitive to the EOS and symmetry potential.

Figure 2: Comparisons of the values of \(v_{2}/A\) versus \(p_{t}/A\) in the different simulation conditions. The meanings of the symbols are depicted in the right bottom corner.
So far, there are few studies of higher order flows, such as \(v_{4}\), either experimentally or theoretically in this energy domain. Here we try to explore the behavior of \(v_{4}\). First we draw \(v_{4}/A\) as a function of \(p_{t}/A\), mimicking the behavior of elliptic flow (see (a), (b), (e) and (f) of Fig. 3), for the four different calculation conditions. It shows that \(v_{4}/A\) is positive and increases with \(p_{t}/A\), but there seems to be no simple scaling behavior such as \(v_{2}\) shows. Considering that RHIC experimental data have demonstrated that a scaling relation among hadron anisotropic flows holds, i.e., \(v_{n}(p_{t})\sim v_{2}^{n/2}(p_{t})\) [30], we plot \(v_{4}/A^{2}\) as a function of \((p_{t}/A)^{2}\) in Fig. 3(c), (d), (g) and (h) for the corresponding calculation conditions of Fig. 3(a), (b), (e) and (f). Now the points of different light fragments nearly merge together at low \((p_{t}/A)^{2}\), which means that a certain scaling law holds between the two variables. All the calculation cases show that there is scaling behavior for \(v_{4}/A^{2}\) versus \((p_{t}/A)^{2}\), and this behavior is robust regardless of the parameters used for the EOS.
Figure 4: The ratios of \(v_{4}/v_{2}^{2}\) for neutrons (squares), protons (circles), the fragments of \(A=2\) (triangles), \(A=3\) (diamonds) and \(A=4\) (stars) versus \(p_{t}\) for the four simulations in different calculation conditions.

Since the above scaling behavior assumes \(v_{n}(p_{t})\sim v_{2}^{n/2}(p_{t})\), we plot \(v_{4}/v_{2}^{2}\) as a function of \(p_{t}\) in Fig. 4 for the four simulations. The figures show that the ratios of \(v_{4}/v_{2}^{2}\) for the different fragments up to \(A=4\) are about a constant of 1/2 in all simulation cases. Because \(v_{2}/A\) can be scaled with \(p_{t}/A\), \(v_{4}/A^{2}\) should scale versus \((p_{t}/A)^{2}\), which is exactly what we see in Fig. 3. One point is worth mentioning in comparison with the RHIC studies, where the data show \(v_{4}/v_{2}^{2}\sim 1.2\) [30]: here \(v_{4}/v_{2}^{2}\sim 1/2\) for the light nuclear fragments in this nucleonic-level coalescence mechanism, rather than the value of 3/4 for mesons or 2/3 for baryons in the quark coalescence model [31]. Coincidentally, the predicted value of the ratio \(v_{4}/v_{2}^{2}\) for hadrons is also 1/2 if the matter produced in ultra-relativistic heavy ion collisions reaches thermal equilibrium and its subsequent evolution follows the laws of ideal fluid dynamics [32]. It is interesting that the same ratio was predicted by two different models at very different energies, which is of course worth further investigation in the near future. One possible interpretation is that the big nucleon-nucleon cross sections in low energy HIC allow the system to reach thermal equilibrium and may induce fluid-like behavior of the nuclear medium before the light fragments are coalesced from nucleons. In this case, the value of \(v_{4}/v_{2}^{2}\) of light fragments could be \(\sim 1/2\) as Ref. [32] shows.
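For readers who want to reproduce the extraction of \(v_{2}\) and \(v_{4}\), a hedged sketch follows: with the reaction plane fixed along the x axis, \(v_{n}=\langle\cos(n\phi)\rangle\) over the fragments' azimuthal angles. The synthetic azimuthal distribution below, in which \(v_{4}=v_{2}^{2}/2\) is built in by hand, is an assumption used purely to test the estimator, not simulation output.

```python
import numpy as np

def flow_coefficients(px, py):
    """v2 and v4 as event-averaged cos(2*phi) and cos(4*phi)."""
    phi = np.arctan2(py, px)
    return np.mean(np.cos(2 * phi)), np.mean(np.cos(4 * phi))

# synthetic distribution dN/dphi ~ 1 + 2 v2 cos(2phi) + 2 v4 cos(4phi),
# with v2 = 0.1 and v4 = v2^2/2, sampled by accept-reject
rng = np.random.default_rng(1)
phi = rng.uniform(-np.pi, np.pi, 2_000_000)
weight = 1 + 2 * 0.1 * np.cos(2 * phi) + 2 * 0.005 * np.cos(4 * phi)
phi = phi[rng.uniform(0, 1.25, phi.size) < weight]      # 1.25 bounds the weight
v2, v4 = flow_coefficients(np.cos(phi), np.sin(phi))
print(v2, v4, v4 / v2**2)   # v2 ~ 0.1; the ratio scatters around the input 1/2
```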
The values of \(v_{4}/A\) versus \(p_{t}/A\) with different simulation parameters are also presented for the light fragments, see Fig. 5. The figures are similar to those in Fig. 2, and the effects of the EOS and symmetry potential on \(v_{4}/A\) are also similar to their effects on \(v_{2}\). However, compared with the sensitivity of \(v_{2}\) to the EOS and symmetry potential, that of \(v_{4}/A\) is not so salient.

Figure 5: Comparison of the values of \(v_{4}/A\) versus \(p_{t}/A\). The meanings of the symbols are depicted in the right bottom corner.
To summarize, we investigated the behavior of anisotropic flows as a function of transverse momentum for light fragments in simulations of 25 MeV/nucleon \({}^{86}Kr+^{124}Sn\) peripheral collisions with the IDQMD model, using potential parameters of hard or soft EOS with or without the symmetry energy term. It was found that for all four types of simulations \(v_{2}\) and \(v_{4}\) of light fragments are positive and increase with \(p_{t}/A\). When we plot \(v_{2}\) per nucleon (\(v_{2}/A\)) versus \(p_{t}/A\) for all light particles, all curves collapse onto the same curve. Similarly, the values of \(v_{4}/A^{2}\) merge together as a function of \((p_{t}/A)^{2}\) for all light particles. Furthermore, it was found that \(v_{4}\) can be well scaled by \(v_{2}^{2}\), with the value \(v_{4}/v_{2}^{2}\sim 1/2\) independent of transverse momentum. The above scaling behaviors can be seen as an outcome of nucleonic coalescence, and they illustrate that the number-of-nucleon scaling for elliptic flow exists in intermediate energy heavy ion collisions. In addition, the values of \(v_{2}/A\) and \(v_{4}/A\) were compared in different simulation conditions, showing that the values of \(v_{2}\) are sensitive to the EOS and symmetry potential, especially for neutrons and protons.
## References
* (1) Ollitrault J 1992 _Phys. Rev._ D **46** 229.
* (2) Ma Y G, Shen W Q, Feng J, Ma Y Q 1993 _Phys. Rev._ C **48** 1492; Ma Y G, Shen W Q, Feng J, Ma Y Q 1993 _Z. Phys._ A **344** 469; Ma Y G, Shen W Q, Zhu Z Y 1995 _Phys. Rev._ C **51** 1029; Ma Y G and Shen W Q 1995 _Phys. Rev._ C **51** 3256.
* (3) Shen W Q, Peter J, Bizard G, Brou R, Cussol D, Louvel M, Patry J P, Regimbart R, Steckmeyer J C, Sullivan J P, Tamain B, Crema E, Doubre H, Hagel K, Jin G M, Peghaire A, Saint-Laurent F, Cassagnou Y, Legrain R, Lebrun C, Rosato E, MacGrath R, Jeong S C, Lee S M, Nagashima Y, Nakagawa T, Ogihara M, Kasagi J and Motobayashi T 1993 _Nucl. Phys._ A **551** 333.
* (4) Sorge H 1997 _Phys. Lett._ B **402** 251; Sorge H 1997 _Phys. Rev. Lett._**78** 2309; Sorge H 1999 _Phys. Rev. Lett._**82** 2048.
* (5) Danielewicz P, Lacey R A, Gossiaux P-B, Pinkenburg C, Chung P, Alexander J M and McGrath R L 1998 _Phys. Rev. Lett._**81** 2438.
* (6) Teaney D and Shuryak E V 1999 _Phys. Rev. Lett._**83** 4951.
* (7) Kolb P F, Sollfrank J, and Heinz U 2000 _Phys. Rev._ C **62** 054909.
* (8) Zheng Y M, Ko C M, Li B A, and Zhang B 1999 _Phys. Rev. Lett._**83** 2534.
* (9) Li Z X and Zhang Y X, p276 in AIP Conference Proceeding CP865: Nuclear Physics Trends: 6th China-Japan Joint Nuclear Physics Symposium (Melville, New York: American Institute of Physics Publisher), eds Y. G. Ma and A. Ozawa.
* (10) Perslam D and Gale C 2002 _Phys. Rev._ C **65** 064611.
* (11) Lukasik J, Auger G, Begemann-Blaich M L, Bellaize N, Bittiger R, Bocage F, Borderie B, Bougault R, Bouriquet B, Charvet J L, Chbihi A, Dayras R, Durand D, Frankland J D, Galichet E, Gourio D, Guinet D, Hudan S, Lautesse P, Lavaud F, Le Fevre A, Legrain R, Lopez O, Lynen U, Muller W F J, Nalpas L, Orth H, Plagnol E, Rosato E, Saija A, Schwarz C, Sfienti C, Tamain B, Trautmann W, Trzcinski A, Turzo K, Vient E, Vigilante M, Volant C and Zwieglinski B 2004 _Phys. Lett._ B **608** 223.
* (12) Adams J et al. (STAR Collaboration) 2004 _Phys. Rev. Lett._**92** 052302; Adams J et al. (STAR Collaboration) 2005 _Phys. Rev._ C **72** 014904; Adams J et al. (STAR Collaboration) 2005 _Phys. Rev. Lett._**95** 122301.
* (13) Chen J H, Ma Y G, Ma G L, Cai X Z, He Z J, Huang H Z, Long J L, Shen W Q, Zhong C and Zuo J X. 2006 _Phys. Rev._ C **74** 064902.
* (14) Ma Y G 2006 _J. Phys._ G**32** S373.
* (15) Yan T Z, Ma Y G, Cai X Z, Chen J G, Fang D Q, Guo W, Ma C W, Ma E J, Shen W Q, Tian W D and Wang K 2006 _Phys. Lett._ B **638** 50.
* (16) Aichelin J 1991 _Phys. Rep._**202** 233.
* (17) Ma Y G and Shen W Q 1995 _Phys. Rev._ C **51** 710.
* (18) Zhang F S, Chen L W, Zhao Y M, Zhu Z Y 1999 _Phys. Rev. C_**60** 064604.
* (19) Liu J Y, Guo W J, Ren Z Z, Xing Y Z, Zuo W, Lee X G 2006 _Chin. Phys._**15** 1738.
* (20) Zhang H Y, Ma Y G, Su Q M, Shen W Q, Cai X Z, Fang D Q, Hu P Y and Han D D 2001 _Acta Physica Sinica_ (in Chinese) **50** 193.
* (21) Wei Y B, Ma Y G, Shen W Q, Ma G L, Wang K, Cai X Z, Zhong C, Guo W and Chen J G 2004 _Phys. Lett._ B **586** 225; Wei Y B, Ma Y G, Shen W Q, Ma G L, Wang K, Cai X Z, Zhong C, Guo W, Chen J G, Fang D Q, Tian W D and Zhou X F 2004 _J. Phys._ G **30** 2019.
* (22) Ma Y G, Zhang H Y, Shen W Q 2002 _Prog. Phys._ (in Chinese) **22** 99.
* (23) Ma Y G, Wei Y B, Shen W Q, Cai X Z, Chen J G. Chen J H, Fang D Q, Guo W, Ma C W, Ma G L, Su Q M, Tian W D, Wang K, Yan T Z, Zhong C, Zuo J X 2006 _Phys. Rev._ C **73** 014604.
* (24) Ma Y G 2000 _Acta Phys Sin._**49** 654; Ma Y G 1999 _Acta Phys Sin._**48** 1839; Ma Y G, Su Q M, Shen W Q, Han D D, Wang J S, Cai X Z, Fang D Q, and Zhang H Y 2000 _Phys. Rev._ C **60** 024607.
* (25) Zhong C, Ma Y G, Fang D Q, Cai X Z,Chen J G, Shen W Q, Tian W D, Wang K, Wei Y B, Chen J H, Guo W, Ma C W, Ma G L, Su Q M, Yan T Z and Zuo J X 2006 _Chinese Physics_**15** 1481.
* (26) Yong G C, Lee B A and Zuo W 2005 _Chinese Physics_**14** 1549.
* (27) Sullivan J P and Peter J 1992 _Nucl. Phys._ A**540** 275.
* (28) Lacey R, Elmaani A, Lauret J, Li T, Bauer W, Craig D, Cronqvist M, Gualtieri E, Hannuschke S, Reposeur T, Vander Molen A, Westfall G D, Wilson W K, Winfield J S, Yee J, Yennello S, Nadasen A, Tickle R S, and Norbeck E 1993 _Phys. Rev. Lett._**70** 1224.
* (29) He Z Y, Angelique J C, Auger A, Bizard G, Brou R, Buta A, Cabot C, Crema E, Cussol D, Dai G X, Masri Y E, Eudes P, Gonin M, Hagel K, Jin G M, Kerambrum A, Lebrun C, Ma Y G, Peghaire A, Peter J, Popescu R, Regimbart R, Rosato E, Saint-Laurent F, Shen W Q, Steckmeyer J C, Tamain B, Vient E, Wada R and Zhang F S 1996 _Nucl. Phys._ A **598** 248.
* (30) Adams J et al. (STAR Collaboration) 2004 _Phys. Rev. Lett._**92** 062301; 2005 _Phys. Rev._ C **72** 014904.
* (31) Kolb P 2002 _Phys. Rev._ C **68** 031902(R).
* (32) Borghini N and Ollitrault J Y 2006 _Phys. Lett. B_**642** 227.
**Street-based Topological Representations and Analyses for Predicting Traffic Flow in GIS**
Bin Jiang and Chengke Liu
Department of Land Surveying and Geo-informatics
The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
Email: [email protected]
## 1 Introduction
Space syntax is developed as a tool for understanding spatial structure, and consequently human life (e.g., human movement) in space, from a topological point of view (Hillier and Hanson 1984, Hillier 1996). Supported mainly by two empirical studies (Hillier et al. 1993, Penn et al. 1998), and extended recently by Jiang (2007a), it is well received in the space syntax community that traffic flow (referring to either pedestrian or vehicle flow) is significantly correlated to a morphological property of streets (a strict definition of street will be given below). The modeling process starts with representing streets as axial lines that intersect to form an axial map, and then ranking the individual lines using graph-theoretic measures for prediction purposes. The axial line is the longest visibility line, representing a street or part of a street.
The axial representation has been questioned by researchers for its validity, as it is neither cognitively sound nor computationally operable (Jiang and Claramunt 2002, Ratti 2004). For example, a ring road often acts as a hub for traffic, yet it is represented by many axial lines forming a closed chain of lines. This is not cognitively sound, as the ring road is a well-received cognitive entity in our minds. More critically, the axial map, consisting of the least number of the longest axial lines, cannot be automatically generated and must be drawn manually. Although the drawing process can be guided by a restricted rule (Hillier and Hanson 1984, Turner 2005), it is remarkably difficult in practice to guarantee that two axial maps drawn by two different people (or by one person at different times) are the same. This casts significant doubt on its validity for various applications. In this paper, we provide further evidence that an axial map is indeed not a good representation for predicting traffic flow. Instead an alternative street-based representation is suggested.
A street is defined as a linear geographic entity that stretches in two dimensional space and is often given a unique name. It should not be confused with a street segment between two junctions. We will illustrate that traffic flow can be better predicted using street-based topological representations. The street-based topological representations are developed from previous studies (Thomson 2003, Jiang and Claramunt 2004), where streets are used to replace axial lines in a kind of topological representation. In the context of this paper, the term topology, in contrast to geometry, refers to a description of the relationship between geographic objects or locations. At the GIS database level, streets are either street segments merged under the same name - named streets (Jiang and Claramunt 2004) - or street segments _naturally_ merged to form good continuity - natural streets (or strokes in terms of Thomson (2003)). Therefore we have two types of streets that constitute the street-based topological representations. The two representations are examined against observed traffic flow. It is found, to our surprise, that the two are better than the axial map for traffic forecast. This finding verifies that the street-based topological representations are a better alternative GIS representation, which has far reaching implications for applications of Geographic Information Systems (GIS) in traffic management and transport planning, and for the understanding of self-organized cities.
This paper can also be put in the context of topological analysis, which forms the foundation of the new science of networks (Newman, Barabasi and Watts 2006). The new science of networks supports the study of real world networks of often large size, takes the view that the networks are not static but evolve in a self-organized manner, and aims to understand the complex behaviors of real world systems and their constituents from a topological perspective. The new science has received a great deal of attention in a variety of disciplines including, for instance, biology, physics, sociology and computer science. The new wave of research interest in real world networks was mainly triggered by two seminal papers (Watts and Strogatz 1998, Barabasi and Albert 1999) published respectively in Nature and Science. It has been found that many real world networks are far from random graphs (Erdos and Renyi 1960), but demonstrate small world and scale free properties. Topological analysis has been applied to urban street networks for studying emerging properties (e.g. Jiang and Claramunt 2004, Rosvall et al. 2005, Buhl et al. 2006, Porta et al. 2006a, 2006b, Jiang 2007b). Topological analysis represents a new paradigm for geospatial analysis and modeling, with a focus on spatial interaction and relationships.
The contribution of this paper is three-fold: (1) it introduces topological representation in general and street-based topological representations in particular for structural analysis of urban systems; (2) it proves that street-based topological representations are superior to conventional axial maps via traffic prediction; (3) it provides a research prototype and related algorithms for further research along the line of topological analysis.
The remainder of this paper is structured as follows. In section 2, we introduce the street-based topological representations and topological measures for structural analysis, and speculate on the importance of the analysis. Section 3 reports in detail the experiments and results using the Hong Kong street network and Annual Average Daily Traffic (AADT) datasets, illustrating the fact that the street-based topological representations are superior to an axial map in traffic forecast. Finally section 4 concludes the paper.
## 2 Topological representations and analyses
In this section, we will introduce basic concepts of space syntax and some key measures for topological analysis in the context of GIS. The brief introduction is mainly for understanding this paper, and the reader is encouraged to consult relevant literature for more details (e.g. Hillier and Hanson 1984, Jiang et al. 2000, Jiang and Claramunt 2004).
### Geometric versus topological representations of urban street networks
Space syntax consists of a set of spatial representations for analyzing urban environments at both the city and building levels (Jiang et al. 2000). In this paper we mainly concentrate on the methods for a city. To illustrate, let us use a fictional city as an example (Figure 1). This is a typical GIS representation with two layers of information: streets as a network layer and buildings as a polygon layer. The city consists of 7 streets: a ring road A and another 6 streets (B-H), represented as a network with 11 nodes and the corresponding links between them (Figure 2d). The street network (or more precisely, the space separated by the building blocks) can be represented by an axial map, in which 13 axial lines (A-M) intersect at 18 nodes (1-18) (Figure 2a). The beauty of space syntax lies in the fact that it collapses an entire axial line into a node and each line-line intersection into a link (Figure 2b). In general, a space syntax graph representing a line-line relationship is a non-planar graph, while an axial map is a planar graph.
What we adopted for predicting traffic flow is the kind of graph that encodes a street-street relationship (Figure 2e). This figure and figure 2b both represent primal graphs for a line-line or street-street relationship. In contrast, dual graphs for a point-point relationship can be established. This was first suggested by Jiang and Claramunt (2002) in what is called a point-based space syntax. Both figures 2c and 2f represent dual relationships between points, with respect to their primal graphs shown in figure 2b and figure 2e. It has been concluded in the previous study (Jiang and Claramunt 2002) that the primal and dual representations are identical from the point of view of morphological analysis. In other words, the morphological properties of points along a line are the same as those of the line. In this respect, Batty (2004) provides an elegant mathematical description for exploring primal and dual relationships.
It is important to point out a major difference between axial line-based and street-based topological representations. An axial line (or visibility line) is an approximation of a linear space while walking in it, so it is perception-based. A street (represented by a central line) is cognitive-based, as it is given a unique name. The difference can be clearly seen in figures 2a and 2d. The lines and streets, both related but different in essence, form respectively the line-based and street-based topologies in terms of intersection. In other words, the topologies are an interconnected whole of lines or streets.
### Topological measures
As shown previously, the line-line or street-street topologies are represented by a graph. In general, a graph (\\(G\\)) consists of a finite set of vertices (or nodes) \\(V=\\{v_{i},v_{2}, v_{n}\\}\\) (where the number of nodes is \\(n\\)) and a finite set of edges (or links) \\(E\\), which is a subset of the Cartesian product \\(V\\times V\\). The graph can be represented as a matrix \\(R(G)\\), whose element \\(r_{ij}\\) is 1 if intersected, and 0 otherwise. Formally it is represented as follows:
\\[R_{ij}=\\begin{cases}1&\\quad\\text{if }\\text{i and j are intersected}\\\\ 0&\\quad\\text{otherwise}\\end{cases} \\tag{1}\\]
Figure 1: (color online) The map of a fictional city
Figure 2: (color online) Geometric and topological representations

It should be noted that this matrix \(R(G)\) is symmetric, i.e. \(r_{ij}=r_{ji}\) for all \(i,j\), and that all diagonal elements of \(R(G)\) are equal to zero. From a computational point of view, we compare each street only to those streets within the same envelope, a rectangular area that covers the street.
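As a sketch of how the matrix of equation (1) and the envelope test can be implemented, consider the following; shapely is an assumed geometry dependency (any geometry library would do), and the three toy streets are illustrative, not the Hong Kong data.

```python
import numpy as np
from shapely.geometry import LineString

def street_matrix(streets):
    """streets: list of shapely LineStrings -> symmetric 0/1 numpy array R."""
    n = len(streets)
    boxes = [s.bounds for s in streets]              # (minx, miny, maxx, maxy)
    r = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            bi, bj = boxes[i], boxes[j]
            # envelope test: disjoint bounding boxes cannot intersect
            if bi[2] < bj[0] or bj[2] < bi[0] or bi[3] < bj[1] or bj[3] < bi[1]:
                continue
            if streets[i].intersects(streets[j]):
                r[i, j] = r[j, i] = 1                # symmetric, zero diagonal
    return r

streets = [LineString([(0, 0), (4, 0)]),             # a tiny three-street example
           LineString([(1, -1), (1, 1)]),            # crosses the first street
           LineString([(10, 10), (12, 10)])]         # isolated
print(street_matrix(streets))
```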
A range of measures for individual nodes (or vertices) are defined for topological analysis. Firstly the degree for a node is the number of other nodes directly connected to it. It is called connectivity in space syntax literature. Formally it is defined by:
\\[M(v_{i})=\\sum_{j=1}^{n}R_{ij} \\tag{2}\\]
Secondly, path length of a node is the measure of how far it is from all other nodes. It is defined by:
\\[L(v_{i})=\\sum_{j=1}^{n}d(i,j), \\tag{3}\\]
where \\(d(i,j)\\) denotes the distance between two vertices \\(i\\) and \\(j\\), which is the minimum length of the paths that connect the two vertices, i.e., the length of a _graph geodesic_.
Thirdly, the clustering coefficient measures the clustering degree of a node. It is defined as the probability that two neighbors of a given node are linked together, and is measured by the ratio of the number of actual edges among the neighbors to the number of possible edges among them:

\[C(v_{i})=\frac{\#\text{ of actual edges among neighbors}}{\#\text{ of possible edges among neighbors}} \tag{4}\]
The measure path length can be used to derive integration, the key measure in space syntax. Integration can be measured at both the local and global levels. To illustrate the difference, let us use the space syntax graph in Figure 2(b), but re-arrange all the nodes around node A (a sort of Bacon number zero; see Footnote 1) according to how far or close the other nodes are from that node (Figure 3). Node A has a path length of \(4\times 1+5\times 2+3\times 3\) (which reads as 4 nodes in 1 step, 5 nodes in 2 steps, and 3 nodes in 3 steps), indicating how far it is from all other nodes. This is the global integration. If, instead of all other nodes, we consider only nodes within two steps in equation (3), then we have the path length at a local level, i.e., \(4\times 1+5\times 2\). This is the local integration, the default measure of space syntax for predicting traffic flow. For node A, its four neighbors could form \(3\times 4/2=6\) friendships, while there is only one actual friendship (between D and F), so the clustering coefficient of node A is 1/6.
Footnote 1: For those who are unaware of the Bacon number: an actor or actress who has filmed with Kevin Bacon gets Bacon number one. An actor or actress who has not filmed with Kevin Bacon, but has filmed with someone who has, gets Bacon number two. This way, Kevin Bacon himself has the Bacon number zero. For more details, refer to the site [http://en.wikipedia.org/wiki/Bacon_number](http://en.wikipedia.org/wiki/Bacon_number).
Figure 3: Adjusted graph showing a line-line relationship in Figure 2(b)

The above topological measures can be put into two categories according to how they are defined: local and global. For example, both connectivity and clustering coefficient are local measures, as only local nodes within a few steps (instead of all other nodes) are involved. In this connection, local integration can be considered a local measure, while global integration and path length are both global measures. Usually local measures are correlated to global measures, which gives a second-order space syntax measure - intelligibility. Intelligibility is simply defined by the R square of the correlation. Connectivity and local and global integrations are key measures of space syntax; degree (or connectivity), path length and clustering coefficient are three key measures for topological analysis, and they constitute essential measures for exploring small world and scale free properties. In addition, Google's PageRank has emerged as an important measure for topological analysis (Jiang 2007a). For the sake of simplicity, we have presented a simplified version of local and global integrations; in fact, in the following experiments we adopted a normalized integration called Real Relative Asymmetry, implemented in many space syntax software packages (refer to e.g. Jiang and Claramunt 2002 for more details).
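A small sketch of these measures, computed with the (assumed) networkx library on a toy graph that reproduces the neighborhood of node A discussed above (four neighbors, one friendship among them, clustering 1/6); the Real Relative Asymmetry normalization is omitted.

```python
import networkx as nx

# toy line-line graph: node A has neighbors B, D, F, G, with one friendship (D-F),
# plus two outer nodes C and H
g = nx.Graph([("A", "B"), ("A", "D"), ("A", "F"), ("A", "G"),
              ("D", "F"), ("B", "C"), ("G", "H")])

print(g.degree("A"))                        # connectivity, equation (2): 4
print(nx.clustering(g, "A"))                # clustering coefficient, equation (4): 1/6

dist = nx.single_source_shortest_path_length(g, "A")
print(sum(dist.values()))                   # global path length, equation (3)

local = nx.single_source_shortest_path_length(g, "A", cutoff=2)
print(sum(local.values()))                  # local variant: nodes within 2 steps only
                                            # (equal to the global value here, since
                                            # this toy graph has diameter 2)
```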
### Why the topological representations and analyses?
To illustrate why the topological representations and analyses matter, let us do a little experiment. Take any street network shapefile and merge all line segments between any pair of junctions into one single line, i.e., a street segment. This way we get a network in which each line represents a segment between two adjacent junctions - truly a network of street segments. Based on the transformed network, we compute how many other segments intersect any particular one. Figure 4(a) illustrates the histogram, which roughly follows a normal distribution with an exception at connectivity 3. However, a completely different pattern emerges if we merge individual segments according to their unique street names. Figure 4(b) illustrates the histogram, where the connectivity ranges diversely from 1 to 60. As we can see, most streets have low degrees of connectivity, while a few have extremely high degrees. A more thorough investigation (Jiang 2007b) illustrates that 80% of streets have degrees less than the average, while 20% of streets have degrees higher than the average.

Figure 4: Degree distributions of street segments (a) and streets (b)
It is important to note that the links in the topological representations have no location sense, although it is derived from geographic objects or location. It sets a clear difference from the \"topology\" in the TIGER data model, developed by the U.S. Census Bureau in the 1960s. Coming back to the question raised, we can remark that topological representations and analyses help to uncover some hidden structure or patterns, which can not be illustrated by the geometric representation and analysis.
## 3 Experiments and results
In this section, we will use Hong Kong as a case study to illustrate that street-based topological representations are indeed superior to an axial map in predicting traffic flow. We also illustrate some topological properties of the Hong Kong street network. For the study, some essential data processing for forming topologies is carried out.
### Data sources and algorithms for forming topologies
Two main data sources are used in the study. The first data source is the Hong Kong street network, a central-line street network. Using both the Hong Kong street network and building levels, we created an axial map consisting of 14378 axial lines. In order to generate the street-based topological representations, we developed three algorithms for the processing. The first algorithm is based on the Gestalt principle of good continuity. The merging process can be described as follows (see also Algorithm I in Appendix A): for every street segment, we trace its connected segments and concatenate the segment with the adjacent one having the smallest deflection angle; the process terminates when the smallest deflection angle reaches the set threshold. It should be noted that the threshold has a significant impact on the number of natural streets. We chose every 10 degrees between 20 and 70 as thresholds for determining continuity in the study. Thus for natural streets we created 6 street-street topologies with different sizes (Table 1) for correlation comparison with traffic flow. The second algorithm is based on named streets (see Algorithm II in Appendix A): we first merge the segments without names into neighboring segments using Algorithm I, and then merge all segments according to their unique names. This generates in total 7488 named streets. The reader may have noted the big difference between the numbers of natural streets and named streets. This is mainly because many Hong Kong streets have multiple lanes that are separated by barriers. Consequently, these streets are represented by multiple central lines; although associated with the same name, they are treated as different streets in the process of generating natural streets. The third algorithm identifies isolated streets or lines and excludes them from the topologies. It is pretty simple, and works as follows: take one street or line as a root, and adopt the well-known Breadth-First Search algorithm to explore all those streets (or lines) that directly or indirectly connect to the root one; those not connected either directly or indirectly to the root are isolated ones. The three algorithms have now been integrated as basic functionality of the newly released Axwoman 4.0 based on ArcGIS, and interested readers can refer to the site (www.hig.se/~big/Axowman) for more details.
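A condensed sketch of the merging rule behind Algorithm I is given below; it assumes straight two-point segments and grows a street from one end only, whereas the full algorithm traces both ends of the seed. The toy segments are illustrative.

```python
import math

def deflection(p0, p1, p2):
    """Angle (degrees) between directions p0->p1 and p1->p2; 0 = straight on."""
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    d = abs(a1 - a2) % (2 * math.pi)
    return math.degrees(min(d, 2 * math.pi - d))

def grow_street(seed, segments, threshold=60.0):
    """seed and segments are 2-tuples of end points ((x, y), (x, y)).
    Greedily absorb, at the current end, the connected segment with the
    smallest deflection angle below the threshold."""
    street, used = [seed], {seed}
    prev, end = seed
    while True:
        best = None
        for s in segments - used:
            if end not in s:
                continue
            nxt = s[0] if s[1] == end else s[1]
            ang = deflection(prev, end, nxt)
            if ang < threshold and (best is None or ang < best[0]):
                best = (ang, s, nxt)
        if best is None:
            return street
        _, seg, nxt = best
        street.append(seg); used.add(seg)
        prev, end = end, nxt

segs = {((0.0, 0.0), (1.0, 0.0)), ((1.0, 0.0), (2.0, 0.1)), ((1.0, 0.0), (1.0, 1.0))}
print(grow_street(((0.0, 0.0), (1.0, 0.0)), segs))
# -> the nearly straight-ahead segment is absorbed; the perpendicular one is not
```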
The second data source is the Hong Kong Annual Traffic Census (Hong Kong Transport Department 2006), provided by the Traffic and Transport Survey Division of the Hong Kong Government Transport Department. The traffic census is conducted through a total of 3191 counting stations across the Hong Kong territory, although a significant portion of the stations function on a rotation basis; 829 counting stations functioned for the year 2005. We used the AADT generated from raw data collected at the counting stations, where inductive loops and pneumatic tubes are installed on a carriageway and connected to roadside automatic counters (Lee 1989). The counting stations were pinpointed onto a map in order to decide which axial lines or which streets (either named or natural) the stations belong to. We first pinpointed them roughly, then adjusted more precisely to locate them on the respective axial lines and streets. This pinpointing process is remarkably tedious, but we did it with the help of a purpose-written computer code. We also found 4 counting stations that were impossible to pinpoint, and these are excluded from the study. Figure 5 illustrates the distribution of the 825 stations on the Hong Kong map.
| Topology | Size |
|---|---|
| Axial lines | 14378 |
| Named streets | 7488 |
| Natural streets (20) | 21008 |
| Natural streets (30) | 17963 |
| Natural streets (40) | 15868 |
| Natural streets (50) | 14542 |
| Natural streets (60) | 13886 |
| Natural streets (70) | 13429 |

Table 1: Size of axial line-based and street-based topologies
Figure 5: (color online) 825 counting stations across the Hong Kong territory
### Computing topological properties
We applied equations (3) and (4) to obtain the measures for the individual nodes of each topology, and then assigned the average measures to the respective topologies. The results are shown in Table 2. We can remark that the average degree is around 3 or 4. However, for the axial line topology, the path length is much bigger than that of its random counterpart; it appears that it may not be a small world, or has at best a rather weak small world property. On the other hand, both the named street topology and the natural street topology (created with a threshold of 60 degrees) are small worlds, since the path length L is pretty close to that of the random counterpart L_rand. This finding of small worlds is not beyond our expectation, as it is a very commonly held property appearing in many real world network topologies. However, it reinforces the view speculated in section 2.3 as to how the topological representations help to uncover hidden structures and patterns.
Apart from the small world property, the degrees of axial lines and streets demonstrate a scale-free property, i.e., \(p(x)\sim cx^{-\alpha}\). Figure 6 demonstrates log-log plots, whose x-axis and y-axis represent the logarithms of degree and cumulative probability. We can remark that the three log-log curves are pretty close to a straight line with an exponent around 2.0, thus a clear indication of the scale-free property. The scale-free property can be further described in detail as follows: about 80% of streets within a street network have length or degrees less than the average value of the network, while 20% of streets have length or degrees greater than the average. Out of the 20%, less than 1% of streets can form a backbone of the street network (Jiang 2007b). Figure 7 highlights the top 1% of well connected streets, forming the backbone of the Hong Kong street network.

Figure 7: (color online) 1% vital streets highlighted (turquoise) selected from (a) named streets, and (b) natural streets
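The log-log check behind Figure 6 can be sketched as follows; the Zipf-distributed degrees are synthetic stand-ins for the real street degrees, used only to show how the cumulative exponent relates to the density exponent \(\alpha\).

```python
import numpy as np

def cumulative(degrees):
    k = np.unique(degrees)                            # sorted unique degree values
    p = np.array([(degrees >= v).mean() for v in k])  # P(K >= k)
    return k, p

rng = np.random.default_rng(2)
degrees = rng.zipf(2.0, size=7488)                    # synthetic p(k) ~ k^-2 degrees
k, p = cumulative(degrees)
slope, _ = np.polyfit(np.log(k), np.log(p), 1)
print("density exponent alpha ~", 1 - slope)          # cumulative slope = -(alpha - 1)
```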
| Topology | N | M | L | L_rand | C | C_rand |
|---|---|---|---|---|---|---|
| Axial lines | 14378 | 2.76 | 33.88 | 9.43 | 0.08 | 0.0002 |
| Named streets | 7488 | 3.29 | 9.29 | 7.38 | 0.21 | 0.0005 |
| Natural streets (20) | 21008 | 3.79 | 23.42 | 7.47 | 0.42 | 0.0002 |
| Natural streets (30) | 17963 | 3.71 | 16.95 | 7.47 | 0.38 | 0.0002 |
| Natural streets (40) | 15868 | 3.65 | 12.91 | 7.47 | 0.35 | 0.0002 |
| Natural streets (50) | 14542 | 3.6 | 10.98 | 7.48 | 0.32 | 0.0002 |
| Natural streets (60) | 13886 | 3.57 | 10.12 | 7.5 | 0.31 | 0.0003 |
| Natural streets (70) | 13429 | 3.54 | 9.73 | 7.52 | 0.3 | 0.0003 |

Table 2: Small world parameters for three kinds of topologies (Note: N = number of nodes, M = average degree of connectivity, L = path length, L_rand = path length of the random counterpart, C = clustering coefficient, C_rand = clustering coefficient of the random counterpart)
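The random-graph baselines in Table 2 can be checked with the standard Erdos-Renyi estimates \(L_{rand}\approx\ln N/\ln M\) and \(C_{rand}\approx M/N\); we assume these are the formulas used, and the sketch below indeed lands close to the tabulated values.

```python
import math

def random_baselines(n, m):
    """Erdos-Renyi estimates of path length and clustering for N nodes, degree M."""
    return math.log(n) / math.log(m), m / n

for name, n, m in [("Axial lines", 14378, 2.76),
                   ("Named streets", 7488, 3.29),
                   ("Natural streets (60)", 13886, 3.57)]:
    l_rand, c_rand = random_baselines(n, m)
    print(f"{name}: L_rand={l_rand:.2f}, C_rand={c_rand:.4f}")
```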
Figure 6: Power law distributions of axial lines (a), named streets (b) and natural streets (60) (c)

The street-based topological representations play an important role in expanding our understanding of streets and of a street network as a whole. Considering individual street segments as agents, the agents interact with their neighbors, forming what we call streets. The streets can be considered an emergence whose size follows a power law distribution. In this respect, streets can be compared to avalanches of all sizes emerging from a sand pile, the model of self-organized criticality (Bak et al. 1987, Bak 1996). In other words, street segments are to streets what sand grains are to avalanches. The avalanches emerge by gradually adding sand grains in the time dimension, whereas the streets are formed by gradually merging adjacent street segments in the space dimension. The angle threshold we set for good continuity is a sort of slope limit for forming avalanches in a sand pile. Therefore, streets, or a street network in general, can be understood as a self-organized phenomenon using the theory of self-organized criticality.
### Predictability of traffic flow
We extracted those axial lines and streets that have counting stations for correlation comparison. Basically, flow observed in the counting stations along axial lines or streets is summed up to correlate to some topological measures (including degree, local integration and global integration) of the individual lines or streets (see Table 3). Unfortunately, the results are not particularly encouraging. As shown in table 3, R square value for the correlation between local integration and traffic flow is indeed the best for either axial lines or streets. We can observe that different thresholds for forming natural streets have some effect on the correlation between flow and topological measures. We note also from the experiments that natural streets using a threshold of 60 degrees seem to be the best option. In what follows, we will adopt this particular natural street-based topology for further analysis.
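Concretely, each correlation figure quoted below is the R square of a linear fit of the summed station flow against a topological measure, one data point per street or line; a minimal sketch with placeholder numbers (not the Hong Kong data) follows.

```python
import numpy as np

def r_square(x, y):
    """R square of a linear regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

local_integration = np.array([0.8, 1.1, 1.3, 1.6, 2.0, 2.4])    # one value per street
aadt_flow = np.array([9e3, 1.4e4, 1.2e4, 2.1e4, 2.6e4, 3.3e4])  # summed station counts
print(r_square(local_integration, aadt_flow))
```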
| Topology | Connect | LInteg | GInteg |
|---|---|---|---|
| Axial lines | 0.088 | 0.145 | 0.105 |
| Named streets | 0.164 | 0.277 | 0.275 |
| Natural streets (20) | 0.151 | 0.211 | 0.158 |
| Natural streets (30) | 0.187 | 0.258 | 0.206 |
| Natural streets (40) | 0.176 | 0.282 | 0.218 |
| Natural streets (50) | 0.166 | 0.314 | 0.242 |
| Natural streets (60) | 0.160 | 0.333 | 0.247 |
| Natural streets (70) | 0.142 | 0.308 | 0.218 |

Table 3: Correlation coefficient (or R square value) between topological measures and traffic flow with different topologies

The reason why the correlation is so poor can be attributed to the geographic nature of the Hong Kong territory: it consists of many islands, and much of the territory, with its mountains and hills, remains undeveloped. The poor correlation could also be due to the uneven distribution of the counting stations over the sampled streets or axial lines. For this reason, we decided to take some small sample areas for a more in-depth study, using two sampling methods. The first 10 samples are taken from the 10 most populated and intensively urbanized areas (Figure 8(a); refer to Appendix B for an enlarged view), while the second 9 areas are sampled according to different morphological patterns (Figure 8(b); refer to Appendix C for an enlarged view). The morphological samples fall into three categories: grid-like, deformed-grid, and irregular (see Figure 9 for examples). For all the samples, we chose local integration and correlated it with traffic flow using axial lines and streets. In what follows, we refer to the correlation coefficient between local integration and traffic flow as predictability.
The predictability for the first 10 samples is listed in Table 4. We can note that the axial lines option is the poorest (second row of Table 4), while both street options show significant improvement in predictability (third and fourth rows of Table 4). The street options demonstrate much better predictability than axial lines, both individually and overall by mean, and the predictability for some areas goes up to 0.7.
The predictability of axial lines is significantly influenced by the morphology or shape of the areas (Table 5). The predictability of axial maps for grid-like areas is pretty good, and can be compared to that of streets. However, the predictability of axial maps decreases dramatically for deformed-grid and irregular areas. We notice in the meantime that the predictability of both named and natural streets is rather stable, even with the morphological change. Overall, both named and natural streets have a better predictability than axial lines.
| Areas | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Axial lines | 0.17 | 0.00 | 0.04 | 0.06 | 0.11 | 0.00 | 0.12 | 0.13 | 0.18 | 0.06 | 0.09 |
| Named streets | 0.50 | 0.70 | 0.18 | 0.30 | 0.47 | 0.02 | 0.25 | 0.44 | 0.24 | 0.17 | 0.33 |
| Natural streets | 0.35 | 0.39 | 0.24 | 0.57 | 0.56 | 0.45 | 0.38 | 0.33 | 0.23 | 0.31 | 0.38 |

Table 4: Predictability for traffic flow with the 10 sampled areas
Figure 8: (color online) Geographic distribution of sampled areas: (a) 10 most populated and intensively urbanized areas, and (b) 9 sampled areas according to morphology (Note: the network patterns of the sampled areas are shown at a detailed level in Appendix B)
Figure 9: (color online) Grid-like (a), deformed-grid (b) and irregular (c) samples
| Areas | G1 | G2 | G3 | D1 | D2 | D3 | I1 | I2 | I3 | Mean |
|---|---|---|---|---|---|---|---|---|---|---|
| Axial lines | 0.4 | 0.32 | 0.6 | 0.22 | 0.18 | 0.3 | 0.02 | 0.16 | 0.08 | 0.25 |
| Named streets | 0.43 | 0.59 | 0.57 | 0.58 | 0.62 | 0.57 | 0.39 | 0.44 | 0.43 | 0.49 |
| Natural streets | 0.35 | 0.69 | 0.61 | 0.56 | 0.6 | 0.61 | 0.55 | 0.56 | 0.41 | 0.53 |

Table 5: Predictability for traffic flow with the 9 sampled areas according to morphology (Note: G* = grid-like samples, D* = deformed-grid samples, I* = irregular samples)
| Areas | G1 | G2 | G3 | D1 | D2 | D3 | I1 | I2 | I3 | Mean |
|---|---|---|---|---|---|---|---|---|---|---|
| Axial lines | 0.59 | 0.56 | 0.50 | 0.02 | 0.23 | 0.13 | 0.12 | 0.17 | 0.01 | 0.26 |
| Named streets | 0.83 | 0.55 | 0.81 | 0.51 | 0.52 | 0.56 | 0.59 | 0.54 | 0.23 | 0.59 |
| Natural streets | 0.63 | 0.67 | 0.78 | 0.82 | 0.49 | 0.47 | 0.56 | 0.47 | 0.35 | 0.60 |

Table 6: Intelligibilities of the 9 sampled areas according to morphology (Note: G* = grid-like samples, D* = deformed-grid samples, I* = irregular samples)

## Acknowledgments

We would like further to thank the three referees and Itzhak Omer for providing some useful comments, and Wu Chen for his support in the course of the study.
## Appendix A Algorithms for forming street-based topologies
Algorithm I (for forming natural streets by joining segments with good continuation)
```
Input:  Street segments-based shape file
Output: Street segments-based shape file with a new field named StrokeID

while (not last segment) do
    Start with the first endpoint of the first segment in the attribute table of roads
    if (the current segment is processed) then exit end
    Use a spatial filter to search for all segments intersecting that point
    Calculate the deflection angles between the found segments and the start one
    Get the minimum deflection angle and compare it with the threshold
    if (the minimum deflection angle < threshold) then
        Concatenate the segment with the least angle onto the start segment
        Change the status of that segment to processed
    end
    if (the route is traced to the end) then
        Start with the other endpoint of the current processing line to repeat the process
    end
    Pick out another unprocessed segment to continue the process
end while
```
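The geometric core of Algorithm I is the deflection-angle test at shared endpoints. The following is a minimal Python sketch of that test (our own illustration, not the authors' implementation; the 60-degree threshold is borrowed from Algorithm II as an assumption):

```python
# Sketch of the deflection-angle test used to chain segments into strokes
# (natural streets). A segment is a pair of endpoints ((x1, y1), (x2, y2)).
import math

def deflection(seg_a, seg_b):
    # Angle in degrees between the direction vectors of two segments.
    ax, ay = seg_a[1][0] - seg_a[0][0], seg_a[1][1] - seg_a[0][1]
    bx, by = seg_b[1][0] - seg_b[0][0], seg_b[1][1] - seg_b[0][1]
    cosang = (ax*bx + ay*by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

def best_continuation(current, candidates, threshold=60.0):
    # Return the candidate with the smallest deflection angle, or None
    # if even the best candidate bends more than the threshold.
    if not candidates:
        return None
    angle, seg = min((deflection(current, c), c) for c in candidates)
    return seg if angle < threshold else None
```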
Algorithm II (for extracting individual streets based on a unique name) (Note: most of the following lines serve to assign unnamed segments to neighboring segments according to good continuation)
```
Input:  Street segments-based shape file with street names
Output: Street segments-based shape file with a new field named NamedID
```
```
While (not last segment) do
    Start from the first unnamed segment (U)
    Search for connected segments on one side (e.g. left first) of segment U
    If (its neighboring streets have names) then
        Loop through all found segments to get the minimum deflection angle (A1)
        Store the name of the segment (N1) with angle A1
    Else
        Continue to search from the next connected unnamed segment
    End if
    Search for connected segments on the other side of segment U
    If (its neighboring streets have names) then
        Loop through all found segments to get the minimum deflection angle (A2)
        Store the name of the segment (N2) with angle A2
    Else
        Continue to search from the next connected unnamed segment
    End if
    If (A1 < A2) AND (A1 < 60) then
        Assign the name of segment N1 to segment U
    Else If (A2 < A1) AND (A2 < 60) then
        Assign the name of segment N2 to segment U
    Else
        Assign a random numeric number to segment U
    End if
    Pick out the next unprocessed unnamed segment
End while
```

## Appendix B: (color online) 10 sampled areas according to districts
## Appendix C: (color online) 9 sampled areas according to morphology
Hai Yen Nguyen
Frederic Dias
CMLA, ENS Cachan, CNRS, PRES UniverSud, 61, avenue du President Wilson, 94230 Cachan cedex, France
## 1 Introduction
As emphasized by Helfrich & Melville [20] in their recent survey article on long nonlinear internal waves, observations over the past four decades have demonstrated that internal solitary-like waves are ubiquitous features of coastal oceans and marginal seas. Solitary waves are long nonlinear waves consisting of a localized central core and a decaying tail. They arise whenever there is a balance between dispersion and nonlinearity. They have been proved to exist in specific parameter regimes, and are often conveniently modelled by Korteweg-de Vries (KdV) equations or Boussinesq systems. As explained by Evans & Ford [16], the differences between \"free-surface\" and \"rigid lid\" internal waves are small for internal waves of interest. Therefore the \"rigid lid\" configuration remains popular for investigating internal waves even if it does not allow for generalized solitary waves, which are long nonlinear waves consisting of a localized central core and periodic non-decaying oscillations extending to infinity. Such waves arise whenever there is a resonance between a linear long wave speed of one wave mode in the system and a linear short wave speed of another mode [17].
When dealing with interfacial waves with rigid boundaries in the framework of the full Euler equations, the amplitude of the central core is bounded by the configuration. In the case of solitary waves, it is known that when the wave speed approaches a critical value the solution reaches a maximum amplitude while becoming indefinitely wider; these waves are often called 'table-top' waves. In the limit as the width of the central core becomes infinite, the wave becomes a front [13]. Such behavior is conveniently modelled by an extended Korteweg-de Vries (eKdV) equation, i.e. a KdV equation with a cubic nonlinear term [18]. Sometimes the terminology'modified KdV equation' or 'Gardner equation' is also used. KdV-type equations only describe one-way wave propagation. The natural extension toward two-way wave propagation is the class of Boussinesq systems. We will derive two sets of Boussinesq systems, one with quadratic nonlinearities and another one with quadratic and cubic nonlinearities. We will use the terminology 'extended' for a Boussinesq system with both quadratic and cubic terms. Some questions arise when dealing with 'table-top' solitary waves. What are their properties? How do they interact? The main goal of this work is to learn more about these waves by studying and integrating numerically an extended Boussinesq system which allows a comparison between fronts and the more standard solitary waves. More general models have also been derived by Choi & Camassa [9]. They considered shallow water as well as deep water configurations. In the shallow water case, their set of equations is the two-layer version of the Green-Naghdi equations. The equations derived in [9] were recently extended to the free-surface configuration [2]. Solitary waves for two-layer flows have also been computed numerically as solutions to the full incompressible Euler equations in the presence of an interface by various authors - see for example [22]. Similarly fronts have been computed for example in [13, 14].
The paper is organized as follows. In SS 2, we present the governing equations and the corresponding boundary conditions. A first Boussinesq system of three equations is derived in SS 3. Then it is shown in SS 4 how to reduce this system to a system of two equations, one for the evolution of the interface shape and the other one for the evolution of a combination of the horizontal velocities in each layer. The numerical scheme and the numerical solutions are described in SS 5. Results are shown for the propagation of a single wave, for the co-propagation of two waves and for the collision of two waves of equal as well as unequal sizes. When the square of the depth ratio is close to the density ratio, the coefficients of the quadratic nonlinearities become small and cubic nonlinearities must be considered. An extended Boussinesq system is derived in SS 6. Numerical solutions of the extended Boussinesq system are described in SS 7. In particular, the collision of 'table-top' waves is considered. A short conclusion is given in SS 8. In the Appendices, we provide very accurate results for wave run-up and phase shift, as well as some intermediate steps in the derivation of the extended Boussinesq system.
## 2 Governing equations
The origin of the systems of partial differential equations that will be derived below is explained in this section. The methods are standard, but to our knowledge some of these equations are derived for the first time.
Waves at the interface between two fluids are considered. The bottom as well as the upper boundary are assumed to be flat and rigid. A sketch is given in Figure 1. The analysis is restricted to two-dimensional flows. In other words, there is only one horizontal direction, \\(x^{*}\\), in addition to the vertical direction, \\(z^{*}\\). The interface is described by \\(z^{*}=\\eta^{*}(x^{*},t^{*})\\). The bottom layer \\(\\Omega_{t^{*}}=\\{(x^{*},z^{*}):x^{*}\\in\\mathbb{R},-h<z^{*}<\\eta^{*}(x^{*},t^{ *})\\}\\) and the upper layer \\(\\Omega^{\\prime}_{t^{*}}=\\{(x^{*},z^{*}):x^{*}\\in\\mathbb{R},\\eta^{*}(x^{*},t^{ *})<z^{*}<h^{\\prime}\\}\\) are filled with inviscid, incompressible fluids, with densities \\(\\rho\\) and \\(\\rho^{\\prime}\\) respectively. All quantities related to the upper layer are denoted with a prime. All physical variables are denoted with a star.
In addition the flows are assumed to be irrotational. Therefore we are dealing with potential flows and only stable configurations with \\(\\rho>\\rho^{\\prime}\\) are considered. Velocity potentials \\(\\phi^{*}=\\phi^{*}((x^{*},z^{*}),t^{*})\\) in \\(\\Omega_{t^{*}}\\) and \\(\\phi^{*^{\\prime}}=\\phi^{*^{\\prime}}((x^{*},z^{*}),t^{*})\\) in \\(\\Omega^{\\prime}_{t^{*}}\\) are introduced, so that the velocity vectors \\(\\mathbf{v}^{*}\\) and \\(\\mathbf{v}^{*^{\\prime}}\\) are given by
\\[\\mathbf{v}^{*} = \
abla\\phi^{*}, \\tag{1}\\] \\[\\mathbf{v}^{*^{\\prime}} = \
abla\\phi^{*^{\\prime}}. \\tag{2}\\]
Figure 1: Sketch of solitary waves propagating at the interface between two fluid layers with different densities \\(\\rho^{\\prime}\\) and \\(\\rho\\). The top and the bottom of the fluid domain are flat and rigid boundaries, located respectively at \\(z^{*}=h^{\\prime}\\) and \\(z^{*}=-h\\). (a) Sketch of a solitary wave of depression in physical space; (b) Sketch of a solitary wave of elevation in dimensionless coordinates, with the thickness \\(h\\) of the bottom layer taken as unit length and the long wave speed \\(c\\) as unit velocity. The dashed lines represent arbitrary fluid levels \\(\\theta\\) and \\(1+H-\\theta^{\\prime}\\) in each layer. The dimensionless number \\(H\\) is equal to \\(h^{\\prime}/h\\).
Writing the continuity equations in each layer leads to
\\[\\phi^{*}_{x^{*}x^{*}}+\\phi^{*}_{z^{*}z^{*}} = 0\\quad\\mbox{for}\\ \\ -h<z^{*}<\\eta^{*}(x^{*},t^{*}), \\tag{3}\\] \\[\\phi^{*^{\\prime}}_{x^{*}x^{*}}+\\phi^{*^{\\prime}}_{z^{*}z^{*}} = 0\\quad\\mbox{for}\\ \\ \\eta^{*}(x^{*},t^{*})<z^{*}<h^{\\prime}. \\tag{4}\\]
The boundary of the system \\(\\{\\Omega_{t^{*}},\\Omega^{\\prime}_{t^{*}}\\}\\) has two parts: the flat bottom \\(z^{*}=-h\\) and the flat roof \\(z^{*}=h^{\\prime}\\). The impermeability conditions along these rigid boundaries give
\\[\\phi^{*}_{z^{*}} = 0\\quad\\mbox{at}\\ \\ z^{*}=-h, \\tag{5}\\] \\[\\phi^{*^{\\prime}}_{z^{*}} = 0\\quad\\mbox{at}\\ \\ z^{*}=h^{\\prime}. \\tag{6}\\]
The kinematic conditions along the interface, namely \\(D(\\eta^{*}-z^{*})/Dt^{*}=0\\), give
\\[\\eta^{*}_{t^{*}}=\\phi^{*}_{z^{*}}-\\phi^{*}_{x}\\eta^{*}_{x}\\ \\ \\ \\mbox{at}\\ \\ z^{*}=\\eta^{*}(x^{*},t^{*}), \\tag{7}\\] \\[\\eta^{*}_{t^{*}}=\\phi^{*^{\\prime}}_{z^{*}}-\\phi^{*^{\\prime}}_{x} \\eta^{*}_{x}\\ \\ \\ \\mbox{at}\\ \\ z^{*}=\\eta^{*}(x^{*},t^{*}). \\tag{8}\\]
The dynamic boundary condition imposed on the interface, namely the continuity of pressure since surface tension effects are neglected, gives
\\[\\rho\\left(\\frac{\\partial\\phi^{*}}{\\partial t^{*}}+\\frac{1}{2}|\
abla\\phi^{*}|^ {2}+gz^{*}\\right)=\\rho^{\\prime}\\left(\\frac{\\partial\\phi^{*^{\\prime}}}{\\partial t ^{*}}+\\frac{1}{2}|\
abla\\phi^{*^{\\prime}}|^{2}+gz^{*}\\right)\\ \\ \\mbox{at}\\ \\ z^{*}=\\eta^{*}(x^{*},t^{*}), \\tag{9}\\]
where \\(g\\) is the acceleration due to gravity. The system of seven equations (3)-(9) represents the starting model for the study of wave propagation at the interface between two fluids. Combined with initial conditions or periodicity conditions, it is the classical interfacial wave problem, which has been studied for more than a century. A nice feature of this formulation is that the pressures in both layers have been removed. In some cases, it is advantageous to keep the pressures in the equations. For example, Bridges & Donaldson [8] in their study of the criticality of two-layer flows provide an appendix on the inclusion of the lid pressure in the calculation of uniform flows. In the next sections, we will derive simplified models based on certain additional assumptions on wave amplitude, wavelength and fluid depth.
## 3 System of three equations in the limit of long, weakly dispersive waves
The derivation follows closely that of [5] for a single layer. Let us now consider waves whose typical amplitude, \\(A\\), is small compared to the depth of the bottom layer \\(h\\), and whose typical wavelength, \\(\\ell\\), is large compared to the depth of the bottom layer1. Let us define the three following dimensionless numbers, with their characteristic magnitude:
Footnote 1: There is some arbitrariness in this choice since there are two fluid depths in the problem. We could have also chosen the depth of the top layer as reference depth. In fact, we implicitly make the assumption that the ratio of liquid depths is neither too small nor too large, without going into mathematical details. Models valid for arbitrary depth ratio have been derived for example by Choi & Camassa [9].
\\[\\alpha=\\frac{A}{h}\\ll 1,\\quad\\beta=\\frac{h^{2}}{\\ell^{2}}\\ll 1,\\quad S=\\frac{ \\alpha}{\\beta}=\\frac{A\\ell^{2}}{h^{3}}\\approx 1.\\]
Here \\(S\\) is the Stokes number. Let us also introduce the dimensionless density ratio \\(r\\) as well as the depth ratio \\(H\\):
\\[r=\\frac{\\rho^{\\prime}}{\\rho},\\quad H=\\frac{h^{\\prime}}{h}.\\]
Obviously \\(r\\) takes values between \\(0\\) and \\(1\\), the case \\(r=0\\) corresponding to water waves2 while the case \\(r\\approx 1\\) corresponds to two fluids with almost the same density such as an upper, warmer layer extending down to the interface with a colder, more saline layer. The depth ratio takes theoretical values between \\(0\\) and \\(\\infty\\) but as said above values \\(H\\ll 1\\) or \\(H\\gg 1\\) should be avoided in the framework of our weakly nonlinear analysis.
Footnote 2: In a recent paper, Kataoka [21] showed that when \\(H\\) is near unity, the stability of solitary waves changes drastically for small density ratios \\(r\\). Therefore one must be careful in evaluating the stability of air-water solitary waves. In other words, there may be differences between \\(r=0\\) and the true value \\(r=0.0013\\).
The procedure is most transparent when working with the variables scaled in such a way that the dependent quantities appearing in the problem are all of order one, while the assumptions about small amplitude and long wavelength appear explicitly connected with small parameters in the equations of motion. Such consideration leads to the scaled, dimensionless variables
\\[x^{*}=\\ell x,\\quad z^{*}=h(z-1),\\quad\\eta^{*}=A\\eta,\\quad t^{*}=\\ell t/c_{0}, \\quad\\phi^{*}=gA\\ell\\phi/c_{0},\\quad\\phi^{*^{\\prime}}=gA\\ell\\phi^{\\prime}/c_{0},\\]
where \\(c_{0}=\\sqrt{gh}\\). The speed \\(c_{0}\\), which represents the long wave speed in the limit \\(r\\to 0\\), is not necessarily the most natural choice for interfacial waves. The natural choice would be to take
\\[c_{0}=\\sqrt{gh}\\sqrt{\\frac{1-r}{1+r/H}},\\]
which is the speed of long waves in the configuration shown in Figure 1. It does not matter for the asymptotic expansions to be performed later.
In these new variables, the set of equations (3)-(9) becomes after reordering
\\[\\beta\\phi_{xx}+\\phi_{zz} = 0\\quad\\mbox{in}\\ \\ 0<z<1+\\alpha\\eta, \\tag{10}\\] \\[\\phi_{z} = 0\\quad\\mbox{on}\\ \\ z=0,\\] (11) \\[\\eta_{t}+\\alpha\\phi_{x}\\eta_{x}-\\frac{1}{\\beta}\\phi_{z} = 0\\quad\\mbox{on}\\ \\ z=1+\\alpha\\eta,\\] (12) \\[\\beta\\phi^{\\prime}_{xx}+\\phi^{\\prime}_{zz} = 0\\quad\\mbox{in}\\ \\ 1+\\alpha\\eta<z<1+H,\\] (13) \\[\\phi^{\\prime}_{z} = 0\\quad\\mbox{on}\\ \\ z=1+H,\\] (14) \\[\\eta_{t}+\\alpha\\phi^{\\prime}_{x}\\eta_{x}-\\frac{1}{\\beta}\\phi^{ \\prime}_{z} = 0\\quad\\mbox{on}\\ \\ z=1+\\alpha\\eta,\\] (15) \\[\\left(\\eta+\\phi_{t}+\\frac{1}{2}\\alpha\\phi^{2}_{x}+\\frac{1}{2} \\frac{\\alpha}{\\beta}\\phi^{2}_{z}\\right)=r\\left(\\eta+\\phi^{\\prime}_{t}+\\frac{ 1}{2}\\alpha\\phi^{{}^{\\prime}2}_{x}+\\frac{1}{2}\\frac{\\alpha}{\\beta}\\phi^{{}^{ \\prime}2}_{z}\\right)\\ \\ \\mbox{on}\\ \\ z=1+\\alpha\\eta. \\tag{16}\\]
We represent the potential \\(\\phi\\) as a formal expansion,
\\[\\phi((x,z),t)=\\sum_{m=0}^{\\infty}f_{m}(x,t)z^{m}.\\]
Demanding that \\(\\phi\\) formally satisfy Laplace's equation (10) leads to the recurrence relation
\\[(m+2)(m+1)f_{m+2}(x,t)=-\\beta(f_{m}(x,t))_{xx},\\ \\ \\forall m=0,1,2,\\ldots. \\tag{17}\\]
Let \\(F(x,t)=f_{0}(x,t)\\) denote the velocity potential at the bottom \\(z=0\\) and use (17) repeatedly to obtain
\\[f_{2k}(x,t) = \\frac{(-1)^{k}\\beta^{k}}{(2k)!}\\frac{\\partial^{2k}F(x,t)}{ \\partial x^{2k}},\\ \\ \\forall k=0,1,2,\\ldots,\\] \\[f_{2k+1}(x,t) = \\frac{(-1)^{k}\\beta^{k}}{(2k+1)!}\\frac{\\partial^{2k}f_{1}(x,t)}{ \\partial x^{2k}},\\ \\ \\forall k=0,1,2,\\ldots.\\]
Equation (11) implies that \\(f_{1}(x,t)=0\\), so
\\[f_{2k+1}(x,t)=0,\\ \\ \\forall k=0,1,2,\\ldots, \\tag{18}\\]
and therefore
\\[\\phi((x,z),t)=\\sum_{k=0}^{\\infty}\\frac{(-1)^{k}\\beta^{k}}{(2k)!}\\frac{ \\partial^{2k}F(x,t)}{\\partial x^{2k}}z^{2k}.\\]
Let \\(\\partial F(x,t)/\\partial x=u(x,t).\\) Substitute the latter representation into (12) to obtain
\\[\\eta_{t}+u_{x}+\\alpha(u\\eta)_{x}-\\frac{1}{6}\\beta u_{xxx}-\\frac{1}{2}\\alpha \\beta(\\eta u_{xx})_{x}+\\frac{1}{120}\\beta^{2}u_{xxxxx}+O(\\beta^{3})=0. \\tag{19}\\]Similarly we represent the potential \\(\\phi^{\\prime}\\) as a formal expansion,
\\[\\phi^{\\prime}((x,z),t)=\\sum_{m=0}^{\\infty}f^{\\prime}_{m}(x,t)(1+H-z)^{m}.\\]
Demanding that \\(\\phi^{\\prime}\\) formally satisfy Laplace's equation (13) leads to the recurrence relation
\\[(m+2)(m+1)f^{\\prime}_{m+2}(x,t)=-\\beta(f^{\\prime}_{m}(x,t))_{xx},\\ \\ \\forall m=0,1,2,\\ldots. \\tag{20}\\]
Let \\(F^{\\prime}(x,t)=f^{\\prime}_{0}(x,t)\\) denote the velocity potential on the roof \\(z=1+H\\) and use (20) repeatedly to obtain
\\[f^{\\prime}_{2k}(x,t) = \\frac{(-1)^{k}\\beta^{k}}{(2k)!}\\frac{\\partial^{2k}F^{\\prime}(x,t) }{\\partial x^{2k}},\\ \\ \\forall k=0,1,2,\\ldots,\\] \\[f^{\\prime}_{2k+1}(x,t) = \\frac{(-1)^{k}\\beta^{k}}{(2k+1)!}\\frac{\\partial^{2k}f^{\\prime}_{ 1}(x,t)}{\\partial x^{2k}},\\ \\ \\forall k=0,1,2,\\ldots.\\]
Equation (14) implies that \\(f^{\\prime}_{1}(x,t)=0\\), so
\\[f^{\\prime}_{2k+1}(x,t)=0,\\ \\ \\forall k=0,1,2,\\ldots, \\tag{21}\\]
and therefore
\\[\\phi^{\\prime}((x,z),t)=\\sum_{k=0}^{\\infty}\\frac{(-1)^{k}\\beta^{k}}{(2k)!}\\frac {\\partial^{2k}F^{\\prime}(x,t)}{\\partial x^{2k}}(1+H-z)^{2k}.\\]
Let \\(\\partial F^{\\prime}(x,t)/\\partial x=u^{\\prime}(x,t).\\) Substitute the latter representation into (15) to obtain
\\[\\eta_{t}-Hu^{\\prime}_{x}+\\alpha(u^{\\prime}\\eta)_{x}+\\frac{1}{6} \\beta H^{3}u^{\\prime}_{xxx}-\\frac{1}{2}\\alpha\\beta H^{2}(\\eta u^{\\prime}_{xx}) _{x}\\] \\[-\\frac{1}{120}\\beta^{2}H^{5}u^{\\prime}_{xxxxx}+O(\\beta^{3}) = 0. \\tag{22}\\]
It is important at this stage that \\(H=O(1)\\).
Substitute the representations for \\(\\phi\\) and \\(\\phi^{\\prime}\\) into the dynamic condition (16) to obtain the third equation
\\[(1-r)\\eta+F_{t}-rF^{\\prime}_{t}-\\frac{1}{2}\\beta\\left(u_{xt}-rH^ {2}u^{\\prime}_{xt}\\right)\\] \\[-\\alpha\\beta\\eta(u_{xt}+rHu^{\\prime}_{xt})+\\frac{1}{24}\\beta^{2} \\left(u_{xxxt}-rH^{4}u^{\\prime}_{xxxt}\\right)\\]\\[+\\frac{1}{2}\\alpha(u^{2}-\\beta uu_{xx})-\\frac{1}{2}\\alpha r(u^{ \\prime 2}-\\beta H^{2}u^{\\prime}u^{\\prime}_{xx})+\\frac{1}{2}\\alpha\\beta(u^{2}_{ x}-rH^{2}u^{\\prime 2}_{x})+O(\\beta^{3})=0.\\]
Differentiating with respect to \\(x\\) yields
\\[(1-r)\\eta_{x}+u_{t}-ru^{\\prime}_{t}-\\frac{1}{2}\\beta(u_{xxt}-rH^ {2}u^{\\prime}_{xxt})+\\alpha(uu_{x}-ru^{\\prime}u^{\\prime}_{x})\\] \\[-\\alpha\\beta(\\eta u_{xt})_{x}-\\alpha\\beta rH(\\eta u^{\\prime}_{xt} )_{x}+\\frac{1}{24}\\beta^{2}(u_{xxxt}-rH^{4}u^{\\prime}_{xxxt})\\] \\[-\\frac{1}{2}\\alpha\\beta(uu_{xx}-rH^{2}u^{\\prime}u^{\\prime}_{xx}) _{x}+\\alpha\\beta(u_{x}u_{xx}-rH^{2}u^{\\prime}_{x}u^{\\prime}_{xx})+O(\\beta^{3} )=0. \\tag{23}\\]
The three equations (19),(22) and (23) provide a Boussinesq system of equations describing waves at the interface \\(\\eta(x,t)\\) between two fluid layers based on the horizontal velocities \\(u\\) and \\(u^{\\prime}\\) along the bottom and the roof, respectively. It is correct up to second order in \\(\\alpha\\), \\(\\beta\\).
One can derive a class of systems which are formally equivalent to the system we just derived. This will be accomplished by considering changes in the dependent variables and by making use of lower-order relations in higher-order terms. Toward this goal, begin by letting \\(w(x,t)\\) be the scaled horizontal velocity corresponding to the physical depth \\((1-\\theta)h\\) below the unperturbed interface, and \\(w^{\\prime}(x,t)\\) be the scaled horizontal velocity corresponding to the physical depth \\((H-\\theta^{\\prime})h\\) above the unperturbed interface. The ranges for the parameters \\(\\theta\\) and \\(\\theta^{\\prime}\\) are \\(0\\leq\\theta\\leq 1\\) and \\(0\\leq\\theta^{\\prime}\\leq H\\). Note that \\((\\theta,\\theta^{\\prime})=(0,0)\\) leads to \\(w=u\\) and \\(w^{\\prime}=u^{\\prime}\\), while \\((\\theta,\\theta^{\\prime})=(1,H)\\) leads to both velocities evaluated along the interface. A formal use of Taylor's formula with remainder shows that
\\[w=\\phi_{x}|_{z=\\theta} = \\left(F_{x}-\\frac{1}{2}\\beta F_{xxx}\\theta^{2}+\\frac{1}{24}\\beta ^{2}\\theta^{4}F_{xxxxx}\\right)+O(\\beta^{3})\\] \\[= u-\\frac{1}{2}\\beta\\theta^{2}u_{xx}+\\frac{1}{24}\\beta^{2}\\theta^ {4}u_{xxxx}+O(\\beta^{3})\\]
as \\(\\beta\\to 0\\). In Fourier space, the latter relationship may be written as
\\[\\hat{w}=\\left(1+\\frac{1}{2}\\beta\\theta^{2}k^{2}+\\frac{1}{24}\\beta^{2}\\theta^ {4}k^{4}\\right)\\hat{u}+O(\\beta^{3}).\\]
Inverting the positive Fourier multiplier yields
\\[\\hat{u} = \\left(1+\\frac{1}{2}\\beta\\theta^{2}k^{2}+\\frac{1}{24}\\beta^{2} \\theta^{4}k^{4}\\right)^{-1}\\hat{w}+O(\\beta^{3})\\] \\[= \\left(1-\\frac{1}{2}\\beta\\theta^{2}k^{2}+\\frac{5}{24}\\beta^{2} \\theta^{4}k^{4}\\right)\\hat{w}+O(\\beta^{3})\\]as \\(\\beta\\to 0.\\) Thus there appears the relationship
\\[u=w+\\frac{1}{2}\\beta\\theta^{2}w_{xx}+\\frac{5}{24}\\beta^{2}\\theta^{4}w_{xxxx}+O( \\beta^{3}). \\tag{24}\\]
Similarly
\\[w^{\\prime}=\\phi^{\\prime}_{x}|_{z=1+H-\\theta^{\\prime}} = \\left(F^{\\prime}_{x}-\\frac{1}{2}\\beta F^{\\prime}_{xxx}\\theta^{ \\prime 2}+\\frac{1}{24}\\beta^{2}F^{\\prime}_{xxxxx}\\theta^{\\prime 4}\\right)+O( \\beta^{3})\\] \\[= u^{\\prime}-\\frac{1}{2}\\beta\\theta^{\\prime 2}u^{\\prime}_{xx}+ \\frac{1}{24}\\beta^{2}\\theta^{\\prime 4}u^{\\prime}_{xxxx}+O(\\beta^{3})\\]
and
\\[\\hat{w^{\\prime}}=\\left(1+\\frac{1}{2}\\beta\\theta^{\\prime 2}k^{2}+\\frac{1}{24} \\beta^{2}\\theta^{\\prime 4}k^{4}\\right)\\hat{u^{\\prime}}+O(\\beta^{3}).\\]
Inverting the positive Fourier multiplier yields
\\[\\hat{u^{\\prime}}=\\left(1-\\frac{1}{2}\\beta\\theta^{\\prime 2}k^{2}+\\frac{5}{24} \\beta^{2}\\theta^{\\prime 4}k^{4}\\right)\\hat{w^{\\prime}}+O(\\beta^{3})\\]
and thus the relationship
\\[u^{\\prime}=w^{\\prime}+\\frac{1}{2}\\beta\\theta^{\\prime 2}w^{\\prime}_{xx}+ \\frac{5}{24}\\beta^{2}\\theta^{\\prime 4}w^{\\prime}_{xxxx}+O(\\beta^{3}). \\tag{25}\\]
Substitute the expressions (24) and (25) for \\(u\\) and \\(u^{\\prime}\\) into (19) and (22), respectively, to obtain
\\[\\eta_{t}+w_{x}+\\alpha(w\\eta)_{x}+\\frac{1}{2}\\beta\\left(\\theta^{2 }-\\frac{1}{3}\\right)w_{xxx}\\] \\[+\\frac{1}{2}\\alpha\\beta(\\theta^{2}-1)(\\eta w_{xx})_{x}+\\frac{5}{2 4}\\beta^{2}\\left(\\theta^{2}-\\frac{1}{5}\\right)^{2}w_{xxxxx}+O(\\beta^{3}) = 0 \\tag{26}\\] \\[\\eta_{t}-Hw^{\\prime}_{x}+\\alpha(w^{\\prime}\\eta)_{x}-\\frac{1}{2} \\beta H\\left(\\theta^{\\prime 2}-\\frac{1}{3}H^{2}\\right)w^{\\prime}_{xxx}\\] \\[+\\frac{1}{2}\\alpha\\beta\\left(\\theta^{\\prime 2}-H^{2}\\right)(\\eta w^{ \\prime}_{xx})_{x}-\\frac{5}{24}\\beta^{2}H\\left(\\theta^{\\prime 2}-\\frac{1}{5}H^{2} \\right)^{2}w^{\\prime}_{xxxxx}+O(\\beta^{3}) = 0.\\]
Substitute the expressions (24) and (25) for \\(u\\) and \\(u^{\\prime}\\) into (23) to obtain
\\[(1-r)\\eta_{x}+w_{t}-rw^{\\prime}_{t}+\\frac{1}{2}\\beta\\left[(\\theta ^{2}-1)w-r(\\theta^{\\prime 2}-H^{2})w^{\\prime}\\right]_{xxt}+\\alpha(ww_{x}-rw^{ \\prime}w^{\\prime}_{x})\\] \\[+\\frac{1}{24}\\beta^{2}\\left[(\\theta^{2}-1)(5\\theta^{2}-1)w_{xxxxt }-r(\\theta^{\\prime 2}-H^{2})(5\\theta^{\\prime 2}-H^{2})w^{\\prime}_{xxxxxt}\\right]\\]\\[-\\alpha\\beta\\left[(\\eta w_{xt})_{x}+rH(\\eta w^{\\prime}_{xt})_{x} \\right]+\\frac{1}{2}\\alpha\\beta\\left[(\\theta^{2}-1)ww_{xxx}-r(\\theta^{\\prime 2}-H^{2})w^{ \\prime}w^{\\prime}_{xxx}\\right]\\] \\[+\\frac{1}{2}\\alpha\\beta\\left[(\\theta^{2}+1)w_{x}w_{xx}-r(\\theta^ {\\prime 2}+H^{2})w^{\\prime}_{x}w^{\\prime}_{xx}\\right]+O(\\beta^{3}) = 0.\\]
The system of three equations (26)-(28) is formally equivalent to the previous system but it allows one to choose the fluid levels \\(\\theta\\) and \\(\\theta^{\\prime}\\) as reference for the horizontal velocities. Among all these systems that model the same physical problem one can select those with the best dispersion relations. Neglecting terms of \\(O(\\alpha^{2},\\beta^{2},\\alpha\\beta)\\), the system (26)-(28) reduces to
\\[\\eta_{t}+w_{x}+\\alpha(w\\eta)_{x}+\\frac{1}{2}\\beta(\\theta^{2}- \\frac{1}{3})w_{xxx} = 0\\] \\[\\eta_{t}-Hw^{\\prime}_{x}+\\alpha(w^{\\prime}\\eta)_{x}-\\frac{1}{2} \\beta H(\\theta^{\\prime 2}-\\frac{1}{3}H^{2})w^{\\prime}_{xxx} = 0 \\tag{29}\\] \\[(1-r)\\eta_{x}+w_{t}-rw^{\\prime}_{t}+\\frac{1}{2}\\beta[(\\theta^{2 }-1)w-r(\\theta^{\\prime 2}-H^{2})w^{\\prime}]_{xxt}+\\alpha(ww_{x}-rw^{\\prime}w^{ \\prime}_{x}) = 0\\]
## 4 System of two equations
The systems obtained in the previous section are not appropriate for numerical computations. One would like to obtain a system of two evolution equations for the variables \\(\\eta\\) and \\(W=w-rw^{\\prime}\\). In fact, Benjamin and Bridges [3] (see also [12, 11, 1] ) formulated the interfacial wave problem using Hamiltonian formalism and showed that the canonical variables for interfacial waves are \\(\\eta^{*}(x^{*},t^{*})\\) and \\(\\rho\\phi^{*}(x^{*},\\eta^{*},t^{*})-\\rho^{\\prime}\\phi^{*^{\\prime}}(x^{*},\\eta^{ *},t^{*})\\).
At leading order, the first two equations of system (29) give
\\[\\left\\{\\begin{array}{l}\\eta_{t}+w_{x}=0,\\\\ \\eta_{t}-Hw^{\\prime}_{x}=0.\\end{array}\\right.\\]
Assuming the fluids to be at rest as \\(x\\rightarrow\\infty\\), one has \\(w=-Hw^{\\prime}\\). Therefore
\\[w=\\frac{H}{r+H}W+O(\\beta),\\quad w^{\\prime}=\\frac{-1}{r+H}W+O(\\beta). \\tag{30}\\]
Adding \\(H\\) times the first equation to \\(r\\) times the second equation of system (29) yields
\\[\\begin{array}{l}(r+H)\\eta_{t}+H(w-rw^{\\prime})_{x}+\\alpha[(Hw+rw^{\\prime}) \\eta]_{x}\\\\ \\qquad+\\frac{H}{2}\\beta\\Big{[}(\\theta^{2}-\\frac{1}{3})w_{xxx}-r\\Big{(}\\theta^ {\\prime 2}-\\frac{1}{3}H^{2}\\Big{)}w^{\\prime}_{xxx}\\Big{]}=0.\\end{array} \\tag{31}\\]Using (30) and neglecting higher-order terms, one obtains
\\[\\eta_{t}=-\\frac{H}{r+H}W_{x}-\\alpha\\frac{H^{2}-r}{(r+H)^{2}}(W\\eta)_{x}-\\beta \\left(\\frac{1}{2}\\frac{H^{2}S}{(r+H)^{2}}+\\frac{1}{3}\\frac{H^{2}(1+rH)}{(r+H)^{2 }}\\right)W_{xxx},\\]
where
\\[S=(\\theta^{2}-1)+\\frac{r}{H}\\left(\\theta^{\\prime 2}-H^{2}\\right).\\]
In the third equation of system (29), the term with the \\(xxt-\\)derivatives can be written as
\\[\\frac{1}{2}\\beta\\left[(\\theta^{2}-1)w_{xxt}-r(\\theta^{\\prime 2}-H^{2})w^{ \\prime}_{xxt}\\right]=\\frac{1}{2}\\beta\\frac{HS}{r+H}W_{xxt}.\\]
The quadratic terms of the third equation of system (29) can be written as
\\[\\alpha(ww_{x}-rw^{\\prime}w^{\\prime}_{x})=\\alpha\\frac{H^{2}-r}{(r+H)^{2}}WW_{x}.\\]
Then the third equation of system (29) becomes
\\[W_{t}=-(1-r)\\eta_{x}-\\frac{1}{2}\\beta\\frac{HS}{r+H}W_{xxt}-\\alpha\\frac{H^{2}-r }{(r+H)^{2}}WW_{x}.\\]
The final system of two equations for interfacial waves in the limit of long, weakly dispersive waves, can be written in terms of the horizontal velocities at arbitrary fluid levels as (in dimensionless form)
\\[\\left\\{\\begin{array}{l}\\eta_{t}=-\\frac{H}{r+H}W_{x}-\\alpha\\frac{H^{2}-r}{(r +H)^{2}}(W\\eta)_{x}-\\beta\\left(\\frac{1}{2}\\frac{H^{2}S}{(r+H)^{2}}+\\frac{1}{3 }\\frac{H^{2}(1+rH)}{(r+H)^{2}}\\right)W_{xxx}\\\\ W_{t}=-(1-r)\\eta_{x}-\\alpha\\frac{H^{2}-r}{(r+H)^{2}}WW_{x}-\\frac{1}{2}\\beta \\frac{HS}{r+H}W_{xxt},\\end{array}\\right. \\tag{32}\\]
or as (in physical variables)
\\[\\left\\{\\begin{array}{l}\\eta^{*}_{t^{*}}=-hd_{1}W^{*}_{x^{*}}-d_{4}(W^{*} \\eta^{*})_{x^{*}}-h^{3}d_{2}W^{*}_{x^{*}x^{*}x^{*}},\\\\ W^{*}_{t^{*}}=-g(1-r)\\eta^{*}_{x^{*}}-d_{4}W^{*}W^{*}_{x^{*}}-h^{2}d_{3}W^{*}_{ x^{*}x^{*}t^{*}},\\end{array}\\right. \\tag{33}\\]
where
\\[d_{1}=\\frac{H}{r+H},\\quad d_{2}=\\frac{H^{2}}{2(r+H)^{2}}\\left(S+\\frac{2}{3}(1 +rH)\\right),\\quad d_{3}=\\frac{1}{2}Sd_{1},\\quad d_{4}=\\frac{H^{2}-r}{(r+H)^{2 }}. \\tag{34}\\]
Notice that Choi & Camassa [9] also derived a system of two equations (see their equations (3.33) and (3.34)), but it is different from ours. In particular, their coefficient \\(d_{2}\\) is equal to \\(0\\), and their equation for \\(W_{t}\\) possesses an extra quadratic term \\(\\eta\\eta_{x}\\). The reason is that their '\\(W\\)' is the mean horizontal velocity through the upper layer. The value of \\(S\\) which best approximates the Choi & Camassa equations is \\(S=-\\frac{2}{3}(1+rH)\\). Indeed the coefficient \\(d_{2}\\) then vanishes. This particular value for \\(S\\) can be explained as follows. The leading order correction to the horizontal velocity is given by
\\[w(z)=u-\\tfrac{1}{2}\\beta z^{2}u_{xx}.\\]
The value of \\(z\\), say \\(z=\\theta\\), for which the mean velocity
\\[\\overline{w}=\\int_{0}^{1}w(z)\\,dz\\]
is equal to \\(w(\\theta)\\) is given by \\(\\theta=1/\\sqrt{3}\\). Similarly, one finds \\(\\theta^{\\prime}=(1/\\sqrt{3})H\\) for the upper layer. Therefore \\(S=-\\frac{2}{3}(1+rH)\\).
Recall that the scaling that led to our Boussinesq system is given by
\\[\\frac{x^{*}}{h}=\\frac{x}{\\sqrt{\\beta}},\\quad\\frac{t^{*}}{h/c_{0}}=\\frac{t}{ \\sqrt{\\beta}},\\quad\\frac{\\eta^{*}}{h}=\\alpha\\eta,\\quad\\frac{W^{*}}{gh/c_{0}}= \\alpha W,\\]
with \\(c_{0}=\\sqrt{gh}\\), \\(\\alpha\\ll 1\\), \\(\\beta\\ll 1\\) and \\(\\alpha=O(\\beta)\\). Linearizing system (33) and looking for solutions \\((\\eta^{*},W^{*})\\) proportional to \\(\\exp(ikx^{*}-i\\omega t^{*})\\) leads to the dispersion relation
\\[\\frac{\\omega^{2}}{k^{2}}=\\frac{gh(1-r)(d_{1}-d_{2}k^{2}h^{2})}{1-d_{3}k^{2}h^ {2}}.\\]
Plots of the dispersion relation are given in the next section. Since \\(0\\leq\\theta\\leq 1\\) and \\(0\\leq\\theta^{\\prime}\\leq H\\), the definition of \\(S\\) implies that
\\[-1-rH\\leq S\\leq 0.\\]
It follows that \\(d_{3}\\leq 0\\) and therefore the denominator \\(1-d_{3}h^{2}k^{2}\\) is positive. In order to have well-posedness (that is \\(\\omega^{2}/k^{2}\\) positive for all values of \\(k\\)), \\(d_{2}\\) must be negative, which is the case if \\(S\\leq-\\frac{2}{3}(1+rH)\\). Finally the condition we want to impose on \\(S\\) is that
\\[-(1+rH)\\leq S\\leq-\\frac{2}{3}(1+rH). \\tag{35}\\]
It is satisfied if one takes the horizontal velocities on the bottom and on the roof (\\(S=-(1+rH)\\)) or the mean horizontal velocities in the bottom and upper layers (\\(S=-\\frac{2}{3}(1+rH)\\)), but it is not if one takes the horizontal velocities along the interface (\\(S=0\\)).
## 5 The numerical scheme and numerical solutions
In order to integrate numerically the Boussinesq system (33), we introduce a slightly different change of variables, where the stars still denote the physical variables and no new notation is introduced for the dimensionless variables:
\\[x=\\frac{x^{*}}{h},\\ \\ \\eta=\\frac{\\eta^{*}}{h},\\ \\ t=\\frac{c}{h}t^{*},\\ \\ W=\\frac{W^{*}}{c},\\ \\ \\ \\mbox{with}\\ \\ c^{2}=gh\\frac{H(1-r)}{r+H}.\\]
The system (33) becomes
\\[\\left\\{\\begin{aligned} &\\eta_{t}=-d_{1}W_{x}-d_{4}(W\\eta)_{x}-d_{2}W_{xxx }\\\\ & W_{t}=-\\frac{1}{d_{1}}\\eta_{x}-d_{4}WW_{x}-d_{3}W_{xxt}\\end{aligned} \\right., \\tag{36}\\]
with dispersion relation
\\[\\frac{\\omega^{2}}{k^{2}}=\\frac{d_{1}-d_{2}k^{2}}{d_{1}(1-d_{3}k^{2})}. \\tag{37}\\]
As \\(k\\to 0\\), \\(\\omega/k\\to 1\\). As \\(k\\rightarrow\\infty\\),
\\[\\frac{\\omega^{2}}{k^{2}}\\rightarrow\\frac{d_{2}}{d_{1}d_{3}}=1+\\frac{2(1+rH)} {3S}.\\]
Typical plots of the dispersion relation (37) are given in Figure 2. Comparisons between the approximate and the exact dispersion relations, given by
\\[\\frac{\\omega^{2}}{k^{2}}=\\frac{\\tanh k\\tanh kH}{d_{1}k(\\tanh kH+r\\tanh k)}\\]
are also shown. A very good agreement is found for small \(k\).
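Both curves in Figure 2 are elementary to evaluate; a minimal sketch (ours, assuming NumPy; \(k=0\) must be excluded in the full relation) of the squared phase speed \(\omega^{2}/k^{2}\) for the two relations reads:

```python
# Phase speed squared, (omega/k)^2, for the Boussinesq model (37) and for
# the full linearized two-layer problem, as compared in Figure 2.
import numpy as np

def c2_boussinesq(k, d1, d2, d3):
    return (d1 - d2 * k**2) / (d1 * (1.0 - d3 * k**2))

def c2_full(k, r, H, d1):
    # Valid for k > 0; the k -> 0 limit of both relations is 1.
    th, thH = np.tanh(k), np.tanh(k * H)
    return th * thH / (d1 * k * (thH + r * th))
```

Taking the Fourier transform of the system (36) gives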
\\[\\left\\{\\begin{aligned} &\\hat{\\eta}_{t}=(d_{2}k^{2}-d_{1})ik\\hat{W}-d_{4 }ik\\widehat{W}\\eta\\\\ &\\hat{W}_{t}=-\\frac{1}{d_{1}(1-d_{3}k^{2})}ik\\hat{\\eta}-\\frac{d_ {4}}{2(1-d_{3}k^{2})}ik\\widehat{W^{2}}\\end{aligned}\\right..\\]
The system of differential equations is solved by a pseudo-spectral method in space with a number \(N\) of Fourier modes on a periodic domain of length \(L\). For most applications, \(N=1024\) was found to be sufficient. The time integration is performed using the classical fourth-order explicit Runge-Kutta scheme. The time step \(\Delta t\) was optimized by trial and error and was found to scale like \(1/N\).
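To make the scheme concrete, here is a minimal sketch of the Fourier-space right-hand side of system (36) and one Runge-Kutta step (our own illustration with assumed names, not the code used for the computations reported here):

```python
# Pseudo-spectral evaluation of system (36) in Fourier space and one
# classical RK4 step; the coefficients d1..d4 follow (34).
import numpy as np

def coeffs(r, H, S):
    d1 = H / (r + H)
    d2 = H**2 / (2.0 * (r + H)**2) * (S + 2.0/3.0 * (1.0 + r*H))
    d3 = 0.5 * S * d1
    d4 = (H**2 - r) / (r + H)**2
    return d1, d2, d3, d4

def rhs(eta_hat, W_hat, k, d1, d2, d3, d4):
    # Nonlinear products are formed in physical space, derivatives in Fourier space.
    eta = np.fft.ifft(eta_hat).real
    W = np.fft.ifft(W_hat).real
    deta = (d2 * k**2 - d1) * 1j * k * W_hat - d4 * 1j * k * np.fft.fft(W * eta)
    dW = (-(1.0/d1) * 1j * k * eta_hat
          - 0.5 * d4 * 1j * k * np.fft.fft(W**2)) / (1.0 - d3 * k**2)
    return deta, dW

def rk4_step(eta_hat, W_hat, dt, k, c):
    a1, b1 = rhs(eta_hat, W_hat, k, *c)
    a2, b2 = rhs(eta_hat + 0.5*dt*a1, W_hat + 0.5*dt*b1, k, *c)
    a3, b3 = rhs(eta_hat + 0.5*dt*a2, W_hat + 0.5*dt*b2, k, *c)
    a4, b4 = rhs(eta_hat + dt*a3, W_hat + dt*b3, k, *c)
    return (eta_hat + dt/6.0*(a1 + 2*a2 + 2*a3 + a4),
            W_hat + dt/6.0*(b1 + 2*b2 + 2*b3 + b4))

# Grid: N = 1024 Fourier modes on a periodic domain of length L = 512.
N, L = 1024, 512.0
x = L * np.arange(N) / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L/N)
```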
Since the main goal is to study the propagation and the collision of solitary waves, we first look for solitary wave solutions of the system (36). As opposed to the KdV equation, there are no explicit solitary wave solutions of the Boussinesq system that are physically relevant. Therefore we look for an approximate solitary wave solution to (36) as in [4] (see also [15] for the existence of solitary wave solutions). The leading-order terms give
\\[\\eta_{t}=-d_{1}W_{x},\\quad W_{t}=-\\frac{1}{d_{1}}\\eta_{x}.\\]
A solution representing a right-running wave is
\\[W(x-t)=\\frac{1}{d_{1}}\\eta(x-t).\\]
Let us look for solutions of system (36) in the form
\\[W(x,t)=\\frac{1}{d_{1}}[\\eta(x,t)+M(x,t)],\\]
where \\(M\\) is assumed to be small compared to \\(\\eta\\) and \\(W\\). Substituting the expression for \\(W\\) into (36) and neglecting higher-order terms yields
\\[\\left\\{\\begin{aligned} \\eta_{t}&=-\\eta_{x}-M_{x}- \\frac{d_{4}}{d_{1}}(\\eta^{2})_{x}-\\frac{d_{2}}{d_{1}}\\eta_{xxx}\\\\ \\eta_{t}&=-\\eta_{x}-M_{t}-\\frac{1}{2}\\frac{d_{4}}{d _{1}}(\\eta^{2})_{x}-d_{3}\\eta_{xxt}\\end{aligned}\\right.. \\tag{38}\\]
Figure 2: Dispersion relation (37) for the Boussinesq system (36) with \\(S=-1-rH\\), \\(r=0.9\\): (a) \\(H=1.2\\), (b) \\(H=0.8\\). The dashed curves represent the dispersion relation for the linearized interfacial wave equations, without the long wave assumption (see for example [22]).
Assuming that the solitary wave goes to the right, one has \\(M_{t}\\approx-M_{x}.\\) Therefore
\\[M_{x}=-\\frac{1}{4}\\frac{d_{4}}{d_{1}}(\\eta^{2})_{x}-\\frac{1}{2}\\frac{d_{2}}{d_{1} }\\eta_{xxx}+\\frac{1}{2}d_{3}\\eta_{xxt}.\\]
Substituting the expression for \\(M_{x}\\) into one of the equations of system (38) yields
\\[\\eta_{t}+\\eta_{x}+\\frac{3d_{4}}{4d_{1}}(\\eta^{2})_{x}+\\frac{d_{2}}{2d_{1}}\\eta _{xxx}+\\frac{d_{3}}{2}\\eta_{xxt}=0. \\tag{39}\\]
This is essentially the model equation that was studied in [7].
Looking for solitary wave solutions of (39) in the form
\\[\\eta=\\eta_{0}\\,\\mbox{sech}^{2}[\\kappa(x+x_{0}-Vt)] \\tag{40}\\]
leads to two equations for \\(\\kappa\\) and \\(V\\):
\\[\\left\\{\\begin{aligned} -V+1+2(d_{2}/d_{1})\\kappa^{2}-2d_{3} \\kappa^{2}V=0\\\\ d_{4}\\eta_{0}-4d_{2}\\kappa^{2}+4d_{1}d_{3}\\kappa^{2}V=0\\end{aligned} \\right..\\]
Solving for \\(\\kappa^{2}\\) and \\(V\\) yields
\\[\\kappa^{2}=\\frac{d_{4}\\eta_{0}}{4\\left(d_{2}-d_{1}d_{3}-\\frac{1}{2}d_{3}d_{4} \\eta_{0}\\right)},\\quad V=1+\\frac{d_{4}\\eta_{0}}{2d_{1}},\\]
and, assuming \\(M(\\pm\\infty)=0,\\) one obtains explicitly the following expression for \\(M\\):
\\[M=-\\frac{d_{4}}{4d_{1}}\\eta^{2}-\\frac{d_{2}}{2d_{1}}\\eta_{xx}+\\frac{d_{3}}{2} \\eta_{xt}.\\]
For a given pair \\((r,H),\\) one must only consider values of \\(\\eta_{0}\\) which are such that \\(\\kappa^{2}>0.\\) In addition one has the condition (35) on \\(S.\\) The sign of \\(d_{4}\\) depends on the relation between \\(H^{2}\\) and \\(r.\\) Let us assume first that \\(H^{2}>r\\) so that \\(d_{4}>0.\\) In order for the condition \\(\\kappa^{2}>0\\) to be satisfied, one needs
\\[\\eta_{0}\\left(d_{2}-d_{1}d_{3}-\\frac{1}{2}d_{3}d_{4}\\eta_{0}\\right)>0.\\]
The values of \\(\\eta_{0}\\) for which the left-hand side of the inequality vanishes are
\\[\\eta_{01}=0,\\quad\\eta_{02}=\\frac{4H(r+H)(1+rH)}{3(H^{2}-r)S}.\\]Since \\(S<0\\), \\(\\eta_{02}<0\\) and therefore \\(\\eta_{02}<\\eta_{01}\\). The coefficient of \\(\\eta_{0}^{2}\\) in the inequality is positive. Consequently one must have
\\[\\eta_{0}>\\eta_{01}=0\\quad\\mbox{or}\\quad\\eta_{0}<\\eta_{02}=\\frac{4H(r+H)(1+rH)}{3 (H^{2}-r)S}.\\]
This second branch is not acceptable since
\\[\\frac{4H(r+H)(1+rH)}{3(H^{2}-r)}>1+rH>-S>0.\\]
Therefore
\\[\\frac{4H(r+H)(1+rH)}{3(H^{2}-r)S}<-1,\\]
which gives an amplitude larger than the depth!
Similarly, when \\(H^{2}<r\\) one finds a second branch which is not acceptable. The summary of acceptable values for \\(\\eta_{0}\\) is given in the table
\\begin{tabular}{|l|l|} \\hline \\(H^{2}-r>0\\) & \\(0<\\eta_{0}<H\\) \\\\ \\hline \\(H^{2}-r<0\\) & \\(-1<\\eta_{0}<0\\) \\\\ \\hline \\end{tabular}
For a \"thick\" upper layer (\\(H^{2}>r\\)), the solitary waves are of elevation, while they are of depression for a \"thick\" bottom layer (\\(H^{2}<r\\)). The weakly nonlinear theory developed in the present section does not provide any bounds on the amplitude of the solitary waves. We have added a physical constraint based on the fact that both layers are bounded by flat solid boundaries. It is well-known in the framework of the full interfacial wave equations (see for example [22]) that the rigid top and bottom provide natural bounds on the solitary wave amplitudes. As the speed increases, the wave amplitude reaches a limit. In the next section, we extend our weakly nonlinear analysis to cubic terms so that this effect can be incorporated.
Once the approximate solitary wave (40) has been obtained, it is possible to make it cleaner by iterative filtering. This technique has been used by several authors, including [4, 6], and is explained in Appendix A. In order to study run-ups and phase shifts during collision of solitary waves, it is important to use clean solitary waves for the initial conditions. On the other hand, in order to show only the qualitative behavior, it is not necessary. Therefore results in this Section are given for non-filtered solitary waves. Some results with filtered waves are described in Appendix A.
In Figure 3, we show the propagation of an almost perfect right-running solitary wave of elevation. Even though all computations are performed with dimensionless variables, it is interesting to provide numerical applications for a configuration that could be realized in the laboratory [23]. Keeping \(r=0.9\) as in the figure, one could take for example \(h=10\) cm, \(h^{\prime}=11\) cm (\(H=1.1\)). The solitary wave amplitude is 1 cm, its speed \(c\approx 23.2\) cm/s, the length of the domain 51.2 m (a bit long!). The plots (b)-(e) would then correspond to snapshots at \(t=21.5\) s, \(t=68.9\) s, \(t=94.8\) s and \(t=163.7\) s.
In Figure 4, we show the head-on collision of two almost perfect solitary waves of elevation of equal amplitude moving in opposite directions. As in the one-layer case, the solution rises to an amplitude slightly larger than the sum of the amplitudes of the two incident solitary waves (see Appendix A). After the collision, two similar waves emerge and return to the form of two separated solitary waves. As a result of this collision, the amplitudes of the two resulting solitary waves are slightly smaller than the incident amplitudes and their centers are slightly retarded from the trajectories of the incoming centers (see again Appendix A).

Figure 3: An approximate solitary wave propagating to the right. This is a solution to the system of quadratic Boussinesq equations (36), with parameters \(H=1.1\), \(r=0.9\), \(L=512\), \(N=1024\), \(S=-1-rH\), \(\eta_{0}=0.1\).
In Figure 5, we show the collision of two almost perfect solitary waves of depression of unequal amplitudes moving in opposite directions. The numerical simulations exhibit a number of the same features that have been observed in the symmetric case.
Figure 4: Head-on collision of two approximate solitary waves of elevation of equal size. This is a solution to the system of quadratic Boussinesq equations (36), with parameters \\(H=1.2\\), \\(r=0.8\\), \\(L=512\\), \\(N=1024\\), \\(S=-1-rH\\), \\(\\eta_{0}^{\\ell}=\\eta_{0}^{r}=0.1\\), where the superscripts \\(\\ell\\) and \\(r\\) stand for left and right respectively.
In Figure 6, we show the co-propagation of two solitary waves of elevation of different amplitudes. A sequence of spatial profiles is shown. The larger one, which is faster, eventually passes the smaller one, which is slower. Again there is a phase shift after the interaction. The amplitude of the solution \\(\\eta(x,t)\\) never exceeds that of the larger solitary wave, nor does it dip below the amplitude of the smaller.
Figure 5: Head-on collision of two almost perfect solitary waves of depression of different sizes. This is a solution to the system of quadratic Boussinesq equations (36), with parameters \\(H=0.6\\), \\(r=0.85\\), \\(L=256\\), \\(N=1024\\), \\(S=-1-rH\\), \\(\\eta_{0}^{\\ell}=-0.04\\), \\(\\eta_{0}^{r}=-0.11\\), where the superscripts \\(\\ell\\) and \\(r\\) stand for left and right respectively. In plot (f), note that \\(-\\eta(x,t)\\) has been plotted for the sake of clarity.
## 6 Extended Boussinesq system of two equations with cubic terms
When \\(|H^{2}-r|\\) is small, one needs to go one step beyond and take into consideration the cubic terms. Again one would like to obtain a system of two equations for the variables \\(\\eta\\) and \\(W=w-rw^{\\prime}\\). We derive first a general system of two equations with cubic terms. Then we introduce a specific scaling for the case where \\(|H^{2}-r|\\) is small. A lot of terms in the system drop out because they are of higher order.
Figure 6: Co-propagation of two almost perfect solitary waves of elevation of different sizes. This is a solution to the system of quadratic Boussinesq equations (36), with parameters \\(H=1.6\\), \\(r=0.95\\), \\(L=2^{14}\\), \\(N=2^{14}\\), \\(S=-1-rH\\), \\(\\eta_{0}^{\\ell}=0.1\\), \\(\\eta_{0}^{r}=0.03\\), where the superscripts \\(\\ell\\) and \\(r\\) stand for left and right respectively.
The leading order terms lead to the same equation as before, namely \\(w=-Hw^{\\prime}\\). And again
\\[w=\\frac{H}{r+H}W+O(\\beta),\\quad w^{\\prime}=\\frac{-1}{r+H}W+O(\\beta). \\tag{41}\\]
At the next order, the first two equations of (29) give
\\[w_{x}+\\alpha(w\\eta)_{x}+\\frac{1}{2}\\beta\\left(\\theta^{2}-\\frac{1}{3}\\right)w_{ xxx}=-Hw^{\\prime}_{x}+\\alpha(w^{\\prime}\\eta)_{x}-\\frac{1}{2}\\beta H\\left( \\theta^{\\prime 2}-\\frac{1}{3}H^{2}\\right)w^{\\prime}_{xxx}.\\]
Since the speeds \\(w\\) and \\(w^{\\prime}\\) vanish as \\(x\\rightarrow\\infty\\) one has
\\[w=-Hw^{\\prime}+\\alpha(w^{\\prime}-w)\\eta-\\frac{1}{2}\\beta\\left(H\\left(\\theta^{ \\prime 2}-\\frac{1}{3}H^{2}\\right)w^{\\prime}_{xx}+\\left(\\theta^{2}-\\frac{1}{3} \\right)w_{xx}\\right).\\]
Using (41) for the terms containing \\(\\alpha\\) or \\(\\beta\\) and neglecting terms of \\(O(\\beta^{2})\\), one obtains
\\[w = -Hw^{\\prime}-\\alpha\\frac{1+H}{r+H}W\\eta+\\frac{1}{2}\\beta H\\frac{ \\left(\\theta^{\\prime 2}-\\frac{1}{3}H^{2}\\right)-\\left(\\theta^{2}-\\frac{1}{3} \\right)}{r+H}W_{xx}, \\tag{42}\\] \\[w^{\\prime} = -\\frac{w}{H}-\\alpha\\frac{1+H}{H(r+H)}W\\eta+\\frac{1}{2}\\beta\\frac {\\left(\\theta^{\\prime 2}-\\frac{1}{3}H^{2}\\right)-\\left(\\theta^{2}-\\frac{1}{3} \\right)}{r+H}W_{xx}. \\tag{43}\\]
In Appendix B, after several substitutions, one obtains the system of two equations (B.8) and (B.15). Switching back to the physical variables
\\[x^{*}=\\ell x,\\quad\\eta^{*}=A\\eta,\\quad t^{*}=\\ell t/c_{0},\\quad W^{*}=gAW/c_{0 },\\quad\\mbox{with}\\;\\;c_{0}=\\sqrt{gh},\\]
the system (B.8)-(B.15) becomes
\\[\\begin{array}{l}(r+H)\\eta^{*}_{t^{*}}+hHW^{*}_{x^{*}}+\\frac{H^{2}-r}{r+H}(W ^{*}\\eta^{*})_{x^{*}}+\\frac{1}{2}h^{3}H\\frac{H^{(\\theta^{2}-\\frac{1}{3})+r( \\theta^{\\prime 2}-\\frac{1}{3}H^{2})}}{r+H}W^{*}_{x^{*}x^{*}x^{*}}\\\\ -\\frac{1}{h}\\frac{r(1+H)^{2}}{(r+H)^{2}}(W^{*}\\eta^{*2})_{x^{*}}+\\frac{1}{2}h^ {2}rH(1+H)\\frac{(\\theta^{\\prime 2}-\\frac{1}{3}H^{2})-(\\theta^{2}-\\frac{1}{3})}{(r+H)^{2}}(W^ {*}\\eta^{*})_{x^{*}x^{*}x^{*}}\\\\ +\\frac{1}{2}h^{2}\\left(rH(1+H)\\frac{(\\theta^{\\prime 2}-\\frac{1}{3}H^{2})-( \\theta^{2}-\\frac{1}{3})}{(r+H)^{2}}+\\frac{H^{2}(\\theta^{2}-1)-r(\\theta^{\\prime 2 }-H^{2})}{r+H}\\right)(W^{*}_{x^{*}x^{*}}\\eta^{*})_{x^{*}}\\\\ -\\frac{1}{4}h^{5}\\left(\\frac{rH^{2}\\left((\\theta^{\\prime 2}-\\frac{1}{3}H^{2})-( \\theta^{2}-\\frac{1}{3})\\right)^{2}}{(r+H)^{2}}-\\frac{5}{6}\\frac{H^{2}( \\theta^{2}-\\frac{1}{5})^{2}+rH(\\theta^{\\prime 2}-\\frac{1}{5}H^{2})^{2}}{r+H} \\right)W^{*}_{x^{*}x^{*}x^{*}x^{*}x^{*}}=0,\\end{array} \\tag{44}\\]\\[g(1-r)\\eta^{*}_{x^{*}}+W^{*}_{t^{*}}+\\frac{H^{2}-r}{(r+H)^{2}}W^{*}W^ {*}_{x^{*}}+\\frac{1}{2}h^{2}\\frac{H(\\theta^{2}-1)+r(\\theta^{2}-H^{2})}{r+H}W^{*} _{x^{*}x^{*}t^{*}}\\] \\[-\\frac{1}{h}\\frac{r(1+H)^{2}}{(r+H)^{3}}(W^{*2}\\eta^{*})_{x^{*}}+ \\frac{1}{2}h^{2}rH(1+H)\\frac{(\\theta^{2}-\\frac{1}{2}H^{2})-(\\theta^{2}-\\frac{1} {3})}{(r+H)^{3}}(W^{*}W^{*}_{x^{*}x^{*}})_{x^{*}}\\] \\[+h\\frac{H(1-r)}{(\\eta^{*}W^{*}_{x^{*}t^{*}})_{x^{*}}}+\\frac{1}{2}h ^{2}\\frac{H^{2}(\\theta^{2}-1)-r(\\theta^{2}-H^{2})}{(r+H)^{2}}W^{*}W^{*}_{x^{*}x ^{*}x^{*}}\\] \\[+\\frac{1}{2}h^{2}\\frac{H^{2}(\\theta^{2}+1)-r(\\theta^{2}+H^{2})}{( r+H)^{2}}W^{*}_{x^{*}}W^{*}_{x^{*}x^{*}}-\\frac{1}{2}hrH(1+H)\\frac{(\\theta^{2}-1)-( \\theta^{2}-H^{2})}{(r+H)^{2}}(W^{*}\\eta^{*})_{x^{*}x^{*}t^{*}}\\] \\[+h^{4}\\left(\\frac{rH\\left((\\theta^{2}-1)-(\\theta^{\\prime 2}-H^{2}) \\right)\\left((\\theta^{\\prime 2}-\\frac{1}{3}H^{2})-(\\theta^{2}-\\frac{1}{3}) \\right)}{4(r+H)^{2}}+\\frac{H(\\theta^{2}-1)(5\\theta^{2}-1)+r(\\theta^{\\prime 2}-H^{2}) (5\\theta^{\\prime 2}-H^{2})}{2(r+H)}\\right)\\] \\[W^{*}_{x^{*}x^{*}x^{*}t^{*}}=0.\\]
The specific scaling for small values of \\(|H^{2}-r|\\),
\\[\\frac{x^{*}}{h}=\\frac{x}{\\beta},\\quad\\frac{t^{*}}{h/c_{0}}=\\frac{t}{\\alpha}, \\quad\\frac{\\eta^{*}}{h}=\\alpha\\eta,\\quad\\frac{W^{*}}{gh/c_{0}}=\\alpha W,\\quad H ^{2}-r=\\alpha{\\cal C},\\]
with \\(c_{0}=\\sqrt{gh}\\), \\(\\alpha\\ll 1\\), \\(\\beta\\ll 1\\), \\(\\alpha=O(\\beta)\\), will lead to a new Boussinesq system with cubic terms. A lot of terms in (44)-(45) drop out because they are of higher order. Keeping terms of order \\(\\alpha^{2}\\) and \\(\\alpha^{4}\\) and going back to physical variables, the system of two equations becomes
\\[\\eta^{*}_{t^{*}} = -h\\frac{H}{r+H}W^{*}_{x^{*}}-h^{3}\\left(\\frac{1}{2}\\frac{H^{2}S}{ (r+H)^{2}}+\\frac{1}{3}\\frac{H^{2}(1+rH)}{(r+H)^{2}}\\right)W^{*}_{x^{*}x^{*}x^{*}}\\] \\[-\\frac{H^{2}-r}{(r+H)^{2}}(W^{*}\\eta^{*})_{x^{*}}+\\frac{1}{h}\\frac {r(1+H)^{2}}{(r+H)^{3}}(W^{*}\\eta^{*2})_{x^{*}}\\] \\[W^{*}_{t^{*}} = -g(1-r)\\eta^{*}_{x^{*}}-\\frac{1}{2}h^{2}\\frac{HS}{r+H}W^{*}_{x^{ *}x^{*}t^{*}}-\\frac{H^{2}-r}{(r+H)^{2}}W^{*}W^{*}_{x^{*}}+\\frac{1}{h}\\frac{r(1 +H)^{2}}{(r+H)^{3}}(W^{*2}\\eta^{*})_{x^{*}}.\\]
This is the same system as (33) with two extra terms, the cubic terms. We will call it a system of extended Boussinesq equations (see also [11]). Linearizing (46) gives the same dispersion relation as before.
## 7 Numerical solutions of the extended Boussinesq system
In order to integrate numerically the extended Boussinesq system (46), we introduce a slightly different change of variables, where the stars still denote the physical variables and no new notation is introduced for the dimensionless variables:
\\[x=\\frac{x^{*}}{h},\\ \\ \\eta=\\frac{\\eta^{*}}{h},\\ \\ t=\\frac{c}{h}t^{*},\\ \\ W=\\frac{W^{*}}{c},\\ \\ \\mbox{with}\\ \\ c^{2}=\\frac{ghh^{\\prime}(\\rho-\\rho^{\\prime})}{\\rho^{\\prime}h+\\rho h^{ \\prime}}=\\frac{ghH(1-r)}{r+H}.\\]Using the same coefficients as in (34), we rewrite system (46) with the new variables as
\\[\\begin{split}\\eta_{t}=&-d_{1}W_{x}-d_{2}W_{xxx}-d_{4}(W \\eta)_{x}+d_{5}(W\\eta^{2})_{x}\\\\ W_{t}=&-(1/d_{1})\\eta_{x}-d_{3}W_{xxt}-d_{4}WW_{x}+d_ {5}(W^{2}\\eta)_{x}\\end{split} \\tag{47}\\]
where the new coefficient \\(d_{5}\\) is equal to
\\[d_{5}=\\frac{r(1+H)^{2}}{(r+H)^{3}}.\\]
When \\((\\theta,\\theta^{\\prime})=(0,0)\\), one recovers the system with horizontal velocities on the bottom and on the roof.
Taking the Fourier transform of the system (47) gives
\\[\\begin{split}\\hat{\\eta_{t}}=&\\,(d_{2}k^{2}-d_{1}) ik\\hat{W}-d_{4}ik\\widehat{(W\\eta)}+d_{5}ik\\widehat{(W\\eta^{2})},\\\\ (1-d_{3}k^{2})\\hat{W}_{t}=&-\\frac{1}{d_{1}}ik\\hat{ \\eta}-\\frac{d_{4}}{2}ik\\widehat{(W^{2})}+d_{5}ik\\widehat{(W^{2}\\eta)}.\\end{split}\\]
The system of differential equations is integrated numerically with the same method as in § 5.
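The only change with respect to the quadratic solver of § 5 is the pair of cubic terms; a sketch of the corresponding right-hand side (ours, in the notation of the earlier snippets, with NumPy as np) is:

```python
# Fourier-space right-hand side of the extended system (47); the two extra
# d5 terms are the cubic nonlinearities.
def rhs_extended(eta_hat, W_hat, k, d1, d2, d3, d4, d5):
    eta = np.fft.ifft(eta_hat).real
    W = np.fft.ifft(W_hat).real
    deta = ((d2*k**2 - d1) * 1j*k * W_hat
            - d4 * 1j*k * np.fft.fft(W * eta)
            + d5 * 1j*k * np.fft.fft(W * eta**2))
    dW = (-(1.0/d1) * 1j*k * eta_hat
          - 0.5*d4 * 1j*k * np.fft.fft(W**2)
          + d5 * 1j*k * np.fft.fft(W**2 * eta)) / (1.0 - d3*k**2)
    return deta, dW
```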
Again we look for approximate solitary wave solutions to (47). As before we look for solutions of the form
\\[W(x,t)=\\frac{1}{d_{1}}[\\eta(x,t)+M(x,t)],\\]
where \\(M\\) is assumed to be small compared to \\(\\eta\\) and \\(W\\). Substituting the expression for \\(W\\) into (47) and neglecting higher-order terms yields
\\[M_{x}=-\\frac{1}{4}\\frac{d_{4}}{d_{1}}(\\eta^{2})_{x}-\\frac{1}{2}\\frac{d_{2}}{d_ {1}}\\eta_{xxx}+\\frac{1}{2}d_{3}\\eta_{xxt}. \\tag{48}\\]
Substituting the expression for \\(M_{x}\\) into one of the equations of system (47) yields
\\[\\eta_{t}+\\eta_{x}+\\frac{3d_{4}}{4d_{1}}(\\eta^{2})_{x}-\\frac{d_{5}}{d_{1}}(\\eta ^{3})_{x}+\\frac{d_{2}}{2d_{1}}\\eta_{xxx}+\\frac{d_{3}}{2}\\eta_{xxt}=0. \\tag{49}\\]
We have checked that the extended KdV equation (49) is in agreement with previously derived eKdV equations such as in [14].
Let \\(V=1+c_{1}\\) be the wave speed, with \\(c_{1}\\) small. In the moving frame of reference \\(X=x-(1+c_{1})t,T=t\\), equation (49) becomes
\\[-c_{1}\\eta_{X}+\\eta_{T}+\\frac{3d_{4}}{4d_{1}}(\\eta^{2})_{X}-\\frac{d_{5}}{d_{1}} (\\eta^{3})_{X}+\\frac{d_{2}}{2d_{1}}\\eta_{XXX}+\\frac{d_{3}}{2}\\left[-(1+c_{1}) \\eta_{XXX}+\\eta_{XXT}\\right]=0.\\]
Looking for stationary solutions and integrating with respect to \\(X\\) yields
\\[-c_{1}\\eta+\\frac{3d_{4}}{4d_{1}}\\eta^{2}-\\frac{d_{5}}{d_{1}}\\eta^{3}+\\frac{1}{ 2}\\left(\\frac{d_{2}}{d_{1}}-d_{3}-c_{1}d_{3}\\right)\\eta_{XX}=0. \\tag{50}\\]
Letting
\\[\\alpha_{1}=\\frac{3}{2}\\frac{H^{2}-r}{H(r+H)},\\ \\ \\beta_{1}=3\\frac{r(1+H)^{2}}{H (r+H)^{2}},\\ \\ \\lambda_{1}=\\frac{1}{6}\\frac{H(rH+1)}{r+H}-\\frac{1}{4}\\frac{HS}{r+H}c_{1},\\]
equation (50) becomes
\\[-c_{1}\\eta+\\frac{1}{2}\\alpha_{1}\\eta^{2}-\\frac{1}{3}\\beta_{1}\\eta^{3}+\\lambda _{1}\\eta_{XX}=0.\\]
It has solitary wave solutions
\\[\\eta(X)=\\left(\\frac{\\alpha_{1}}{\\beta_{1}}\\right)\\frac{1-\\epsilon^{2}}{1+ \\epsilon\\cosh(\\sqrt{\\frac{c_{1}}{\\lambda_{1}}}X)},\\ \\ \\ \\mbox{with}\\ \\ \\epsilon=\\frac{\\sqrt{\\alpha_{1}^{2}-6\\beta_{1}c_{1}}}{|\\alpha_{1}|}.\\]
In the fixed frame of reference, the profile of the solitary waves is given by

\\[\\eta(x,t)=\\left(\\frac{\\alpha_{1}}{\\beta_{1}}\\right)\\frac{1-\\epsilon^{2}}{1+\\epsilon\\cosh\\left(\\sqrt{\\frac{V-1}{\\lambda_{1}}}(x-Vt)\\right)}. \\tag{51}\\]

Figure 7: ‘Table-top’ solitary waves which are approximate solutions of the extended Boussinesq system (47). The horizontal velocities are taken on the top and the bottom so that \\(S=-(1+rH)\\). (a) \\(H=1.8\\), \\(r=0.8\\). The wave speeds \\(V\\) are, going from the smallest to the widest solitary wave, \\(V_{\\max}-V\\sim 10^{-3},10^{-9},10^{-15}\\); (b) \\(H=0.4\\), \\(r=0.9\\). The wave speeds \\(V\\) are, going from the smallest to the widest solitary wave, \\(V_{\\max}-V\\sim 10^{-3},10^{-9},10^{-14}\\).
When \\(H^{2}>r\\) the solitary waves are of elevation. When \\(H^{2}<r\\) they are of depression. The parameter \\(\\epsilon\\) can take values ranging from \\(0\\) (infinitely wide solution) to \\(1\\) (solution of infinitesimal amplitude). Assuming \\(M(\\pm\\infty)=0\\), one can compute \\(M\\) explicitly by integrating equation (48) with respect to \\(x\\):
\\[M=-\\frac{d_{4}}{4d_{1}}\\eta^{2}-\\frac{d_{2}}{2d_{1}}\\eta_{xx}+\\frac{d_{3}}{2} \\eta_{xt}.\\]
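The approximate solution is straightforward to evaluate numerically. The sketch below implements (51) using the coefficients \\(\\alpha_{1}\\), \\(\\beta_{1}\\), \\(\\lambda_{1}\\) defined above and also checks that the limiting speed \\(c_{1}\\rightarrow\\alpha_{1}^{2}/6\\beta_{1}\\) reproduces the expression for \\(V_{\\max}\\) given further below; the parameter values are those of Figure 7(a).

```python
import numpy as np

def tabletop_profile(x, t, H, r, S, V):
    """Approximate 'table-top' solitary wave of Eq. (51)."""
    c1 = V - 1.0
    alpha1 = 1.5 * (H**2 - r) / (H * (r + H))
    beta1 = 3.0 * r * (1.0 + H)**2 / (H * (r + H)**2)
    lam1 = H * (r * H + 1.0) / (6.0 * (r + H)) - H * S * c1 / (4.0 * (r + H))
    eps = np.sqrt(alpha1**2 - 6.0 * beta1 * c1) / abs(alpha1)
    X = x - V * t
    return (alpha1 / beta1) * (1.0 - eps**2) / (1.0 + eps * np.cosh(np.sqrt(c1 / lam1) * X))

H, r = 1.8, 0.8                      # parameters of Figure 7(a)
S = -(1.0 + r * H)                   # velocities taken on the top and bottom
alpha1 = 1.5 * (H**2 - r) / (H * (r + H))
beta1 = 3.0 * r * (1.0 + H)**2 / (H * (r + H)**2)

# The limiting speed c1 -> alpha1^2/(6 beta1) must agree with V_max.
Vmax = 1.0 + (H**2 - r)**2 / (8.0 * r * H * (1.0 + H)**2)
assert abs(1.0 + alpha1**2 / (6.0 * beta1) - Vmax) < 1e-12

x = np.linspace(-400.0, 400.0, 2001)
eta = tabletop_profile(x, 0.0, H, r, S, Vmax - 1e-9)   # a very wide wave
```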
Typical approximate solitary wave solutions are shown in Figure 7. Notice that the condition that \\(|H^{2}-r|\\) be small is not really satisfied for the selected values of \\(H\\) and \\(r\\). The reason is that otherwise the waves would have been too small to be clearly visible. Of course we still have the conditions on \\(S\\) for well-posedness:
\\[-(1+rH)\\leq S\\leq-\\frac{2}{3}(1+rH).\\]
The solitary waves are characterized by wave velocities larger than \\(1\\) (\\(c_{1}>0\\)). The maximum wave velocity \\(V_{\\rm max}\\) is obtained when \\(\\epsilon\\to 0\\). One finds \\(c_{1}\\rightarrow\\alpha_{1}^{2}/6\\beta_{1}\\), so that
\\[V_{\\rm max}=1+\\frac{(H^{2}-r)^{2}}{8rH(1+H)^{2}}.\\]
Once the approximate solitary wave (51) has been obtained, it is again possible to make it cleaner by iterative filtering. Qualitative results for non-filtered solitary waves are given in this Section. Some accurate results for run-ups and phase shifts with filtered waves are described in Appendix A.
In Figure 8, we show the head-on collision of two almost perfect 'table-top' solitary waves of elevation of equal amplitude moving in opposite directions. As in the case with only quadratic nonlinearities, the solution rises to an amplitude larger than the sum of the amplitudes of the two incident solitary waves. After the collision, two similar waves emerge and return to the form of two separated 'table-top' solitary waves. As a result of this collision, the amplitudes of the two resulting solitary waves are slightly smaller than the incident amplitudes and their centers are slightly retarded from the trajectories of the incoming centers.
In Figure 9, we show the collision of two almost perfect solitary waves of depression of equal amplitude moving in opposite directions. The numerical simulations exhibit the same features that have been observed in the elevation case.
In Figure 10, we show the collision of an almost perfect 'table-top' solitary wave of elevation with a solitary wave of elevation moving in the opposite direction. The numerical simulations exhibit a number of the same features that have been observed in the symmetric case. The phase lag is asymmetric, with the smaller solitary wave being delayed more significantly than the larger.
Note that in the quadratic as well as in the cubic cases, it is not possible to consider the collision between a solitary wave of depression and a solitary wave of elevation. Indeed the sign of \\(H^{2}-r\\) determines whether the wave is of elevation or of depression.
Figure 8: Head-on collision of two approximate ‘table-top’ elevation solitary waves of equal size. This is a solution to the system of cubic Boussinesq equations (47), with parameters \\(H=0.95\\), \\(r=0.8\\), \\(L=4096\\), \\(N=1024\\), \\(S=-1-rH,V_{\\max}-V\\sim 10^{-17}\\).
## 8 Conclusion
In this paper, we derived a system of extended Boussinesq equations in order to describe weakly nonlinear waves at the interface between two heavy fluids in a 'rigid-lid' configuration. To our knowledge we have described for the first time the collision between 'table-top' solitary waves. The extension to a 'free-surface' configuration and to arbitrary wave amplitude is left to future studies. Indeed, since the waves we considered are only weakly nonlinear, we do not have to worry about the resulting wave reaching the roof or the bottom.
However, in a fully nonlinear regime, this could happen. Indeed the maximum amplitude \\(A\\) for 'table-top' solitary waves is given by

\\[\\frac{A}{h}=\\frac{H-\\sqrt{r}}{1+\\sqrt{r}}.\\]

Take the case where \\(H^{2}>r\\). It is easy to see that while \\(A/h\\) is always smaller than \\(H\\), \\(2A/h\\) can exceed \\(H\\), so that the resulting wave will hit the roof. Therefore it will be interesting to consider the collision of solitary waves of arbitrary amplitudes by using the full Euler equations. On the other hand, for 'table-top' solitary waves of depression, the resulting wave cannot touch the bottom.

Figure 9: Head-on collision of two approximate ‘table-top’ depression solitary waves of equal size. This is a solution to the system of cubic Boussinesq equations (47), with parameters \\(H=0.9\\), \\(r=0.85\\), \\(L=4096\\), \\(N=1024\\), \\(S=-1-rH\\), \\(V_{\\max}-V\\sim 10^{-14}\\). In plot (f), note that \\(-\\eta(x,t)\\) has been plotted for the sake of clarity.

Figure 10: Head-on collision of a solitary wave of elevation and of a ‘table-top’ solitary wave of elevation. This is a solution to the system of cubic Boussinesq equations (47), with parameters \\(H=0.95\\), \\(r=0.8\\), \\(L=2048\\), \\(N=1024\\), \\(S=-1-rH\\), \\(V_{\\max}-V^{\\ell}\\sim 10^{-4}\\), \\(V_{\\max}-V^{r}\\sim 10^{-11}\\).
## Appendix A Additional results on run-ups and phase shifts
In this appendix, we provide accurate results on run-ups and phase shifts. The terminology 'run-up' denotes the fact that during the collision of two counterpropagating solitary waves the wave amplitude increases beyond the sum of the two single wave amplitudes. Since run-ups and phase shifts are always very small, they must be computed with high accuracy. This is why it is important to clean the solitary waves obtained by the approximate expressions (40) or (51). We proceed as follows. We begin with an approximate solution, let it propagate across the domain, truncate the leading pulse, use it as new initial value by translating it to the left of the domain, let it propagate again and distance itself from the trailing dispersive tail, truncate again, and repeat the whole process over and over until a clean, at least to the eye, solitary wave is produced. Then we use this new filtered solution as initial guess to study the various collisions.
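Schematically, the iterative filtering can be organized as in the following sketch. The propagation step is assumed to be supplied by the pseudospectral integrator of § 7; the function name, the window half-width, and the sharp cutoff used to truncate the trailing tail are illustrative choices, not the exact procedure used to produce the figures.

```python
import numpy as np

def clean(eta, W, x, propagate, n_iter=400, half_width=200.0):
    """Iteratively filter an approximate solitary wave.

    `propagate(eta, W)` is assumed to advance the fields once across
    the periodic domain and return the updated pair (eta, W).
    """
    L = x[-1] - x[0] + (x[1] - x[0])       # period of the domain
    n = len(x)
    for _ in range(n_iter):
        eta, W = propagate(eta, W)         # let the pulse shed a dispersive tail
        i_crest = np.argmax(np.abs(eta))   # locate the leading pulse
        dist = np.abs((x - x[i_crest] + 0.5 * L) % L - 0.5 * L)
        window = (dist < half_width).astype(float)
        eta, W = eta * window, W * window  # truncate everything but the pulse
        shift = n // 8 - i_crest           # translate the pulse to the left
        eta, W = np.roll(eta, shift), np.roll(W, shift)
    return eta, W
```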
For solitary wave solutions to the system of equations with quadratic nonlinearities (36), the behavior is the same as the behavior shown for example in [10]. In particular we obtain pictures that look very similar to their Figure 2 for the phase shift resulting from the head-on collision of two solitary waves of equal height, to their Figure 4 for the time evolution of the maximum amplitude of the solution (it rises sharply to more than twice the elevation of the incident solitary waves, then descends to below this level after crest detachment, and finally relaxes back to almost its initial level) and to their Figure 12 for the asymmetric head-on collision of two solitary waves of different heights.
Since the main contribution of the present paper is the inclusion of cubic terms in addition to the quadratic terms, we focus on results for the extended Boussinesq system (46). Figure A.1 shows the effect of cleaning. In Figure A.2, the collision between two clean 'table-top' solitary waves (the cleaning has been applied 400 times) is shown. Their speed is \\(V=1.00183358\\). The amplitude before cleaning was \\(\\eta_{\\rm max}=0.063476\\). After iterative cleaning, it reached \\(\\eta_{\\rm max}=0.06812113\\). The run-up during collision is extremely small: the maximum of the solution barely exceeds twice the amplitude of a single incident wave. In Figure A.3, the collision between a clean 'table-top' solitary wave and a smaller clean solitary wave is shown. The run-up is again very small, as in the previous case: indeed \\(\\eta_{\\rm max}=0.10556057\\) at collision, which is slightly larger than \\(0.06812113+0.03719492=0.10531605\\). The phase shift is very small and the crest trajectory shows an interesting path. The overall conclusion is that run-ups and phase shifts are smaller for 'table-top' solitary waves than for 'classical' solitary waves.
Figure A.1: Flat solitary wave produced by iterative cleaning. This is a solution to the system of extended Boussinesq equations (47). (a) Difference in the profile before (solid line) and after (dashed line) cleaning. (b) Profile of the approximate solitary wave (51) after one propagation across the domain. (c) Profile (b) after cleaning and translation to the left of the domain. (d) Profile after several cleanings. Notice the change of scale in the vertical axis. (e) Evolution of the maximum amplitude \\(\\eta_{\\rm max}\\) as cleaning is repeated over and over. The amplitude reaches an asymptotic level.
## Appendix B Intermediate steps in the derivation of the extended Boussinesq system with cubic terms
Adding \\(H\\) times equation (26) to \\(r\\) times equation (27) yields
\\[(r+H)\\eta_{t}+H(w_{x}-rw_{x}^{\\prime})+\\alpha[(Hw+rw^{\\prime})\\eta] _{x}\\] \\[+\\frac{H}{2}\\beta\\left[(\\theta^{2}-\\frac{1}{3})w_{xxx}-r(\\theta^{ \\prime 2}-\\frac{1}{3}H^{2})w_{xxx}^{\\prime}\\right]\\] \\[+\\frac{1}{2}\\alpha\\beta\\left[H(\\theta^{2}-1)(\\eta w_{xx})_{x}+r( \\theta^{\\prime 2}-H^{2})(\\eta w_{xx}^{\\prime})_{x}\\right]\\] \\[+\\frac{5H}{24}\\beta^{2}\\left[\\left(\\theta^{2}-\\frac{1}{5}\\right)^ {2}w_{xxxxx}-r\\left(\\theta^{\\prime 2}-\\frac{1}{5}H^{2}\\right)^{2}w_{xxxxx}^{ \\prime}\\right] = 0.\\] (B.1)
Let us replace the variables \\(w\\) and \\(w^{\\prime}\\) in (B.1) by their expressions (42)-(43) in terms of \\(W\\) and let
\\[F=\\frac{1+H}{r+H},\\quad G=H\\frac{(\\theta^{\\prime 2}-\\frac{1}{3}H^{2})-(\\theta^{ 2}-\\frac{1}{3})}{r+H}.\\]
Figure A.2: A collision between two clean ‘table-top’ solitary waves of equal height. This is a solution to the system of extended Boussinesq equations (47). (a) Initial profiles. (b) Time evolution of the amplitude \\(\\eta_{\\max}\\). (c) Crest trajectory.

We consider all the terms one by one:

1. _Term_ \\(H(w_{x}-rw_{x}^{\\prime})\\) \\[H(w_{x}-rw_{x}^{\\prime})=HW_{x}\\] (B.2)
2. _Term_ \\(\\alpha[(Hw+rw^{\\prime})\\eta]_{x}\\) From equation (42), we have \\[w-rw^{\\prime}=-(r+H)w^{\\prime}-\\alpha FW\\eta+\\frac{1}{2}\\beta GW_{xx},\\] so that \\[w^{\\prime}=-\\frac{1}{r+H}W-\\alpha\\frac{1}{r+H}FW\\eta+\\frac{1}{2}\\beta\\frac{ 1}{r+H}GW_{xx}.\\] (B.3) Similarly from equation (43), we obtain \\[w=\\frac{H}{r+H}W-\\alpha\\frac{r}{r+H}FW\\eta+\\frac{1}{2}\\beta\\frac{r}{r+H}GW_{ xx}.\\] (B.4) Combining (B.3) and (B.4) yields \\[Hw+rw^{\\prime}=\\frac{H^{2}-r}{r+H}W-\\alpha\\frac{r(1+H)}{r+H}FW\\eta+\\frac{1}{2} \\beta\\frac{r(1+H)}{r+H}GW_{xx}.\\]
Therefore

\\[\\alpha\\Big{[}(Hw+rw^{\\prime})\\eta\\Big{]}_{x} = \\alpha\\frac{H^{2}-r}{r+H}(W\\eta)_{x}-\\alpha^{2}\\frac{r(1+H)^{2}}{(r+H)^{2}}(W\\eta^{2})_{x}\\] \\[+\\frac{1}{2}\\alpha\\beta\\frac{rH(1+H)\\left((\\theta^{\\prime 2}-\\frac{1}{3}H^{2})-(\\theta^{2}-\\frac{1}{3})\\right)}{(r+H)^{2}}(W_{xx}\\eta)_{x}.\\]

Figure A.3: A collision between a clean solitary wave and a clean ‘table-top’ solitary wave. This is a solution to the system of extended Boussinesq equations (47). (a) Initial profiles. (b) Time evolution of the amplitude \\(\\eta_{\\max}\\). (c) Crest trajectory.
3. _Term in \\(w_{xxx}\\) and \\(w^{\\prime}_{xxx}\\)_ Combining (B.3) and (B.4) yields \\[\\frac{H}{2}\\beta\\left[(\\theta^{2}-\\frac{1}{3})w_{xxx}-r(\\theta^{\\prime 2}- \\frac{1}{3}H^{2})w^{\\prime}_{xxx}\\right]\\] \\[= \\frac{1}{2}\\beta H\\frac{H(\\theta^{2}-\\frac{1}{3})+r(\\theta^{ \\prime 2}-\\frac{1}{3}H^{2})}{r+H}W_{xxx}\\] \\[+\\frac{1}{2}\\alpha\\beta rH(1+H)\\frac{\\left(\\theta^{\\prime 2}- \\frac{1}{3}H^{2}\\right)-\\left(\\theta^{2}-\\frac{1}{3}\\right)}{(r+H)^{2}}(W \\eta)_{xxx}\\] \\[-\\frac{1}{4}\\beta^{2}rH^{2}\\frac{\\left((\\theta^{\\prime 2}- \\frac{1}{3}H^{2})-(\\theta^{2}-\\frac{1}{3})\\right)^{2}}{(r+H)^{2}}W_{xxxxx}.\\] (B.5)
4. _Term in \\((\\eta w_{xx})_{x}\\) and \\((\\eta w^{\\prime}_{xx})_{x}\\)_ Using (41) yields \\[\\frac{1}{2}\\alpha\\beta\\left[H(\\theta^{2}-1)(\\eta w_{xx})_{x}+r( \\theta^{\\prime 2}-H^{2})(\\eta w^{\\prime}_{xx})_{x}\\right]\\] \\[=\\frac{1}{2}\\alpha\\beta\\frac{H^{2}(\\theta^{2}-1)-r(\\theta^{\\prime 2 }-H^{2})}{r+H}(\\eta W_{xx})_{x}\\] (B.6)
5. _Term in \\(w_{xxxxx}\\) and \\(w^{\\prime}_{xxxxx}\\)_ Using (41) yields \\[\\frac{5}{24}H\\beta^{2}\\left[\\left(\\theta^{2}-\\frac{1}{5}\\right)^{2}w_{xxxxx}-r\\left(\\theta^{\\prime 2}-\\frac{1}{5}H^{2}\\right)^{2}w^{\\prime}_{xxxxx}\\right]\\] \\[= \\frac{5}{24}H\\beta^{2}\\frac{H(\\theta^{2}-\\frac{1}{5})^{2}+r(\\theta^{\\prime 2}-\\frac{1}{5}H^{2})^{2}}{r+H}W_{xxxxx}\\] (B.7)

Combining all terms (B.2)-(B.7) yields the first equation of the extended Boussinesq system
\\[\\framebox{$(r+H)\\eta_{t}+HW_{x}+\\alpha\\frac{H^{2}-r}{r+H}(W\\eta)_{x}$}\\] \\[+\\frac{1}{2}\\beta\\frac{H\\left(H(\\theta^{2}-\\frac{1}{3})+r(\\theta^{ \\prime 2}-\\frac{1}{3}H^{2})\\right)}{r+H}W_{xxx}-\\alpha^{2}\\frac{r(1+H)^{2}}{( r+H)^{2}}(W\\eta^{2})_{x}\\] \\[+\\frac{1}{2}\\alpha\\beta\\frac{rH(1+H)}{(r+H)^{2}}\\left((\\theta^{ \\prime 2}-\\frac{1}{3}H^{2})-(\\theta^{2}-\\frac{1}{3})\\right)(W_{xx}\\eta)_{x}\\] \\[+\\frac{1}{2}\\alpha\\beta\\frac{H^{2}(\\theta^{2}-1)-r(\\theta^{ \\prime 2}-H^{2})}{r+H}(W_{xx}\\eta)_{x}\\] (B.8) \\[+\\frac{1}{2}\\alpha\\beta rH(1+H)\\frac{(\\theta^{\\prime 2}-\\frac{1}{3}H^{ 2})-(\\theta^{2}-\\frac{1}{3})}{(r+H)^{2}}(W\\eta)_{xxx}\\] \\[-\\frac{1}{4}\\beta^{2}\\frac{rH^{2}\\Big{(}(\\theta^{\\prime 2}-\\frac{1}{ 3}H^{2})-(\\theta^{2}-\\frac{1}{3})\\Big{)}^{2}}{(r+H)^{2}}W_{xxxxx}\\] \\[+\\frac{5}{24}H\\beta^{2}\\frac{H(\\theta^{2}-\\frac{1}{5})^{2}+r( \\theta^{\\prime 2}-\\frac{1}{5}H^{2})^{2}}{r+H}W_{xxxxx}=0\\]
We proceed the same way for equation (28).
(1) _Term in \\(w_{xxt}\\) and \\(w^{\\prime}_{xxt}\\)_
\\[\\frac{1}{2}\\beta\\left[(\\theta^{2}-1)w-r(\\theta^{\\prime 2}-H^{2})w^{\\prime}\\right]_{xxt}= \\frac{1}{2}\\beta\\frac{H(\\theta^{2}-1)+r(\\theta^{\\prime 2}-H^{2})}{r+H}W_{xxt}\\] \\[-\\frac{1}{2}\\alpha\\beta\\frac{r(1+H)\\Big{(}(\\theta^{2}-1)-(\\theta^{\\prime 2}-H^{2})\\Big{)}}{(r+H)^{2}}(W\\eta)_{xxt}\\] \\[+\\frac{1}{4}\\beta^{2}\\frac{rH\\Big{(}(\\theta^{2}-1)-(\\theta^{\\prime 2}-H^{2})\\Big{)}\\Big{(}(\\theta^{\\prime 2}-\\frac{1}{3}H^{2})-(\\theta^{2}-\\frac{1}{3})\\Big{)}}{(r+H)^{2}}W_{xxxxt}\\] (B.9)
(2) _Term \\(\\alpha(ww_{x}-rw^{\\prime}w^{\\prime}_{x})\\)_
\\[\\alpha(ww_{x}-rw^{\\prime}w^{\\prime}_{x})=\\alpha\\frac{H^{2}-r}{(r+H)^{2}}WW_{x}-\\alpha^{2}\\frac{r(H+1)^{2}}{(r+H)^{3}}(W^{2}\\eta)_{x}\\] \\[+\\frac{1}{2}\\alpha\\beta\\frac{rH(H+1)\\Big{(}(\\theta^{\\prime 2}-\\frac{1}{3}H^{2})-(\\theta^{2}-\\frac{1}{3})\\Big{)}}{(r+H)^{3}}(WW_{xx})_{x}\\] (B.10)
(3) _Term_
\\[\\alpha\\beta\\Big{[}(\\eta w_{xt})_{x}+rH(\\eta w^{\\prime}_{xt})_{x}\\Big{]}= \\alpha\\beta\\frac{H(1-r)}{r+H}(\\eta W_{xt})_{x}\\] (B.11)
(4) _Term_
\\[\\frac{1}{2}\\alpha\\beta\\Big{[}(\\theta^{2}-1)ww_{xxx}-r(\\theta^{\\prime 2}-H^{2})w^{\\prime}w^{\\prime}_{xxx}\\Big{]} = \\frac{1}{2}\\alpha\\beta\\frac{H^{2}(\\theta^{2}-1)-r(\\theta^{\\prime 2}-H^{2})}{(r+H)^{2}}WW_{xxx}\\] (B.12)
(5) _Term_
\\[\\frac{1}{2}\\alpha\\beta\\Big{[}(\\theta^{2}+1)w_{x}w_{xx}-r(\\theta^{\\prime 2}+H^{2})w^{\\prime}_{x}w^{\\prime}_{xx}\\Big{]} = \\frac{1}{2}\\alpha\\beta\\frac{H^{2}(\\theta^{2}+1)-r(\\theta^{\\prime 2}+H^{2})}{(r+H)^{2}}W_{x}W_{xx}\\] (B.13)
(6) _Term_
\\[\\frac{1}{2}\\beta^{2}\\Big{(}(\\theta^{2}-1)(5\\theta^{2}-1)w_{xxxxt}-r(\\theta^{\\prime 2}-H^{2})(5\\theta^{\\prime 2}-H^{2})w^{\\prime}_{xxxxt}\\Big{)}\\] \\[=\\frac{1}{2}\\beta^{2}\\frac{H(\\theta^{2}-1)(5\\theta^{2}-1)+r(\\theta^{\\prime 2}-H^{2})(5\\theta^{\\prime 2}-H^{2})}{r+H}W_{xxxxt}\\] (B.14)

Combining all terms (B.9)-(B.14) yields
\\[\\begin{array}{|l|}\\hline(1-r)\\eta_{x}+W_{t}+\\alpha\\frac{H^{2}-r}{(r+H)^{2}}WW_{x }\\\\ \\\\ +\\frac{1}{2}\\beta\\frac{H(\\theta^{2}-1)+r(\\theta^{\\prime 2}-H^{2})}{r+H}W_{xxt}- \\alpha^{2}\\frac{r(1+H)^{2}}{(r+H)^{3}}(W^{2}\\eta)_{x}\\\\ \\\\ +\\frac{1}{2}\\alpha\\beta\\frac{rH(H+1)\\Big{(}(\\theta^{\\prime 2}-\\frac{1}{3}H^{2}) -(\\theta^{2}-\\frac{1}{3})\\Big{)}}{(r+H)^{3}}(WW_{xx})_{x}\\\\ \\\\ +\\alpha\\beta\\frac{H(1-r)}{r+H}(\\eta W_{xt})_{x}+\\frac{1}{2}\\alpha\\beta\\frac{H^ {2}(\\theta^{2}-1)-r(\\theta^{\\prime 2}-H^{2})}{(r+H)^{2}}WW_{xxx}\\\\ \\\\ +\\frac{1}{2}\\alpha\\beta\\frac{H^{2}(\\theta^{2}+1)-r(\\theta^{\\prime 2}+H^{2})}{ (r+H)^{2}}W_{x}W_{xx}\\\\ \\\\ -\\frac{1}{2}\\alpha\\beta\\frac{r(1+H)\\Big{(}(\\theta^{2}-1)-(\\theta^{\\prime 2}-H^{ 2})\\Big{)}}{(r+H)^{2}}(W\\eta)_{xxt}\\\\ \\\\ +\\frac{1}{4}\\beta^{2}\\frac{rH\\Big{(}(\\theta^{2}-1)-(\\theta^{\\prime 2}-H^{ 2})\\Big{)}\\Big{(}(\\theta^{\\prime 2}-\\frac{1}{3}H^{2})-(\\theta^{2}-\\frac{1}{3}) \\Big{)}}{(r+H)^{2}}W_{xxxt}\\\\ \\\\ +\\frac{1}{2}\\beta^{2}\\frac{H(\\theta^{2}-1)(5\\theta^{2}-1)+r(\\theta^{\\prime 2 }-H^{2})(5\\theta^{\\prime 2}-H^{2})}{r+H}W_{xxxt}=0\\\\ \\hline\\end{array}\\]
## References
* [1] D.S. Agafontsev, F. Dias, E.A. Kuznetsov, Deep-water internal solitary waves near critical density ratio, _Physica D_**225** (2007) 153-168.
* [2] R. Barros, S.L. Gavrilyuk, V.M. Teshukov, Dispersive nonlinear waves in two-layer flows with free surface. I. Model derivation and general properties, _Studies in Applied Mathematics_ (2007), in press.
* [3] T.B. Benjamin, T.J. Bridges, Reappraisal of the Kelvin-Helmholtz problem. I. Hamiltonian structure, _J. Fluid Mech._**333** (1997) 301-325.
* [4] J.L. Bona, M. Chen, A Boussinesq system for two-way propagation of nonlinear dispersive waves, _Physica D_**116** (1998) 417-430.
* [5] J.L. Bona, M. Chen, J.-C. Saut, Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media. I: Derivation and linear theory, _J. Nonlinear Sci._**12** (2002) 283-318.
* [6] J.L. Bona, V.A. Dougalis, D.E. Mitsotakis, Numerical solution of KdV-KdV systems of Boussinesq equations. I. The numerical scheme and generalized solitary waves, _Mathematics and Computers in Simulation_**74** (2007) 214-228.
* [7] J.L. Bona, W.G. Pritchard, L.R. Scott, An evaluation of a model equation for water waves, _Phil. Trans. R. Soc. Lond. A_**302** (1981) 457-510.
* [8] T.J. Bridges, N.M. Donaldson, Reappraisal of criticality for two-layer flows and its role in the generation of internal solitary waves, _Phys. Fluids_ (2007), to appear.
* [9] W. Choi, R. Camassa, Fully nonlinear internal waves in a two-fluid system, _J. Fluid Mech._**396** (1999) 1-36.
* [10] W. Craig, P. Guyenne, J. Hammack, D. Henderson, C. Sulem, Solitary water wave interactions, _Phys. Fluids_**18** (2006) 057106.
* [11] W. Craig, P. Guyenne, H. Kalisch, Hamiltonian long wave expansions for free surfaces and interfaces, _Comm. Pure Appl. Math._**58** (2005) 1587-1641.
* [12] F. Dias, T. Bridges, Geometric aspects of spatially periodic interfacial waves, _Stud. Appl. Math._**93** (1994) 93-132.
* [13] F. Dias, J.-M. Vanden-Broeck, On internal fronts, _J. Fluid Mech._**479** (2003) 145-154.
* [14] F. Dias, J.-M. Vanden-Broeck, Two-layer hydraulic falls over an obstacle, _Europ. J. Mech. B/Fluids_**23** (2004) 879-898.
* [15] V.A. Dougalis, D.E. Mitsotakis, Solitary waves of the Bona-Smith system, Advances in scattering theory and biomedical engineering, ed. by D. Fotiadis and C. Massalas, World Scientific, New Jersey, (2004), pp. 286-294.
* [16] W.A.B. Evans, M.J. Ford, An integral equation approach to internal (2-layer) solitary waves, _Phys. Fluids_**8** (1996) 2032-2047.
* [17] C. Fochesato, F. Dias, R. Grimshaw, Generalized solitary waves and fronts in coupled Korteweg-de Vries systems, _Physica D_**210** (2005) 96-117.
* [18] M. Funakoshi, M. Oikawa, Long internal waves of large amplitude in a two-layer fluid, _J. Phys. Soc. Japan_**55** (1986) 128-144.
* [19] R. Grimshaw, D. Pelinovsky, E. Pelinovsky, A. Slunyaev, Generation of large-amplitude solitons in the extended Korteweg-de Vries equation, _Chaos_**12** (2002) 1070-1076.
* [20] K.R. Helfrich, W.K. Melville, Long nonlinear internal waves, _Annu. Rev. Fluid Mech._**38** (2006) 395-425.
* [21] T. Kataoka, The stability of finite-amplitude interfacial solitary waves, _Fluid Dynamics Research_**38** (2006) 831-867.
* [22] O. Laget, F. Dias, Numerical computation of capillary-gravity interfacial solitary waves, _J. Fluid Mech._**349** (1997) 221-251.
* [23] H. Michallet, E. Barthelemy, Experimental study of interfacial solitary waves, _J. Fluid Mech._**366** (1998) 159-177.
arxiv-format/0709_3307v2.md | # Unitary equivalence between ordinary intelligent states and generalized intelligent states
Hyunchul Nha
Department of Physics, Texas A & M University at Qatar, PO Box 23874, Doha, Qatar
September 20, 2007
## I Introduction
For a general quantum state, the uncertainties of two noncommuting observables \\(\\{A,\\,B\\}\\) cannot be made arbitrarily small at the same time, and the product of them, \\(\\langle(\\Delta A)^{2}\\rangle\\langle(\\Delta B)^{2}\\rangle\\), has a certain lower bound. The Heisenberg uncertainty relation (HUR) [1], which is most widely used, provides the bound as
\\[\\langle(\\Delta A)^{2}\\rangle\\langle(\\Delta B)^{2}\\rangle\\geq\\frac{1}{4}| \\langle[A,B]\\rangle|^{2}. \\tag{1}\\]
The quantum states that satisfy equality in (1) are usually referred to as minimum-uncertainty states. The lower bound, however, does not always exhibit a true local minimum, e.g., when the commutator \\([A,B]\\) is not a constant multiple of identity operator. From this perspective, the states holding equality in (1) are generally termed _ordinary intelligent states_ (OISs) [2; 3].
On the other hand, the Schrodinger-Robertson relation (SRR) generally provides a stronger bound than the HUR [4; 5] as
\\[\\langle(\\Delta A)^{2}\\rangle\\langle(\\Delta B)^{2}\\rangle\\geq\\frac{1}{4}| \\langle[A,B]\\rangle|^{2}+\\langle\\Delta A\\Delta B\\rangle_{S}^{2}, \\tag{2}\\]
where the covariance \\(\\langle\\Delta A\\Delta B\\rangle_{S}\\) is defined in a symmetric form as
\\[\\langle\\Delta A\\Delta B\\rangle_{S}\\equiv\\frac{1}{2}\\langle\\Delta A\\Delta B+ \\Delta B\\Delta A\\rangle. \\tag{3}\\]
The HUR is a special form of the SRR under the condition \\(\\langle\\Delta A\\Delta B\\rangle_{S}=0\\), which is of course not always true. The states holding equality in the SRR are termed _generalized intelligent states_ (GISs) as an analogy to OISs [6]. The OISs and GISs have been extensively studied for many decades and they have attracted a great deal of interest particularly in the context of squeezing [7; 8; 9; 10; 11; 12; 13]. More specifically, the intelligent states for the su(2) and su(1,1) algebras were proposed to employ for quantum optical interferometry to achieve the quantum-limited precision in phase measurement [14; 15; 16].
Furthermore, the su(2) and the su(1,1) algebras have recently attracted some renewed interest from the perspective of quantum information theory particularly for the treatment of continuous variables. Specifically, the entanglement criteria applicable to non-Gaussian entangled states were derived from those two algebras [17; 18; 19]. Very recently, it was also shown that the SRR, in conjunction with partial transposition, can generally provide a stronger inequality than the HUR to detect entanglement [20]. As an illustration, the entanglement condition derived from the su(2) and su(1,1) algebra was refined to a form invariant with respect to local phase shift in Ref. [20]. On an application side, the intelligent states for the su(2) and su(1,1) algebras can be potentially useful for quantum information processing because they all form the class of non-Gaussian entangled states when expressed in terms of two boson operators [19].
Quite obviously, an arbitrary OIS, which has the vanishing covariance \\(\\langle\\Delta A\\Delta B\\rangle_{S}=0\\), is also a GIS, but the converse is not always true. Thus, the proposition follows that OISs form a subset of GISs in general [12]. In the previous literature, there have been a number of attempts to separately obtain the OISs and the GISs for certain algebras, most prominently, for the su(2) and the su(1,1) algebras. In this paper, we aim at clarifying to some extent the connection between the OISs and the GISs. In particular, we consider the case in which there exists a unitary operator \\(U\\) that transforms two operators \\(\\{A,\\,B\\}\\) to another pair of operators in a _rotation_ form. In this case, it is shown that an _arbitrary_ GIS, \\(|\\Psi\\rangle\\), can be generated from a certain OIS, \\(|\\Phi\\rangle\\), by applying the _rotation_ operator \\(U\\) as \\(|\\Psi\\rangle=U|\\Phi\\rangle\\). In this sense, it can be said that the set of OISs is unitarily equivalent to the set of GISs. This is particularly the case with the su(2) and the su(1,1) algebra, whose operators can be represented in terms of boson operators. The unitary operator \\(U\\) is then realized by phase shifter, beam splitter, or parametric amplifier, depending on the pair of two operators \\(\\{A,\\,B\\}\\).
This paper is organized as follows. In Sec. II, the intelligent states are briefly introduced with their statistical properties. In Sec. III, the equivalence between the set of OISs and that of GISs is demonstrated under the condition that there exists a unitary operation that transforms the two observables \\(\\{A,\\,B\\}\\) to \\(\\{A^{\\prime},\\,B^{\\prime}\\}\\) in a form of rotation. This finding is made more concrete for the cases of the su(2) and the su(1,1) algebras in Sec. IV, and the main results are summarized in Sec. V.
## II Intelligent states
First, let us briefly introduce the intelligent states with their statistical characteristics. The SRR in Eq. (2) can be derived from the Cauchy-Schwartz inequality
\\[\\langle f|f\\rangle\\langle g|g\\rangle\\geq|\\langle f|g\\rangle|^{2}, \\tag{4}\\]
where the state vectors \\(|f\\rangle\\) and \\(|g\\rangle\\) are given by \\(|f\\rangle=\\Delta A|\\Psi\\rangle\\) and \\(|g\\rangle=\\Delta B|\\Psi\\rangle\\), respectively, for a general state \\(|\\Psi\\rangle\\)[21]. The variance operator \\(\\Delta O\\) is defined as \\(\\Delta O\\equiv O-\\langle O\\rangle\\), where \\(\\langle O\\rangle\\) is the quantum average for the state \\(|\\Psi\\rangle\\) (\\(O=A,B\\)).
Clearly, the equality holds in Eq. (4) when the two vectors are linearly dependent, i.e., \\(|f\\rangle=-i\\lambda|g\\rangle\\), where the parameter \\(\\lambda=\\lambda_{x}+i\\lambda_{y}\\) is complex in general. In other words, the GISs, \\(|\\Psi\\rangle\\), satisfy the characteristic eigenvalue equation
\\[(A+i\\lambda B)|\\Psi\\rangle=\\beta|\\Psi\\rangle, \\tag{5}\\]
where \\(\\beta=\\langle A\\rangle+i\\lambda\\langle B\\rangle\\).
From Eq. (5), the equality in the SRR follows along with the condition \\(\\langle(\\Delta A)^{2}\\rangle=|\\lambda|^{2}\\langle(\\Delta B)^{2}\\rangle\\) for a general \\(\\lambda\\). In light of the SRR [Eq. (2)], coherent states may be defined as those for which the two variances, \\(\\langle(\\Delta A)^{2}\\rangle\\) and \\(\\langle(\\Delta B)^{2}\\rangle\\), are both equal to \\(V_{c}\\equiv\\sqrt{\\frac{1}{4}|\\langle[A,B]\\rangle|^{2}+\\langle\\Delta A\\Delta B\\rangle_{S}^{2}}\\) (the case of \\(|\\lambda|=1\\)). On the other hand, squeezing may be defined as the reduction of one of the two variances below the critical value \\(V_{c}\\)[9]. In other words, if \\(|\\lambda|\\) is smaller (larger) than unity, the observable \\(A\\) (\\(B\\)) is squeezed, and the degree of squeezing is parameterized by \\(|\\lambda|\\).
**Special cases**:
(i) If the squeezing parameter \\(\\lambda\\) is real (\\(\\lambda_{y}=0\\)), the condition \\(\\langle\\Delta A\\Delta B\\rangle_{S}=0\\) follows from Eq. (5), hence the equality in Eq. (1). In other words, the OISs are obtained by solving the eigenvalue equation, Eq. (5), for real values of \\(\\lambda\\).
(ii) On the other hand, if \\(\\lambda\\) is pure imaginary (\\(\\lambda_{x}=0\\)), it follows that \\(\\langle[A,B]\\rangle=0\\), and the ordinary Heisenberg uncertainty relation only provides a trivial lower bound, zero [9].
## III Equivalence between ordinary intelligent states and generalized intelligent states
In this section, we consider the connection between the set of OISs and the set of GISs on the condition that the two operators \\(\\{A,B\\}\\) can be transformed by a certain unitary operator \\(U\\) to new operators \\(\\{A^{\\prime},B^{\\prime}\\}\\) in a form of rotation. That is,
\\[\\left(\\begin{array}{c}A^{\\prime}\\\\ B^{\\prime}\\end{array}\\right)=U\\left(\\begin{array}{c}A\\\\ B\\end{array}\\right)U^{\\dagger}=\\left(\\begin{array}{cc}\\cos\\phi&-\\sin\\phi\\\\ \\sin\\phi&\\cos\\phi\\end{array}\\right)\\left(\\begin{array}{c}A\\\\ B\\end{array}\\right). \\tag{6}\\]
(i) First, let the state \\(|\\Phi\\rangle\\) be an OIS satisfying the eigenvalue equation
\\[(A+i\\lambda B)|\\Phi\\rangle=\\beta|\\Phi\\rangle, \\tag{7}\\]
where \\(\\lambda\\) is real [22]. On applying the unitary operator \\(U\\) on both sides of Eq. (7), we obtain the eigenvalue equation as
\\[(A+i\\Lambda B)|\\Psi\\rangle=\\beta^{\\prime}|\\Psi\\rangle, \\tag{8}\\]
where \\(|\\Psi\\rangle\\) is defined as \\(|\\Psi\\rangle\\equiv U|\\Phi\\rangle\\). The new parameters \\(\\Lambda\\) and \\(\\beta^{\\prime}\\) are given by
\\[\\Lambda \\equiv \\frac{\\lambda\\cos\\phi+i\\sin\\phi}{\\cos\\phi+i\\lambda\\sin\\phi},\\] \\[\\beta^{\\prime} \\equiv \\frac{\\beta}{(\\cos\\phi+i\\lambda\\sin\\phi)}, \\tag{9}\\]
respectively.
Note that \\(\\Lambda\\) can take any arbitrary complex values in Eq. (9), and the transformed state \\(|\\Psi\\rangle=U|\\Phi\\rangle\\) is thus none other than a certain GIS. In other words, for an arbitrarily fixed value of \\(\\Lambda\\equiv\\Lambda_{x}+i\\Lambda_{y}\\), one can choose the real squeezing parameter \\(\\lambda\\) and the rotation angle \\(\\phi\\) as
\\[\\tan 2\\phi = \\frac{2\\Lambda_{y}}{1-\\Lambda_{x}^{2}-\\Lambda_{y}^{2}},\\hskip 28.452756pt \\left(-\\frac{\\pi}{4}<\\phi\\leq\\frac{\\pi}{4}\\right)\\] \\[\\lambda = \\frac{\\Lambda_{x}}{1+\\Lambda_{y}\\tan\\phi}. \\tag{10}\\]
In short, if there exists a certain unitary operator \\(U\\) that implements the rotation as in Eq. (6), an arbitrary GIS can be generated from a certain OIS by applying the unitary operator \\(U\\), as prescribed in Eq. (10).
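As a quick numerical illustration of this prescription, the following sketch maps a given complex \\(\\Lambda\\) to the pair \\((\\phi,\\lambda)\\) of Eq. (10) and verifies that Eq. (9) then reproduces \\(\\Lambda\\); the sample value of \\(\\Lambda\\) is arbitrary.

```python
import numpy as np

def ois_parameters(Lam):
    """Rotation angle phi and real squeezing parameter lambda, Eq. (10)."""
    Lx, Ly = Lam.real, Lam.imag
    # arctan2 selects a branch for 2*phi; the identity below holds for it.
    phi = 0.5 * np.arctan2(2.0 * Ly, 1.0 - Lx**2 - Ly**2)
    lam = Lx / (1.0 + Ly * np.tan(phi))
    return phi, lam

def gis_parameter(phi, lam):
    """Complex squeezing parameter of the rotated state, Eq. (9)."""
    return (lam * np.cos(phi) + 1j * np.sin(phi)) / (np.cos(phi) + 1j * lam * np.sin(phi))

Lam = 0.5 + 0.8j                # an arbitrary complex squeezing parameter
phi, lam = ois_parameters(Lam)
assert abs(gis_parameter(phi, lam) - Lam) < 1e-12
```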
(ii) The converse is of course true, and in fact, an OIS is by definition a GIS. Furthermore, an arbitrary GIS can be transformed to an OIS under the same unitary operator \\(U\\). To understand more deeply how this works, let us take a different perspective as follows. Consider a \\(2\\times 2\\) covariance matrix \\(C\\) whose elements are defined as
\\[C_{ij}\\equiv\\frac{1}{2}\\langle\\Delta O_{i}\\Delta O_{j}+\\Delta O_{j}\\Delta O_{i}\\rangle,\\hskip 14.226378pt(i,j=1,2), \\tag{11}\\]

where \\(O_{1}\\equiv A\\) and \\(O_{2}\\equiv B\\). Namely,
\\[C=\\left(\\begin{array}{cc}\\langle(\\Delta A)^{2}\\rangle&\\langle\\Delta A\\Delta B \\rangle_{S}\\\\ \\langle\\Delta A\\Delta B\\rangle_{S}&\\langle(\\Delta B)^{2}\\rangle\\end{array} \\right). \\tag{12}\\]
Then, the determinant of the matrix \\(C\\) is given by
\\[{\\rm Det}\\{C\\}=\\langle(\\Delta A)^{2}\\rangle\\langle(\\Delta B)^{2}\\rangle-\\langle \\Delta A\\Delta B\\rangle_{S}^{2}, \\tag{13}\\]
which is invariant under rotation in Eq. (6). The characteristic equation satisfied by the GISs in Eq. (2) now reads as
\\[{\\rm Det}\\{C\\}=\\frac{1}{4}|\\langle[A,B]\\rangle|^{2}. \\tag{14}\\]
Suppose that the state \\(|\\Psi\\rangle\\) satisfies Eq. (14). Then, due to the relation \\([A,B]=[A^{\\prime},B^{\\prime}]\\) and the invariance under rotation, the inversely transformed state \\(|\\Phi\\rangle=U^{\\dagger}|\\Psi\\rangle\\) must also satisfy Eq. (14). More importantly, the off-diagonal covariance for \\(|\\Phi\\rangle\\) becomes
\\[\\langle\\Delta A\\Delta B\\rangle_{S,|\\Phi\\rangle}= \\frac{1}{2}\\sin 2\\phi\\left[\\langle(\\Delta B)^{2}\\rangle-\\langle( \\Delta A)^{2}\\rangle\\right] \\tag{15}\\] \\[+\\cos 2\\phi\\langle\\Delta A\\Delta B\\rangle_{S},\\]
by the relation in Eq. (6). Note that the quantum averages on the right side of Eq. (15) refer to the ones for the state \\(|\\Psi\\rangle\\). Thus, if one chooses the rotation angle \\(\\phi\\) as
\\[\\tan 2\\phi=\\frac{2\\langle\\Delta A\\Delta B\\rangle_{S}}{\\langle(\\Delta A)^{2} \\rangle-\\langle(\\Delta B)^{2}\\rangle}, \\tag{16}\\]
the off-diagonal covariance \\(\\langle\\Delta A\\Delta B\\rangle_{S,|\\Phi\\rangle}\\) vanishes in the rotated frame. That is, the GIS, \\(|\\Psi\\rangle\\), is transformed to an OIS, \\(|\\Phi\\rangle\\), satisfying
\\[{\\rm Det}\\{C\\}=\\langle(\\Delta A)^{2}\\rangle\\langle(\\Delta B)^{2}\\rangle=\\frac {1}{4}|\\langle[A,B]\\rangle|^{2}, \\tag{17}\\]
under the rotation by the unitary operation \\(U^{\\dagger}\\).
By (i) and (ii), the set of OISs is unitarily equivalent to the set of GISs.
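The same diagonalization can be checked numerically at the level of covariance matrices. In the sketch below, the matrix entries are illustrative; the angle follows Eq. (16), the rotated matrix corresponds to Eq. (15), and the invariance of \\({\\rm Det}\\{C\\}\\) is verified as well.

```python
import numpy as np

# Illustrative covariance matrix of some state |Psi> (values are made up).
C = np.array([[1.30, 0.45],
              [0.45, 0.70]])

# Rotation angle chosen according to Eq. (16).
phi = 0.5 * np.arctan2(2.0 * C[0, 1], C[0, 0] - C[1, 1])
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

# Covariance matrix of |Phi> = U^dagger |Psi>, cf. Eq. (15).
C_rot = R.T @ C @ R
assert abs(C_rot[0, 1]) < 1e-12                  # off-diagonal term vanishes
assert abs(np.linalg.det(C_rot) - np.linalg.det(C)) < 1e-12   # Det{C} invariant
```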
## IV Su(2)- and Su(1,1)-Intelligent states
In this section, we show that the preceding argument can be generally applied to the su(2) and the su(1,1) algebras along with their intelligent states.
### su(2)-intelligent states
The su(2) algebra describes the angular momentum operators, as characterized by the commutation relations,
\\[[J_{i},J_{j}]=i\\epsilon_{ijk}J_{k},\\hskip 28.452756pt(i,j,k=1,2,3), \\tag{18}\\]
where \\(J_{i}\\)'s denote the Cartesian components of the angular momentum. For the choice of \\(A=J_{1}\\) and \\(B=J_{2}\\), the relation in (6) can be implemented by the unitary operator \\(U=e^{i\\phi J_{3}}\\). Therefore, a generalized intelligent state \\(|\\Psi\\rangle\\) can be written in a form as \\(|\\Psi\\rangle=e^{i\\phi J_{3}}|\\Phi\\rangle\\), where \\(|\\Phi\\rangle\\) is an ordinary intelligent state satisfying the eigenvalue equation
\\[(J_{1}+i\\lambda J_{2})|\\Phi\\rangle=\\beta|\\Phi\\rangle, \\tag{19}\\]
for a real \\(\\lambda\\).
As a specific example, let us consider the GISs for which the condition \\(\\langle[J_{1},J_{2}]\\rangle=i\\langle J_{3}\\rangle=0\\) holds, i.e., the case that the squeezing parameter \\(\\Lambda\\) is pure imaginary in Eq. (8). (See the last paragraph of Sec. II.) Then, with \\(\\Lambda_{x}=0\\) in Eq. (10), one has the prescription \\(\\lambda=0\\) and \\(\\tan\\phi=\\Lambda_{y}\\). In other words, we start with the ordinary intelligent state satisfying \\(J_{1}|\\Phi\\rangle=\\beta|\\Phi\\rangle\\) in Eq. (19), which is none other than the \\(J_{1}\\)-eigenstate. Since a general \\(J_{1}\\)-eigenstate can be obtained by applying the rotation \\(e^{-i\\frac{\\pi}{2}J_{2}}\\) to the \\(J_{3}\\)- eigenstates \\(|J,m\\rangle\\), a generalized intelligent state \\(|\\Psi\\rangle\\) is expressed as \\(|\\Psi\\rangle=e^{i\\phi J_{3}}e^{-i\\frac{\\pi}{2}J_{2}}|J,m\\rangle\\). This class of intelligent states was in fact studied by R. Puri, and the expression for those states given in Ref. [9] exactly coincides with that obtained here.
Note that the above argument equally applies to other pairs of observables due to the permutation symmetry in the su(2) algebra. For example, for the pair of observables \\(\\{J_{2},J_{3}\\}\\), the GISs can be obtained by applying the rotation \\(U=e^{i\\phi J_{1}}\\) to the OISs.
In the case that the angular momentum operators are represented by two boson operators \\(a\\) and \\(b\\)[23], as
\\[J_{1} = \\frac{1}{2}\\left(a^{\\dagger}b+ab^{\\dagger}\\right),\\] \\[J_{2} = \\frac{1}{2i}\\left(a^{\\dagger}b-ab^{\\dagger}\\right),\\] \\[J_{3} = \\frac{1}{2}\\left(a^{\\dagger}a-b^{\\dagger}b\\right), \\tag{20}\\]
the unitary operator \\(e^{i\\phi J_{3}}\\) simply denotes a local phase shift for the two modes \\(a\\) and \\(b\\). In fact, only one local phase shift can implement the necessary rotation by a proper choice of phase angle. On the other hand, the unitary operators \\(e^{i\\phi J_{1}}\\) and \\(e^{i\\phi J_{2}}\\) correspond to the action of the beam splitter [24].
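These statements are easy to test numerically in a truncated two-mode Fock space. The fragment below builds the Schwinger representation and checks \\([J_{1},J_{2}]=iJ_{3}\\) on the subspace where the truncation is harmless (total photon number below the cutoff); the cutoff value is an arbitrary choice.

```python
import numpy as np

N = 12                                          # Fock-space cutoff per mode
a1 = np.diag(np.sqrt(np.arange(1, N)), k=1)     # single-mode annihilation operator
I = np.eye(N)
a, b = np.kron(a1, I), np.kron(I, a1)           # two-mode operators
ad, bd = a.conj().T, b.conj().T

J1 = 0.5 * (ad @ b + a @ bd)
J2 = (ad @ b - a @ bd) / 2j
J3 = 0.5 * (ad @ a - bd @ b)

# The commutator is exact on states whose total photon number stays
# below the cutoff; project out the truncation boundary.
tot = np.add.outer(np.arange(N), np.arange(N)).ravel()
P = np.diag((tot <= N - 1).astype(float))
err = P @ (J1 @ J2 - J2 @ J1 - 1j * J3) @ P
assert np.max(np.abs(err)) < 1e-12              # [J1, J2] = i J3
```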
### su(1,1)-intelligent states
In the su(1,1) algebra, the operators \\(K_{1}\\), \\(K_{2}\\), and \\(K_{3}\\) satisfy the commutation relations \\([K_{1},K_{2}]=-iK_{3}\\), \\([K_{2},K_{3}]=iK_{1}\\), and \\([K_{3},K_{1}]=iK_{2}\\). Although the commutators in the su(1,1) algebra differ in sign from those in the su(2) algebra, the rotation of the operators \\(K_{1}\\) and \\(K_{2}\\) can be realized by the unitary operator \\(e^{i\\phi K_{3}}\\), similar to the case in the su(2) algebra. That is, the relation \\(|\\Psi\\rangle=e^{i\\phi K_{3}}|\\Phi\\rangle\\) holds between GISs and OISs.
Of course, a significant difference can arise in the su(1,1) algebra due to the lack of permutation symmetry. For instance, for a different choice of two observables\\(\\{K_{1},K_{3}\\}\\), the unitary operator \\(e^{i\\phi K_{2}}\\) does not effect rotation, but gives the transformation as
\\[\\left(\\begin{array}{c}K_{1}^{\\prime}\\\\ K_{3}^{\\prime}\\end{array}\\right)=\\left(\\begin{array}{cc}\\cosh\\phi&-\\sinh\\phi \\\\ -\\sinh\\phi&\\cosh\\phi\\end{array}\\right)\\left(\\begin{array}{c}K_{1}\\\\ K_{3}\\end{array}\\right). \\tag{21}\\]
Nonetheless, the equivalence of GISs and OISs is similarly deduced along the lines of Sec. III. More concretely, the relations in Eq. (10) now become
\\[\\tanh 2\\phi = \\frac{2\\Lambda_{y}}{1+\\Lambda_{x}^{2}+\\Lambda_{y}^{2}},\\] \\[\\lambda = \\frac{\\Lambda_{x}}{1-\\Lambda_{y}\\tanh\\phi}, \\tag{22}\\]
so that the prescriptions for \\(\\phi\\) and \\(\\lambda\\) exist for any values of \\(\\Lambda_{x}\\) and \\(\\Lambda_{y}\\) to produce an arbitrary GIS from an OIS.
Conversely, by a method similar to that used to derive Eqs. (15) and (16) in Sec. III, the off-diagonal covariance \\(\\langle\\Delta A\\Delta B\\rangle_{S,|\\Phi\\rangle}\\) can be made to vanish by choosing the transformation as
\\[\\tanh 2\\phi=\\frac{2\\langle\\Delta A\\Delta B\\rangle_{S}}{\\langle(\\Delta A)^{2} \\rangle+\\langle(\\Delta B)^{2}\\rangle}, \\tag{23}\\]
to transform a GIS to an OIS.
The su(1,1) operators can be represented by two bosonic operators as
\\[K_{1} = \\frac{1}{2}\\left(a^{\\dagger}b^{\\dagger}+ab\\right),\\] \\[K_{2} = \\frac{1}{2i}\\left(a^{\\dagger}b^{\\dagger}-ab\\right),\\] \\[K_{3} = \\frac{1}{2}\\left(a^{\\dagger}a+b^{\\dagger}b+1\\right), \\tag{24}\\]
or by a single bosonic operator as
\\[K_{1} = \\frac{1}{4}\\left(a^{\\dagger 2}+a^{2}\\right),\\] \\[K_{2} = \\frac{1}{4i}\\left(a^{\\dagger 2}-a^{2}\\right),\\] \\[K_{3} = \\frac{1}{4}\\left(2a^{\\dagger}a+1\\right). \\tag{25}\\]
The unitary operation \\(e^{i\\phi K_{3}}\\) can also be implemented by a local phase shift in both representations. On the other hand, \\(e^{i\\phi K_{1}}\\) and \\(e^{i\\phi K_{2}}\\) are realized by the nondegenerate parametric amplification for two-mode case or by the degenerate parametric amplification for single-mode case.
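As an aside, the pseudo-rotation (21) can also be verified numerically with the single-mode representation (25), by matrix-exponentiating \\(i\\phi K_{2}\\) in a truncated Fock space and comparing low-lying matrix elements, where truncation effects are negligible. The cutoff, the angle, and the size of the compared block are arbitrary choices.

```python
import numpy as np
from scipy.linalg import expm

N = 60                                         # Fock-space cutoff
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T

K1 = 0.25 * (ad @ ad + a @ a)                  # single-mode representation (25)
K2 = (ad @ ad - a @ a) / 4j
K3 = 0.25 * (2.0 * ad @ a + np.eye(N))

phi = 0.2
U = expm(1j * phi * K2)                        # degenerate parametric amplification
lhs = U @ K1 @ U.conj().T
rhs = np.cosh(phi) * K1 - np.sinh(phi) * K3    # Eq. (21)

m = 10                                         # compare the low-lying block only;
assert np.max(np.abs(lhs[:m, :m] - rhs[:m, :m])) < 1e-8   # truncation spoils the edge
```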
## V Summary
In this paper, the connection between the OISs and the GISs holding equality in the uncertainty relations has been studied and clarified to some degree. In particular, it has been shown that there exists a unitary equivalence between the set of OISs and that of GISs for two noncommuting observables \\(\\{A,\\,B\\}\\) in the case that there exists a _rotational_ unitary operator \\(U\\) for those observables in view of Eq. (6). This is particularly true for the su(2) and the su(1,1) algebras, and in the latter case, although only a pseudo-rotation is effected for a particular choice of two observables, it was shown that the unitary equivalence still holds. In the case that these algebras are represented by bosonic operators, the unitary operation corresponds to phase shift, beam splitting, or parametric amplification, depending on the choice of the observables.
## References
* (1) W. Heisenberg, Z. Phys. **43**, 122 (1927).
* (2) C. Aragone, G. Guerri, S. Salamo, and J. L. Tani, J. Phys. A **7**, L149 (1974); C. Aragone, E. Chalbaud, and S. Salamo, J. Math. Phys. **17**, 1963 (1976).
* (3) K. Wodkiewicz and J. H. Eberly, J. Opt. Soc. Am. B **2**, 458 (1985).
* (4) E. Schrodinger, Sitzungsber. Preuss. Akad. Wiss. p. 296 (Berlin, 1930).
* (5) H. P. Robertson, Phys. Rev. **46**, 794 (1934).
* (6) D. A. Trifonov, J. Math. Phys. **35**, 2297 (1994).
* (7) J. A. Bergou, M. Hillery and D. Yu, Phys. Rev. A **43**, 515 (1991); D. Yu and M. Hillery, Quantum Opt. **6**, 37 (1994).
* (8) G. S. Agarwal and R. R. Puri, Phys. Rev. A **41**, 3782 (1990).
* (9) R. R. Puri, Phys. Rev. A **49**, 2178 (1994).
* (10) C. C. Gerry and R. Grobe, Phys. Rev. A **51**, 4123 (1995).
* (11) A. Luis and J. Perina, Phys. Rev. A **53**, 1886 (1996).
* (12) C. Brif, Int. J. Theor. Phys. **36**, 1651 (1997).
* (13) R. A. Campos and C. G. Gerry, Phys. Rev. A **60**, 1572 (1999).
* (14) B. Yurke, S. L. McCall, and J. R. Klauder, Phys. Rev. A **33**, 4033 (1986).
* (15) M. Hillery and L. Mlodinow, Phys. Rev. A **48**, 1548 (1993).
* (16) C. Brif and A. Mann, Phys. Rev. A **54**, 4505 (1996).
* (17) M. Hillery and M. Zubairy, Phys. Rev. Lett. **96**, 050503 (2006); M. Hillery and M. Zubairy, Phys. Rev. A **74**, 032333 (2006).
* (18) G. S. Agarwal and A. Biswas, New J. Phys. **7**, 211 (2005).
* (19) H. Nha and J. Kim, Phys. Rev. A **74**, 012317 (2006).
* (20) H. Nha, Phys. Rev. A **76**, 014305 (2007).
* (21) V. V. Dodonov, E. V. Kurmyshev, and V. I. Man'ko, Phys. Lett. **79**A, 150 (1980); B. Nagel, eprint quant-ph/9711028.
* (22) Throughout this paper, the notation \\(|\\Psi\\rangle\\) represents a GIS, whereas \\(|\\Phi\\rangle\\) an OIS.
* (23) J. Schwinger, in _Quantum Theory of Angular Momentum_, edited by L. C. Biedenharn and H. van Dam (Academic, New York, 1965).
* (24) R. A. Campos, B. E. A. Saleh, and M. C. Teich, Phys. Rev. A **40**, 1371 (1989).
arxiv-format/0710_0191v1.md | # Physics Engineering in the Study of the Pioneer Anomaly
Slava G. Turyshev
Jet Propulsion Laboratory, California Institute of Technology\\({}^{*}\\)
Viktor T. Toth
Ottawa, ON K1N 9H5, Canada\\({}^{\\dagger}\\)
## I Introduction
The first spacecraft to leave the inner solar system [1; 2; 3], Pioneers 10 and 11 were designed to conduct an exploration of the interplanetary medium beyond the orbit of Mars and perform close-up observations of Jupiter during the 1972-73 Jovian opportunities.
The spacecraft were launched in March 1972 (Pioneer 10) and April 1973 (Pioneer 11) on top of identical three-stage Atlas-Centaur launch vehicles. After passing through the asteroid belt, Pioneer 10 reached Jupiter in December 1973. The trajectory of its sister craft, Pioneer 11, in addition to visiting Jupiter in 1974, also included an encounter with Saturn in 1979 (see [2; 4] for more details).
After the planetary encounters and successful completion of their primary missions, both Pioneers continued to explore the outer solar system. Due to their excellent health and navigational capabilities, the Pioneers were used to search for trans-Neptunian objects and to establish limits on the presence of low-frequency gravitational radiation [5].
Eventually, Pioneer 10 became the first man-made object to leave the solar system, with its official mission ending in March 1997. Since then, NASA's Deep Space Network (DSN) made occasional contact with the spacecraft. The last successful communication from Pioneer 10 was received by the DSN on 27 April 2002. Pioneer 11 sent its last coherent Doppler data in October 1990; the last scientific observations were returned by Pioneer 11 in September 1995.
The orbits of Pioneers 10 and 11 were reconstructed based primarily on radio-metric (Doppler) tracking data. The reconstruction between heliocentric distances of 20-70 AU yielded a persistent small discrepancy between observed and computed values [2; 3; 4]. After accounting for known systematic effects [2], the unmodeled change in the Doppler residual for Pioneer 10 and 11 is equivalent to an approximately sunward constant acceleration of
\\[a_{P}=(8.74\\pm 1.33)\\times 10^{-10}\\ \\mathrm{m/s^{2}}.\\]
The magnitude of this effect, measured between heliocentric distances of 40-70 AU, remains approximately constant within the 3 dB gain bandwidth of the HGA. The nature of this anomalous acceleration remains unexplained; this signal has become known as the Pioneer anomaly.
There were numerous attempts in recent years to provide an explanation for the anomalous acceleration of Pioneers 10 and 11. These can be broadly categorized as either invoking conventional mechanisms or utilizing principles of \"new physics\".
Initial efforts to explain the Pioneer anomaly focused on the possibility of on-board systematic forces. While these cannot be conclusively excluded [2; 3], the evidence to date does not support these mechanisms: the magnitude of the anomaly exceeds the acceleration that these mechanisms would likely produce, and the temporal evolution of the anomaly differs from that which one would expect, for instance, if the anomaly were due to thermal radiation of a decaying nuclear power source.
Conventional mechanisms external to the spacecraft were also considered. First among these was the possibility that the anomaly may be due to perturbations of the spacecrafts' orbits by as yet unknown objects in the Kuiper belt. Another possibility is that dust in thesolar system may exert a drag force, or it may cause a frequency shift, proportional to distance, in the radio signal. These proposals could not produce a model that is consistent with the known properties of the Pioneer anomaly, and may also be in contradiction with the known properties of planetary orbits.
The value of the Pioneer anomaly happens to be approximately \\(cH_{0}\\), where \\(c\\) is the speed of light and \\(H_{0}\\) is the Hubble constant at the present epoch. Attempts were made to exploit this numerical coincidence to provide a cosmological explanation for the anomaly, but it has been demonstrated that this approach would produce an effect with the opposite sign [2; 4].
As the search for a conventional explanation for the anomaly appeared unsuccessful, this provided a motivation to seek an explanation in \"new physics\". No such attempt to date produced a clearly viable mechanism for the anomaly [4].
The inability to explain the anomalous behavior of the Pioneers with conventional physics has resulted in a growing discussion about the origin of the detected signal. The limited size of the previously analyzed data set also limits our current knowledge of the anomaly. To determine the origin of \\(a_{P}\\), and especially before any serious discussion of new physics can take place, one must analyze the entire set of radio-metric Doppler data received from the Pioneers.
As of October 2007, an effort to recover this critical information, initiated at JPL in June 2005, has been completed; we now have almost 30 years of Pioneer 10 and 20 years of Pioneer 11 Doppler data, most of which was never used in the investigation of the anomaly. The primary objective of the upcoming analysis is to determine the origin of the Pioneer anomaly. To achieve this goal, we will investigate the recently recovered radio-metric Doppler and telemetry data focusing on the possibility that the anomaly might have a thermal nature; if so, our analysis will find the physical origin of the effect and will identify its basic properties.
A unique feature of these efforts is the use of telemetry files documenting the thermal and electrical state of the spacecraft. This information was not available previously; however, by May 2006, the telemetry files for the entire durations of both missions were recovered, pre-processed and are ready for the upcoming study. Both of the newly assembled data sets are pivotal to establishing the origin of the detected signal.
In this paper we will report on the status of the recovery of the Pioneers' flight telemetry and its usefulness for the analysis of the Pioneer anomaly.
## II Using flight telemetry to study the spacecrafts' behavior
All transmissions of both Pioneer spacecraft, including all engineering telemetry, were archived [4] in the form of files containing Master Data Records (MDRs). Originally, MDRs were scheduled for limited retention. Fortunately, the Pioneers' mission records avoided this fate: with the exception of a few gaps in the data [4] the entire mission record has been saved. These recently recovered telemetry readings are important in reconstructing a complete history of the thermal, electrical, and propulsion systems for both spacecraft. This, it is hoped, may in turn lead to a better determination of the spacecrafts' acceleration due to on-board systematic effects.
Telemetry formats can be broadly categorized as science formats versus engineering formats. Telemetry words included both analog and digital values. Digital values were used to represent sensor states, switch states, counters, timers, and logic states. Analog readings, from sensors measuring temperatures, voltages, currents and more, were encoded using 6-bit words. This necessarily limited the sensor resolution and introduced a significant amount of quantization noise. Furthermore, the analog-to-digital conversion was not necessarily linear; prior to launch, analog sensors were calibrated using a fifth-order polynomial. Calibration ranges were also established; outside these ranges, the calibration polynomials are known to yield nonsensical results.
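In software, the decoding step can be captured by a small helper like the one below; the coefficient values and the valid count range shown are hypothetical placeholders, standing in for the preflight calibration records of an actual sensor.

```python
def calibrate(count, coeffs, valid=(2, 61)):
    """Convert a raw 6-bit telemetry count into engineering units.

    `coeffs` holds the six coefficients (c0..c5) of the preflight
    fifth-order calibration polynomial; counts outside the calibrated
    range are rejected, since the polynomial is meaningless there.
    """
    if not 0 <= count <= 63:
        raise ValueError("not a 6-bit telemetry word")
    if not valid[0] <= count <= valid[1]:
        return None                        # outside the calibrated range
    return sum(c * count**n for n, c in enumerate(coeffs))

# Hypothetical coefficients for a temperature sensor (illustration only):
fin_root_cal = (-280.0, 8.1, 0.05, -1.2e-3, 9.0e-6, -2.4e-8)
temperature_F = calibrate(40, fin_root_cal)
```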
With the help of the information contained in these words, it is possible to reconstruct the history of RTG temperatures and power, radio beam power, electrically generated heat inside the spacecraft, spacecraft temperatures, and propulsion system.
Telemetry words are labeled using identifiers in the form of \\(C_{n}\\), where \\(n\\) is a number indicating the word position in the fixed format telemetry frames.
### RTG temperatures and power
The exterior temperatures of the RTGs were measured by one sensor on each of the four RTGs: the so-called \"fin root temperature\" sensor. Telemetry words \\(C_{201}\\) through \\(C_{204}\\) contain the fin root temperature sensor readings for RTGs 1 through 4, respectively. Figure 1 depicts the evolution of the RTG 1 fin root temperature for Pioneer 10.
A best fit analysis confirms that the RTG temperature indeed evolves in a manner consistent with the radioactive decay of the nuclear fuel on board. The results for all the other RTGs on both spacecraft are similar, confirming that the RTGs were performing thermally in accordance with design expectations.
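A minimal version of such a fit is sketched below, under the simplifying assumptions that the RTG fin root is in radiative equilibrium, so that its absolute temperature scales as the fourth root of the thermal power, and that the power follows the 87.74-year half-life of \\({}^{238}\\)Pu. The data arrays are hypothetical placeholders standing in for the decoded \\(C_{201}\\) record converted to kelvin.

```python
import numpy as np
from scipy.optimize import curve_fit

HALF_LIFE = 87.74                       # half-life of the 238Pu fuel, in years

def fin_root_model(t, T0):
    """Fin root temperature (K) vs. mission time for a radiatively
    cooled RTG: P ~ 2**(-t/T_half) and T ~ P**(1/4)."""
    return T0 * 2.0 ** (-t / (4.0 * HALF_LIFE))

# Hypothetical samples (years from launch, kelvin) standing in for C_201:
t_yr = np.array([0.5, 5.0, 10.0, 15.0, 20.0, 25.0])
T_K = np.array([444.6, 440.8, 436.5, 431.8, 427.5, 423.8])

(T0_fit,), _ = curve_fit(fin_root_model, t_yr, T_K, p0=[445.0])
residuals = T_K - fin_root_model(t_yr, T0_fit)   # should show no secular trend
```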
RTG electrical power can be estimated using two sensor readings per RTG, measuring RTG current and voltage. Currents for RTGs 1 through 4 appear as telemetry words \\(C_{127}\\), \\(C_{105}\\), \\(C_{114}\\), and \\(C_{123}\\), respectively; voltages are in telemetry words \\(C_{110}\\), \\(C_{125}\\), \\(C_{131}\\), and \\(C_{113}\\). Combined, these words yield the total amount of electrical power available on board:
All this electrical power is eventually converted to waste heat by the spacecrafts' instruments, with the exception of power radiated away by transmitters.
### Electrically generated heat
Whatever remains of electrical energy (Fig. 2) after accounting for the power of the transmitted radio beam is converted to heat on-board. Some of it is converted to heat outside the spacecraft body.
The Pioneer electrical system is designed to maximize the lifetime of the RTG thermocouples by ensuring that the current draw from the RTGs is always optimal. This means that power supplied by the RTGs may be more than that required for spacecraft operations. Excess electrical energy is absorbed by a shunt circuit that includes an externally mounted radiator plate. Especially early in the mission, when plenty of RTG power was still available, this radiator plate was the most significant component external to the spacecraft body that radiated heat. Telemetry word \\(C_{122}\\) tells us the shunt circuit current, from which the amount of power dissipated by the external radiator can be computed using the known ohmic resistance (\\(\\sim\\)5.25 \\(\\Omega\\)) of the radiator plate.
Other externally mounted components that consume electrical power are the Plasma Analyzer (\\(P_{\\rm PA}=4.2\\) W, telemetry word \\(C_{108}\\) bit 2), the Cosmic Ray Telescope (\\(P_{\\rm CRT}=2.2\\) W, telemetry word \\(C_{108}\\), bit 6), and the Asteroid/Meteoroid Detector (\\(P_{\\rm AMD}=2\\) W, telemetry word \\(C_{124}\\), bit 5). Though these instruments' exact power consumption is not telemetered, we know their average power consumption from design documentation, and the telemetry bits tell us when these instruments were powered.
Two additional external loads are the battery heater and the propellant line heaters. These represent a load of \\(P_{\\rm LH}=P_{\\rm BH}=2\\) W (nominal) each. The power state of these loads is not telemetered. According to mission logs, the battery heater was commanded off on both spacecraft on 12 May 1993.
Yet a further external load is the set of cables connecting the RTGs to the inverters. The resistance of these cables is known: it is 0.017 \\(\\Omega\\) for the inner RTGs (RTG 3 and 4), and 0.021 \\(\\Omega\\) for the outer RTGs (RTG 1 and 2). Using the RTG current readings it is possible to accurately determine the amount of power dissipated by these cables in the form of heat:
\\[P_{\\rm cable}=0.017(C_{114}^{2}+C_{123}^{2})+0.021(C_{127}^{2}+C_{105}^{2}).\\]
After accounting for all these external loads, whatever remains of the available electrical power on board is converted to heat inside the spacecraft. So long as the body of the spacecraft is in equilibrium with its surroundings, heat dissipated through its walls has to be equal to the heat generated inside:
\\[P_{\\rm body}=P_{E}-P_{\\rm cable}-P_{\\rm PA}-P_{\\rm CRT}-P_{\\rm AMD}-P_{\\rm LH }-P_{\\rm BH},\\]
with all the terms defined above.
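Putting the pieces of this subsection together, a bookkeeping routine along the following lines can evaluate the heat budget record by record. It implements exactly the relations quoted above; the telemetry values in the example dictionary are hypothetical calibrated readings (volts and amperes), not actual mission data, and the 5.25 \\(\\Omega\\) shunt radiator dissipation is computed separately since it is not part of \\(P_{\\rm body}\\).

```python
def heat_budget(tm, plasma_on, crt_on, amd_on,
                line_heaters_on=True, battery_heater_on=True):
    """Electrical heat bookkeeping from one decoded telemetry record.

    `tm` maps word labels to calibrated values: RTG voltages (V)
    and currents (A), plus the shunt current C122 (A).
    """
    # Total RTG electrical output.
    P_E = (tm['C110'] * tm['C127'] + tm['C125'] * tm['C105'] +
           tm['C131'] * tm['C114'] + tm['C113'] * tm['C123'])
    # Ohmic loss in the RTG-to-inverter cabling (heat released externally).
    P_cable = (0.017 * (tm['C114']**2 + tm['C123']**2) +
               0.021 * (tm['C127']**2 + tm['C105']**2))
    # Power dissipated by the externally mounted shunt radiator plate.
    P_shunt = 5.25 * tm['C122']**2
    # Externally mounted loads, switched according to telemetry bits.
    P_ext = 4.2 * plasma_on + 2.2 * crt_on + 2.0 * amd_on
    P_ext += 2.0 * line_heaters_on + 2.0 * battery_heater_on
    # Heat generated inside the spacecraft body.
    P_body = P_E - P_cable - P_ext
    return P_E, P_cable, P_shunt, P_body

# Hypothetical early-mission readings (illustration only):
tm = {'C110': 28.0, 'C125': 28.0, 'C131': 28.0, 'C113': 28.0,
      'C127': 1.4, 'C105': 1.4, 'C114': 1.4, 'C123': 1.4, 'C122': 2.0}
print(heat_budget(tm, plasma_on=True, crt_on=True, amd_on=True))
```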
### Compartment temperatures and thermal radiation
Figure 1: RTG 1 fin root temperatures (telemetry word \\(C_{201}\\); in \\({}^{\\circ}\\)F) for Pioneer 10.

Figure 2: Changes in total RTG electrical output (in W) on board Pioneer, as computed using the mission’s on-board telemetry.

As evident from Fig. 3, the appearance of the Pioneer spacecraft is dominated by the 2.74 m diameter high gain antenna (HGA). The spacecraft body, located behind the HGA, consists of a larger, regular hexagonal compartment housing the propellant tank and spacecraft electronics; an adjacent, smaller compartment housed science instruments. The spacecraft body is covered by multilayer thermal insulating blankets, except for a louver system located on the side opposite the HGA, which was activated by bimetallic springs to expel excess heat from the spacecraft.
Each spacecraft was powered by four radioisotope thermoelectric generators (RTGs) mounted in pairs at the end of two booms, approximately three meters in length, extended from two sides of the spacecraft body at an angle of 120\\({}^{\\circ}\\). A third boom, approximately 6 m long, held a magnetometer.
The total (design) mass of the spacecraft was \\(\\sim\\)250 kg at launch, of which 27 kg was propellant [5].
For the purposes of attitude control, the spacecraft were designed to spin at the nominal rate of 4.8 rpm. Six small monopropellant (hydrazine) thrusters, mounted in three thruster cluster assemblies, were used for spin correction, attitude control, and trajectory correction maneuvers (see Fig. 3).
The passive thermal control system consisted of a series of spring-activated louvers (see Fig. 4). The springs were bimetallic, and thermally (radiatively) coupled to the electronics platform beneath the louvers. The louver blades were highly reflective in the infrared. The assembly was designed so that the louvers fully open when temperatures reach 30\\({}^{\\circ}\\)C, and fully close when temperatures drop below 5\\({}^{\\circ}\\)C.
The effective emissivity of the thermal blankets used on the Pioneers is \\(\\epsilon_{\\rm sides}=0.085\\)[6]. The total exterior area of the spacecraft body is \\(A_{\\rm walls}=4.92\\) m\\({}^{2}\\). The front side of the spacecraft body that faces the HGA has an area of \\(A_{\\rm front}=1.53\\) m\\({}^{2}\\), and its effective emissivity, accounting for the fact that most thermal radiation this side emits is reflected by the back of the HGA, can be computed as \\(\\epsilon_{\\rm front}=0.0013\\). The area covered by louver blades is \\(A_{\\rm louv}=0.29\\) m\\({}^{2}\\); the effective emissivity of closed louvers is \\(\\epsilon_{\\rm louv}=0.04\\)[5]. The area that remains, consisting of the sides of the spacecraft and the portion of the rear not covered by louvers is \\(A_{\\rm sides}=A_{\\rm walls}-A_{\\rm front}-A_{\\rm louv}\\).
Figure 3: A drawing of the Pioneer spacecraft.
Using these numbers, we can compute the amount of electrically generated heat radiated through the (closed) louver system as a ratio of total electrical heat generated inside the spacecraft body:
\[P_{\rm louv}=\frac{\epsilon_{\rm louv}A_{\rm louv}P_{\rm body}}{\epsilon_{\rm louv}A_{\rm louv}+\epsilon_{\rm sides}A_{\rm sides}+\epsilon_{\rm front}A_{\rm front}}.\]
This result is a function of the electrical power generated inside the spacecraft body. However, we also have in our possession thermal vacuum chamber test results of the Pioneer louver system. These results characterize louver thermal emissions as a function of the temperature of the electronics platform beneath the louvers, with separate tests performed for the 2-blade and 3-blade louver assemblies. To utilize these results, we turn our attention to telemetry words representing electronics platform temperatures.
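Before bringing in the test data, the emissivity-area weighting above can be evaluated directly; in this sketch the constants are the values quoted in the text, and the roughly 4% louver fraction is simply what those numbers imply:

```python
# Fraction of the internally generated heat that leaves through the
# closed louver system, from the emissivity-area weighting derived above.

EPS_LOUV, A_LOUV = 0.04, 0.29      # closed louvers
EPS_FRONT, A_FRONT = 0.0013, 1.53  # side facing the back of the HGA
EPS_SIDES = 0.085                  # multilayer thermal blankets
A_SIDES = 4.92 - A_FRONT - A_LOUV  # A_walls - A_front - A_louv [m^2]

def louver_power(p_body: float) -> float:
    """Heat [W] radiated through the closed louvers for a given P_body."""
    w_louv = EPS_LOUV * A_LOUV
    w_total = w_louv + EPS_SIDES * A_SIDES + EPS_FRONT * A_FRONT
    return p_body * w_louv / w_total

print(louver_power(100.0))  # ~4.2 W, i.e. roughly 4% of P_body
```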
There are six platform temperature sensors (Fig. 5) inside the spacecraft body: four are located inside the main compartment, and two are in the science instrument compartment. The main compartment has a total of 12 2-blade louver assemblies; the science compartment has two 3-blade assemblies.
The thermal vacuum chamber tests provide values for emitted thermal power per louver assembly as a function of the temperature of the electronics platform behind the louver. This allows us to estimate the amount of thermal power leaving the spacecraft body through the louvers as a function of platform temperatures [6].
## IV Conclusions
By 2007, the existence of the Pioneer anomaly is no longer in doubt. A steadily growing part of the community has concluded that the anomaly should be subject to further investigation and interpretation. Our continuing effort to process and analyze Pioneer radio-metric and telemetry data is part of a broader strategy (see discussion at [3; 4]).
Based on the information provided by the MDRs, we were able to develop a high accuracy thermal, electrical, and dynamical model of the Pioneer spacecraft. This model will be used to further improve our understanding of the anomalous acceleration and especially to study the contribution from the on-board thermal environment to the anomaly.
It is clear that a thermal model for the Pioneer spacecraft would have to account for all heat radiation produced by the spacecraft. One can use telemetry information to accurately estimate the amount of heat produced by the spacecrafts' major components. The next step is to utilize this result along with information on the spacecrafts' design to estimate the amount of heat radiated in various directions.
This entails, on the one hand, an analysis of all available radio-metric data, to characterize the anomalous acceleration beyond the periods that were examined in previous studies. Telemetry, on the other hand, enables us to reconstruct a thermal, electrical, and propulsion system profile of the spacecraft. Soon, we should be able to estimate effects on the motion of the spacecraft due to on-board systematic acceleration sources, expressed as a function of telemetry readings. This provides a new and unique way to refine orbital predictions and may also lead to an unambiguous determination of the origin of the Pioneer anomaly.
Figure 4: Bottom view of the Pioneer 10/11 vehicle, showing the louver system. A set of 12 2-blade louver assemblies cover the main compartment in a circular pattern; an additional two 3-blade assemblies cover the compartment with science instruments.
Figure 5: Location of thermal sensors in the instrument compartment of Pioneer 10/11 [5]. Temperature sensors are mounted at locations 1 to 6.
###### Acknowledgements.
The work of SGT was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
## References
* (1) Anderson J D, Laing P A, Lau E L, Liu A S, Nieto M M, and Turyshev S G \"Indication, from Pioneer 10/11, Galileo, and Ulysses Data, of an Apparent Anomalous, Weak, Long-Range Acceleration,\" _Phys. Rev. Lett._**81** 2858-2861 (1998) (_Preprint_ gr-qc/9808081)
* (2) Anderson J D, Laing P A, Lau E L, Liu A S, Nieto M M, and Turyshev S G \"Study of the Anomalous Acceleration of Pioneer 10 and 11,\" _Phys. Rev. D_**65** 082004/1-50 (2002) (_Preprint_ gr-qc/0104064)
* (3) Turyshev S G, Nieto M M, and Anderson J D, \"Study of the Pioneer Anomaly: A Problem Set,\" _Amer. J. Phys._**73** 1033-1044 (2005) (_Preprint_ physics/0502123)
* (4) Turyshev S G, Toth V T, Kellogg L R, Lau E L, and Lee K J \"The Study of the Pioneer Anomaly: New Data and Objectives for New Investigation,\" _International Journ. Mod. Phys. D_**15**(1) 1-55 (2006) (_Preprint_ gr-qc/0512121)
* (5) NASA Ames Research Center \"Pioneer F/G: Spacecraft Operational Characteristics\" PC-202 (1971).
* (6) Toth V T and Turyshev S G, "The Pioneer Anomaly: seeking an explanation in newly recovered data." _Canad. J. Physics_ **84**(12) (2006) 1063-1087 (_Preprint_ gr-qc/0603016)

The Pioneer 10/11 spacecraft yielded the most precise navigation in deep space to date. However, their radio-metric tracking data received from the distances between 20-70 astronomical units from the Sun has consistently indicated the presence of a small, anomalous, Doppler frequency drift. The drift is a blue frequency shift that can be interpreted as a sunward acceleration of \(a_{P}=(8.74\pm 1.33)\times 10^{-10}\) m/s\({}^{2}\) for each particular spacecraft. This signal has become known as the Pioneer anomaly; the nature of this anomaly remains unexplained.
Recently new Pioneer 10 and 11 radio-metric Doppler and flight telemetry data became available. The newly available Doppler data set is significantly enlarged when compared to the data used in previous investigations and is expected to be the primary source for the investigation of the anomaly. In addition, the flight telemetry files, original project documentation, and newly developed software tools are now used to reconstruct the engineering history of both spacecraft. With the help of this information, a thermal model of the Pioneer vehicles is being developed to study possible contribution of thermal recoil force acting on the two spacecraft. The ultimate goal of these physics engineering efforts is to evaluate the effect of on-board systems on the spacecrafts' trajectories.
Keywords: Pioneer anomaly, deep-space navigation, thermal modeling.
P. G. Krastev
1Texas A&M University-Commerce, Commerce, TX 75429, U.S.A. 1
F. Sammarruca
2University of Idaho, Moscow, ID 83843, U.S.A.2
Bao-An Li
1Texas A&M University-Commerce, Commerce, TX 75429, U.S.A. 1
A. Worley
1Texas A&M University-Commerce, Commerce, TX 75429, U.S.A. 1
## 1 Introduction
Properties of matter under extreme pressure and density are of great interest in modern physics as they are closely related to numerous important nuclear phenomena in both the terrestrial laboratories and space. These properties depend on the interactions among the constituents of matter and are reflected in the equation of state (EOS) characterizing the medium. At high densities non-nucleonic degrees of freedom appear gradually due to the rapid rise of nucleon chemical potentials [1]. Among these particles are strange hyperons such as \\(\\Lambda^{0}\\) and \\(\\Sigma^{-}\\). At even higher densities matter is expected to undergo a phase transition to quark-gluon plasma [2]. Extracting the transition density from QCD lattice calculations is a formidable problem which is still presently unsolved. These complications introduce great challenges on our way to understanding behavior of matter in terms of interactions among its basic ingredients.
The EOS is important for many key processes in both nuclear physics and astrophysics. It has far-reaching consequences and governs dynamics of supernova explosions, formation of heavy elements, properties and structure of neutron stars, and the time variations of the gravitational constant \(G\). Presently, the detailed knowledge of the EOS is still far from complete mainly due to the very poorly known density dependence of the nuclear symmetry energy, \(E_{sym}(\rho)\). Different many-body theories yield, often, rather controversial predictions for the trend of \(E_{sym}(\rho)\) and thus the EOS. On the other hand, heavy-ion reactions at intermediate energies have already constrained significantly the density dependence of \(E_{sym}\) around nuclear saturation density, see e.g. [13; 14; 15; 16; 17; 18]. Consequently, these constraints also place significant limits on the possible configurations of both static [12] and (rapidly) rotating [10] neutron stars, and the possible time variations of \(G\)[9]. In this report we review these findings. We also revisit our recent studies on spin-asymmetric neutron matter [8] with a particular emphasis on high densities.
## 2 Spin-polarized neutron matter: high-density regime
Studies of magnetic properties of dense matter are of great current interest in conjunction with studies of pulsars, which are believed to be rotating neutron stars with strong surface magnetic fields. Here we summarize briefly the results of our recent study on properties of spin-polarized pure neutron matter. For a detailed description of the calculation we refer the interested reader to Ref. [8] and the references therein. The computation is microscopic and treats the nucleons in the medium relativistically. The starting point of every microscopic calculation of nuclear structure and reactions is a realistic nucleon-nucleon (NN) free-space interaction. A realistic and quantitative model for the nuclear force with reasonable theoretical foundations is the one-boson-exchange (OBE) model [3]. Our standard framework consists of the Bonn B OBE potential together with the Dirac-Brueckner-Hartree-Fock (DBHF) approach to nuclear matter. A detailed description of applications of the DBHF method to isospin symmetric and asymmetric matter, and neutron-star properties can be found in Refs. [4; 5; 6; 7].
Figure 1: Left panel: Neutron effective masses used in the DBHF calculations of the EOS. The angular dependence is averaged out; Right panel: Average energy per particle at densities equal to 0.5, 1, 2, 3, 5, 7, 9, and 10 times \(\rho_{0}\) (from lowest to highest curve).
To explore the possibility of a ferromagnetic transition the DBHF calculation has been extended to densities as high as \(10\rho_{0}\) (with \(\rho_{0}\approx 0.16fm^{-3}\) the density of normal nuclear matter). Here we recall that the onset of ferromagnetic instabilities above a given density would imply that the energy-per-particle of a completely polarized state is lower than that of unpolarized neutron matter. The same method as the one used in Ref. [7] has been applied to obtain the energy-per-particle at densities where a self-consistent solution cannot be obtained (see Section III of Ref. [7] for details). The (angle-averaged) neutron effective masses for both the unpolarized and fully polarized case are shown in Fig. 1 (left panel) as a function of density. The spin-asymmetry parameter, \(\beta=(\rho^{\uparrow}-\rho^{\downarrow})/(\rho^{\uparrow}+\rho^{\downarrow})\), quantifies the degree of asymmetry of the system. It can take values between -1 and +1, with 0 and the limits \(\pm 1\) corresponding to unpolarized and completely polarized matter, respectively. \(\rho^{\uparrow}\) and \(\rho^{\downarrow}\) are the densities of neutrons with spins up/down. DBHF predictions for the average energy per particle are shown in Fig. 1 (right panel) at densities ranging from \(\rho=0.5\rho_{0}\) to \(10\rho_{0}\). What we observe is best seen through the density dependence of the spin-symmetry energy, \(S(\rho)\), which is the difference between the energies of completely polarized and unpolarized neutron matter
\\[S(\\rho)=\\bar{e}(\\rho,\\beta=1)-\\bar{e}(\\rho,\\beta=0) \\tag{1}\\]
A negative sign of \(S(\rho)\) would signify that a polarized system is more stable than an unpolarized one. The spin-symmetry energy is shown as a function of density in Fig. 2 (left panel). We see that at high density the energy shift between polarized and unpolarized matter continues to grow, but at a smaller rate, and eventually appears to saturate. For a detailed analysis of the observed behavior of \(S(\rho)\) see Ref. [8].
Figure 2: Left panel: Density dependence of the spin symmetry energy obtained with the DBHF model. Right panel: Density dependence of the ratio \\(\\chi_{F}/\\chi\\).
Here we should mention that although the curvature of the spin-symmetry energy may suggest that ferromagnetic instabilities are in principle possible within the Dirac model, inspection of Fig. 2 reveals that such a transition does not take place at least up to \(10\rho_{0}\). Clearly, it would not be appropriate to explore even higher densities without additional considerations, such as a transition to a quark phase. In fact, even on the high side of the densities considered here, softening of the equation of state from additional degrees of freedom not included in the present model may be necessary in order to draw a more definite conclusion. In the right panel of Fig. 2 we show the density dependence of the magnetic susceptibility, \(\chi\), in terms of \(\chi_{F}\), the magnetic susceptibility of a free Fermi gas. \(\chi(\rho)\) is directly related to \(S(\rho)\) through
\\[\\chi=\\frac{\\mu^{2}\\rho}{2S(\\rho)}, \\tag{2}\\]
with \\(\\mu\\) the neutron magnetic moment. Clearly, similar observations apply to both left and right frames of Fig 2. (The magnetic susceptibility would show an infinite discontinuity, corresponding to a sign change of \\(S(\\rho)\\), in case of a ferromagnetic instability.)
In summary of this section, the EOSs we obtain with the DBHF model are generally rather repulsive at the larger densities. The energy of the unpolarized system (where all \(nn\) partial waves are allowed) grows rapidly at high density, with the result that the growth of the energy difference between totally polarized and unpolarized neutron matter tends to slow down with density. This may be interpreted as a _precursor_ of spin-separation instabilities, although no such transition is actually seen up to \(10\rho_{0}\).
## 3 Constraining a possible time variation of the gravitational constant \(G\) with nuclear data from terrestrial laboratories
Testing the constancy of the gravitational constant \\(G\\) is a longstanding fundamental question in natural science. As first suggested by Jofre, Reisenegger and Fernandez [20], Dirac's hypothesis [21] of a decreasing gravitational constant \\(G\\) with time due to the expansion of the Universe would induce changes in the composition of neutron stars, causing dissipation and internal heating. Eventually, neutron stars reach their quasi-stationary states where cooling, due to neutrino and photon emissions, balances the internal heating. The correlation of surface temperatures and radii of some old neutron stars may thus carry useful information about the rate of change of \\(G\\). Using the density dependence of the nuclear symmetry energy, constrained by recent terrestrial laboratory data on isospin diffusion in heavy-ion reactions at intermediate energies [13; 14; 15; 16; 17; 18], and the size of neutron skin in \\({}^{208}Pb\\)[22; 23; 24; 25], within the _gravitochemical heating_ formalism developed by Jofre et al. [20], we obtain an upper limit for the relative time variation \\(|\\dot{G}/G|\\) in the range \\((4.5-21)\\times 10^{-12}yr^{-1}\\). In what follows we briefly review our calculation. For details see Ref. [9].
Recently a new method, called _gravitochemical heating_[20], has been introduced to constrain a hypothetical time variation in \(G\), most frequently expressed as \(|\dot{G}/G|\). In Ref. [20] the authors suggested that such a variation of the gravitational constant would perturb the internal composition of a neutron star, producing entropy which is partially released through neutrino emission, while a similar fraction is eventually radiated as thermal photons. A constraint on the time variation of \(G\) is achieved via a comparison of the predicted surface temperature with the available empirical value of an old neutron star [26]. The gravitochemical heating formalism is based on the results of Fernandez and Reisenegger [27] (see also [28]) who demonstrated that internal heating could result from spin-down compression in a rotating neutron star (_rotochemical heating_). In both cases (gravito- and rotochemical heating) predictions rely heavily on the equation of state (EOS) of stellar matter used to calculate the neutron star structure. Accordingly, detailed knowledge of the EOS is critical for setting a reliable constraint on the time variation of \(G\).
Currently, theoretical predictions of the EOS of neutron-rich matter diverge widely, mainly due to the uncertain density dependence of the nuclear symmetry energy. Consequently, to provide a stringent constraint on the time variation of \(G\), one should attempt to reduce the uncertainty due to \(E_{sym}(\rho)\). Recently available nuclear reaction data allowed us to constrain significantly the density dependence of the symmetry energy, mostly in the sub-saturation density region, while high energy radioactive beam facilities under construction will provide a great opportunity to pin down the high-density behavior of the nuclear symmetry energy in the future. We apply the gravitochemical method with several EOSs describing matter of purely nucleonic (\(npe\mu\)) as well as hyperonic and hybrid stars. Among the nucleonic matter EOSs, we pay special attention to the one calculated with the MDI interaction [29]. The symmetry energy \(E_{sym}(\rho)\) of the MDI EOS is constrained in the sub-saturation density region by the available nuclear laboratory data, while in the high-density region we assume a continuous density functional. The EOS of symmetric matter for the MDI interaction is constrained up to about five times the normal nuclear matter density by the available data on collective flow in relativistic heavy-ion reactions.
Figure 3: Left panel: Equation of state of stellar matter in \(\beta\)-equilibrium. The upper panel shows the total energy density and lower panel the pressure as function of the baryon number density (in units of \(\rho_{0}\)); Right panel: Neutron star mass, proton fraction, \(Y_{p}\), and symmetry energy, \(e_{sym}\). The upper frame displays the neutron star mass as a function of baryon number density. The middle frame shows the proton fraction and the lower frame the nuclear symmetry energy as a function of density. (Symmetry energy is shown for the nucleonic EOSs only.) The proton fraction curve of the Hyb EOS is terminated at the beginning of the quark phase. The termination point is denoted by a “cross” character.
The EOSs applied with the gravitochemical heating method are shown in Fig. 3 (left panel). For a description of these EOSs see Ref. [9] and references therein. The parameter \(x\) is introduced in the MDI interaction to reflect the largely uncertain density dependence of \(E_{sym}(\rho)\) as predicted by various many-body approaches. As demonstrated in Refs. [12; 17], only equations of state with \(x\) between -1 and 0 have symmetry energies consistent with the isospin diffusion data and measurements of the skin thickness of \({}^{208}Pb\); we thus consider only these two limiting cases. Fig. 3 (right panel) displays the neutron star mass (upper frame), the proton fraction (middle frame) and the nuclear symmetry energy (lower frame). The shaded region in the upper frame corresponds to the mass constraint by Hotan et al. [30].
Figure 4: Left panel: Neutron star stationary surface temperature for stellar models satisfying the mass constraint by Hotan et al. [30]. The solid lines are the predictions versus the stellar radius for the considered neutron star sequences. Dashed lines correspond to the 68% and 90% confidence contours of the black-body fit of Kargaltsev et al. [26]. The value of \(|\dot{G}/G|=4.5\times 10^{-12}yr^{-1}\) is chosen so that predictions from the \(x=0\) EOS are just above the observational constraints; Right panel: Same as left panel but assuming \(|\dot{G}/G|=2.1\times 10^{-11}yr^{-1}\).
As shown in Ref. [20] the stationary surface temperature is directly related to the relative changing rate of \(G\) via \(T_{s}^{\infty}=\tilde{\mathcal{D}}\left|\frac{\dot{G}}{G}\right|^{2/7}\), where the function \(\tilde{\mathcal{D}}\) depends only on the stellar model and the equation of state. The correlation of surface temperatures and radii of some old neutron stars may thus carry useful information about the changing rate of \(G\). Using the constrained symmetry energy with \(x=0\) and \(x=-1\) shown in Fig. 3 (right panel), within the gravitochemical heating formalism, as shown in Fig. 4, we obtained an upper limit of the relative changing rate of \(G\) in the range of \((4.5-21)\times 10^{-12}yr^{-1}\). This is the best available estimate in the literature [9]. For a comparison, results with the EOS from recent DBHF calculations [4; 7] with the Bonn B OBE potential are also shown. Predictions with the DBHF\(+\)Bonn B EOS give roughly the same value for the stationary surface temperature, but at a slightly larger neutron-star radius relative to the \(x=0\) EOS. For the effect of hyperonic and quark phases of matter on the possible time variations of \(G\) we refer the reader to our analysis in Ref. [9].
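Inverting the stationary-temperature relation for the rate of change of \(G\) is then a one-liner. In the sketch below the value of \(\tilde{\mathcal{D}}\) is a placeholder chosen purely for illustration, not a number computed from the stellar models:

```python
def gdot_over_g(t_surface_k: float, d_tilde: float) -> float:
    """Invert T_s^inf = D_tilde |Gdot/G|^(2/7) for |Gdot/G| [yr^-1].
    d_tilde encodes the stellar model and the EOS; placeholder here."""
    return (t_surface_k / d_tilde) ** 3.5

# Illustration: a hypothetical d_tilde for which a 1.0e5 K surface
# temperature corresponds to the lower end of the quoted range.
print(gdot_over_g(1.0e5, d_tilde=1.75e8))  # ~4.4e-12 per year
```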
The gravitochemical heating mechanism has the potential to become a powerful tool for constraining gravitational physics. Since the method relies on the detailed neutron star structure, which, in turn, is determined by the EOS of stellar matter, further progress in our understanding of properties of dense, neutron-rich matter will make this approach more effective.
## 4 Constraining properties and structure of rapidly rotating neutron stars
Because of their strong gravitational binding neutron stars can rotate very fast [31]. The first millisecond pulsar PSR1937\(+\)214, spinning at \(\nu=641\,Hz\)[32], was discovered in 1982, and during the next decade or so almost every year a new one was reported. In recent years the situation changed considerably with the discovery of an anomalously large population of millisecond pulsars in globular clusters [2], where the density of stars is roughly 1000 times that in the field of the galaxy and which are therefore very favorable sites for the formation of rapidly rotating neutron stars that have been spun up by means of mass accretion from a binary companion. Presently more than 700 pulsars have been reported, and the detection rate is rather high.
In 2006 Hessels et al. [33] reported the discovery of a very rapid pulsar, J1748-2446ad, rotating at \(\nu=716\,Hz\) and thus breaking the previous record (of \(641\,Hz\)). However, even this high rotational frequency is too low to affect the structure of neutron stars with masses above \(1M_{\odot}\)[31]. Such pulsars belong to the slow-rotation regime since their frequencies are considerably lower than the Kepler (mass-shedding) frequency \(\nu_{k}\). (The mass-shedding, or Kepler, frequency is the highest possible frequency for a star before it starts to shed mass at the equator.) Neutron stars with masses above \(1M_{\odot}\) enter the rapid-rotation regime if their rotational frequencies are higher than \(1000\,Hz\)[31]. A recent report by Kaaret et al. [34] suggests that the X-ray transient XTE J1739-285 contains the most rapid pulsar ever detected, rotating at \(\nu=1122\,Hz\). This discovery has reawakened the interest in building models of rapidly rotating neutron stars [31].
Applying several nucleonic equations of state (see previous section) and the \\(RNS^{3}\\) code developed and made available to the public by Nikolaos Stergioulas [35;36], we construct one-parameter 2-D stationary configurations of rapidly rotating neutron stars (for details see Ref. [10]). The computation solves the hydrostatic and Einstein field equations for mass distributions rotating rigidly under the assumptions of stationary and axial symmetry about the rotational axis, and reflection symmetry about the equatorial plane.
The effect of ultra-fast rotation at the Kepler (mass-shedding) frequency is examined in the left panel of Fig. 5 (see also Table 1) where the stellar gravitational mass is given as a function of the _equatorial_ radius. Predictions are shown for both static (non-rotating) and rapidly rotating stars. We observe that the total gravitational mass supported by a given EOS is increased by rotation up to 17% (see Ref. [10]). At the same time, the circumferential radius is increased by several kilometers while the polar radius (not shown here) is decreased by several kilometers, leading to an overall oblate shape of the rotating star.
The first column identifies the equation of state. The remaining columns exhibit the following quantities for the maximally rotating models with maximum gravitational mass: gravitational mass; its percentage increase over the maximum gravitational mass of static models; central mass energy density; maximum rotational frequency.
\\begin{table}
\begin{tabular}{l c c c c} EOS & \(M_{max}(M_{\odot})\) & Increase (\%) & \(\epsilon_{c}(\times 10^{15}g\ cm^{-3})\) & \(\nu_{k}(Hz)\) \\ \hline \hline MDI(x=0) & 2.25 & 15 & 2.59 & 1742 \\ APR & 2.61 & 17 & 2.53 & 1963 \\ MDI(x=-1) & 2.30 & 14 & 2.21 & 1512 \\ DBHF+Bonn B & 2.69 & 17 & 2.06 & 1685 \\ \hline \end{tabular}
\end{table}
Table 1: Maximum-mass rapidly rotating models at the Kepler frequency \(\nu=\nu_{k}\).
Figure 5: Left panel: Mass-radius relation. Both static (solid lines) and Keplerian (broken lines) sequences are shown. The \(1-\sigma\) error bar corresponds to the measurement of the mass and radius of EXO 0748-676 [37]; Right panel: Gravitational mass versus circumferential radius for neutron stars rotating at \(\nu=1122\,Hz\).
Models of neutron stars rotating at \(1122\,Hz\)[34] are shown in Fig. 5 (right panel). Stability with respect to mass-shedding from the equator implies that at a given gravitational mass the equatorial radius \(R_{eq}\) should be smaller than \(R_{eq}^{max}\) corresponding to the Keplerian limit [31]. On the other hand, the stellar sequences are terminated at \(R_{eq}^{min}\) where the star becomes unstable against axial-symmetric perturbations. In Fig. 5 (right panel) we observe that the range of the allowed masses supported by a given EOS for rapidly rotating neutron stars becomes narrower than that of static configurations. This effect becomes stronger with increasing frequency and depends upon the EOS. Since predictions from the \(x=0\) and \(x=-1\) EOSs represent the limits of the neutron star models consistent with the nuclear data from terrestrial laboratories, we conclude that the mass of the neutron star in XTE J1739-285 is between 1.7 and \(2.1M_{\odot}\).
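A rough consistency check on such numbers can be made with the empirical mass-shedding relation \(\nu_{k}\approx 1.08\,kHz\,(M/M_{\odot})^{1/2}(R/10\,km)^{-3/2}\) quoted in the literature (see, e.g., Ref. [31]). The sketch below is no substitute for the full RNS computation, but it illustrates how a 1122 Hz spin bounds the stellar radius at a given mass:

```python
def kepler_frequency_hz(mass_msun: float, radius_km: float) -> float:
    """Empirical mass-shedding frequency ~1.08 kHz sqrt(M) (R/10km)^-1.5."""
    return 1080.0 * mass_msun ** 0.5 * (radius_km / 10.0) ** -1.5

def max_radius_km(mass_msun: float, nu_hz: float = 1122.0) -> float:
    """Largest radius [km] whose Kepler frequency still exceeds nu_hz."""
    return 10.0 * (1080.0 * mass_msun ** 0.5 / nu_hz) ** (2.0 / 3.0)

for m in (1.7, 2.1):  # the mass range inferred for XTE J1739-285
    print(m, round(max_radius_km(m), 1))  # ~11.6 km and ~12.5 km
```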
## 5 Summary
We have presented an overview of our recent studies of effective interactions in dense neutron-rich matter and their applications to problems with astrophysical significance. The DBHF calculation of properties of spin-polarized neutron matter has been extended to high densities. Although no transition to a ferromagnetic phase is actually seen up to 10\(\rho_{0}\), the observed behavior of the spin-symmetry energy suggests that such a transition may be possible at much higher densities. Applying the gravitochemical heating formalism developed by Jofre et al. [20] and the EOS with constrained symmetry energy, we have provided a limit on the possible time variation of the gravitational constant \(G\) in the range \((4.5-21)\times 10^{-12}yr^{-1}\). Our findings also allowed us to constrain the mass of the neutron star in XTE J1739-285 to be between 1.7 and \(2.1M_{\odot}\).
In closing our discussion we would like to emphasize that further progress of our understanding of properties of dense matter can be achieved through coherent efforts of experiment, theory/modeling, and astrophysical observations.
## Acknowledgments
We would like to thank Rodrigo Fernandez and Andreas Reisenegger for helpful discussions and assistance with the numerics of the gravitochemical heating method. We also thank Fiorella Burgio for providing the hyperonic and hybrid EOSs and Wei-Zhou Jiang for helpful discussions. The work of Plamen G. Krastev, Bao-An Li and Aaron Worley was supported by the National Science Foundation under Grant No. PHY0652548 and the Research Corporation under Award No. 7123. The work of Francesca Sammarruca was supported by the U.S. Department of Energy under grant number DE-FG02-03ER41270.
## References
* [1] M. Baldo, G. F. Burgio and H. -J. Schulze, Phys. Rev. C **61**, 055801 (2000).
* [2] F. Weber, _Pulsars as Astrophysical Laboratories for Nuclear and Particle Physics_, Bristol, Great Britain: IOP Publishing (1999).
* [3] R. Machleidt, Adv. Nucl. Phys. **19**, 189 (1989).
* [4] D. Alonso and F. Sammarruca, Phys. Rev. C **67**, 054301 (2003).
* [5] F. Sammarruca, W. Barredo and P. Krastev, Phys. Rev. C **71**, 064306 (2005).
* [6] F. Sammarruca and P. Krastev, Phys. Rev. C **73**, 014001 (2006).
* [7] P. G. Krastev and F. Sammarruca, Phys. Rev. C **74**, 025808 (2006).
* [8] F. Sammarruca and P. G. Krastev Phys. Rev. C **75**, 034315 (2007).
* [9] P. G. Krastev and B. A. Li, Phys. Rev. C (submitted); arXiv:nucl-th/0702080.
* [10] P. G. Krastev, B. A. Li and A. Worley, ApJ (submitted); arXiv:0709.3621 [astro-ph].
* [11] B. A. Li and L. W. Chen, Phys. Rev. C **72**, 064611 (2005).
* [12] B. A. Li and A. W. Steiner, Phys. Lett. B **642**, 436 (2006).
* [13] L. Shi and P. Danielewicz, Phys. Rev. C **68**, 064604 (2003).
* [14] M.B. Tsang et al., Phys. Rev. Lett. **92**, 062701 (2004).
* [15] L. W. Chen, C. M. Ko and B. A. Li, Phys. Rev. Lett. **94**, 032701 (2005).
* [16] A. W. Steiner and B. A. Li, Phys. Rev. C **72**, 041601 (2005).
* [17] B. A. Li and L. W. Chen, Phys. Rev. C **72**, 064611 (2005).
* [18] L. W. Chen, C. M. Ko and B. A. Li, Phys. Rev. C **72**, 064309 (2005).
* [19] P. A. M. Dirac, Nature **139**, 323 (1937).
* [20] P. Jofre, A. Reisenegger and R. Fernandez, Phys. Rev. Lett. **97**, 131102 (2006).
* [21] P. A. M. Dirac, Nature **139**, 323 (1937).
* [22] A. W. Steiner, M. Prakash, J. M. Lattimer and P. J. Ellis, Phys. Rept. **411**, 325 (2005).
* [23] C. J. Horowitz and J. Piekarewicz, Phys. Rev. Lett. **86**, 5647 (2001).
* [24] C. J. Horowitz and J. Piekarewicz, Phys. Rev. C **66**, 055803 (2002).
* [25] B. G. Tod-Rutel and J. Piekarewicz, Phys. Rev. Lett. **95**, 122501 (2005).
* [26] O. Kargaltsev, G. G. Pavlov and R. W. Romani, Astrophys. J. **602**, 327 (2004).
* [27] R. Fernandez and A. Reisenegger, Astrophys. J. **625**, 291 (2005).
* [28] A. Reisenegger, Astrophys. J. **442**, 749 (1995).
* [29] C. B. Das, S. D. Gupta, C. Gale and B. A. Li, Phys. Rev. C **67**, 034611 (2003).
* [30] A. W. Hotan, M. Bailes and S. M. Ord, Mon. Not. R. Astron. Soc. **369**, 1502-1520 (2006).
* [31] M. Bejger, P. Haensel, and J. L. Zdunik, A&A, 464, L49 (2007).
* [32] D. C. Backer, S. R. Kulkarni, C. Heiles, et al., Nature, 300, 615 (1982).
* [33] J. W. T. Hessels, S. M. Ransom, I. H. Stairs, et al., Science, 311, 1901 (2006).
* [34] P. Kaaret, J. Prieskorn, J. J. M. in't Zand, et al., Astrophys. J. **657**, L97 (2007).
* [35] N. Stergioulas, Living Rev. Rel., 6, 3 (2003).
* [36] N. Stergioulas and J. L. Friedman, Astrophys. J., 444, 306 (1995).
* [37] F. Ozel, Nature, 441, 1115 (2006).

Properties of effective interactions in neutron-rich matter are reflected in the medium's equation of state (EOS), which is a relationship among several state variables. Spin and isospin asymmetries play an important role in the energy balance and could alter the stability conditions of the nuclear EOS. The EOS has far-reaching consequences for numerous nuclear processes in both the terrestrial laboratories and the cosmos. Presently the EOS, especially for neutron-rich matter, is still very uncertain. Heavy-ion reactions provide a unique means to constrain the EOS, particularly the density dependence of the nuclear symmetry energy. On the other hand, microscopic, self-consistent, and parameter-free approaches are ultimately needed for understanding nuclear properties in terms of the fundamental interactions among the basic constituents of nuclear systems. In this talk, after a brief review of our recent studies on spin-polarized neutron matter, we discuss constraining the changing rate of the gravitational constant \(G\) and properties of (rapidly) rotating neutron stars by using a nuclear EOS partially constrained by the latest terrestrial nuclear laboratory data.
B. T. Welsch1 and Y. Li1
######
Footnote 1: affiliation: Space Sciences Laboratory, University of California, 7 Gauss Way, Berkeley, CA 94720-7450
## 1 Strong gradients across PILs
It has been known for decades that flares and filament eruptions (which form CMEs) originate along polarity inversion lines (PILs) of the radial photospheric magnetic field. In studies using photospheric vector magnetograms, Falconer _et al._ (2003, 2006) reported a strong correlation between active region CME productivity and the total length of PILs with strong potential transverse fields (\\(>\\) 150 G) and strong gradients in the LOS field (greater than 50 G Mm\\({}^{-1}\\)). They used a \\(\\pm\\)2-day temporal window for correlating magnetogram properties with CMEs. Falconer _et al._ (2003) noted that these correlations remained essentially unchanged for \"strong gradient\" thresholds from 25 to 100 G Mm\\({}^{-1}\\). Using more than 2500 MDI (LOS) magnetograms, Schrijver (2007) found a strong correlation between major (X- and M-class) flares and the total unsigned magnetic flux near (within \\(\\sim\\) 15 Mm) strong-field PILs -- defined, in his work, as regions where oppositely signed LOS fields that exceed 150 G lie closer to each other than the instrument's \\(\\sim\\) 2.9 Mm resolution. Schrijver's (2007) effective gradient threshold, \\(\\sim\\) 100 G Mm\\({}^{-1}\\), is stronger than that used by Falconer _et al._ (2003, 2006).
Although these studies were published recently, the association between flares and \(\delta\) sunspots, which possess opposite-sign umbrae within the same penumbra -- and therefore also possess strong-field PILs -- has been well known for some time (Kunzel 1960; Sammis _et al._ 2000). In particular, \(\beta\gamma\delta\) spot groups are most likely to flare (Sammis _et al._ 2000). A \(\beta\gamma\) designation means no obvious north-south PIL is present in an active region (Zirin 1988).
We note that Cui _et al._ (2006) found that the occurrence of flares is correlated with the maximum magnitude of the horizontal gradient in active region LOS magnetograms -- not just near PILs -- and that the correlation increases strongly for gradients stronger than \\(\\sim\\) 400 G Mm\\({}^{-1}\\).
One would expect the measures of CME- and flare-productivity developed by both Falconer _et al._ (2003, 2006) and Schrijver (2007) to be larger for larger active regions. Importantly, however, both studies showed that their measures of flux near strong-field PILs are a better predictor of flare productivity than total unsigned magnetic flux. Evidently, more flux is not, by itself, as significant a predictor of flares as more flux near strong-field PILs.
These intriguing results naturally raise the question, \"How do strong-field PILs form?\" For brevity, we hereafter refer to strong-field PILs as SPILs.
Schrijver (2007) contends that large SPILs form primarily, if not solely, by emergence. But he also noted that flux emergence, by itself, does not necessarily lead to the formation of SPILs. Rather, a particular type of magnetic structure must emerge, one containing a long SPIL at its emergence. He suggests such structures are horizontally oriented, filamentary currents.
Beyond the \"intact emergence\" scenario presented by Schrijver (2007), other mechanisms can generate SPILs. When new flux emerges in close proximity to old flux -- a common occurrence (Harvey and Zwaan 1993) -- SPILs can form along the boundaries between old and new flux systems. Converging motions in flux that has already emerged can also generate SPILs. If the convergence leads to flux cancellation by some mechanism -- emergence of U loops, submergence of inverse-U loops, or reconnective cancellation (Welsch 2006) -- then the total unsigned flux in the neighborhood of the SPIL might decrease as the SPIL forms. We note that, while cancellation in already-emerged fields can occur via flux emergence (from upward moving U-loops), the emergence of a new flux system across the photosphere must increase the total unsigned flux that threads the photosphere.
If the emergence of new flux were primarily responsible for SPILs, then a straightforward prediction would be that an increase in total unsigned flux should be correlated with an increase in the amount of unsigned flux near SPILs. Hence, observations showing that increases in the unsigned flux near SPILs frequently occur without a corresponding increase in total unsigned flux would rule out new flux emergence as the sole cause of these strong field gradients.
Our goal is to investigate the relationship between increases in the amount of unsigned flux near SPILs with changes in unsigned flux in the active regions containing the SPILs, to determine, if possible, which processes generate SPILs.
## 2 Data
From days-long time series of deprojected, 96-minute, full-disk MDI magnetograms for \(N_{AR}=64\) active regions, we computed the rates of change of unsigned flux near SPILs, following the method described by Schrijver (2007). We also computed the rates of change of the total unsigned line-of-sight magnetic flux in these active regions.
Our active region sample was chosen for use in a separate study of the relationships between surface flows derived from magnetograms and CMEs. For the purposes of that study, we typically selected active regions with a single, well defined PIL, for ease in identifying the presence of shearing and/or converging flows some CME models employ (Antiochos _et al._ 1999; Linker _et al._ 2001). The sample used here includes regions from 1996 - 1998, and includes regions that did and did not produce CMEs. Some of our magnetograms image the same active region as it rotated back onto the disk one or more times. Also, some of our selected regions are so decayed that they lack spots, and therefore have no NOAA designation.
Here, we analyze \(N_{\rm mag}=4062\) magnetograms. Pixels more than \(45^{\circ}\) from disk center were ignored. To convert the LOS field, \(B_{\rm LOS}\), to an estimated radial field, \(B_{R}\), cosine corrections were used, \(B_{R}=B_{\rm LOS}/\cos(\Theta)\), where \(\Theta\) is the angle from disk center.
Triangulation was used to interpolate the \\(B_{R}\\) data -- regularly gridded in the plane-of-sky, but irregularly gridded in spherical coordinates \\((\\theta,\\phi)\\) on the solar surface -- onto points \\((\\theta^{\\prime},\\phi^{\\prime})\\) corresponding to a regularly gridded, Mercator projection of the spherical surface. This projection was adopted because it is conformal (locally shape-preserving), necessary to ensure displacements measured in the tracking study mentioned above were not biased in direction. For computing gradients, a conformal projection is also appropriate. The background grayscale in Figure 1 is a typical reprojected magnetogram. We note that the price of preserving shapes in the deprojection is distortion of scales; but this can be easily corrected.
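The two deprojection steps just described are straightforward to express in code; the sketch below omits the triangulation-based interpolation itself, and the function names are our own:

```python
import numpy as np

def los_to_radial(b_los, theta):
    """Cosine correction B_R = B_LOS / cos(Theta), Theta in radians;
    pixels more than 45 degrees from disk center are discarded upstream."""
    return np.asarray(b_los) / np.cos(theta)

def mercator_xy(lon, lat):
    """Conformal (Mercator) map coordinates for heliographic longitude and
    latitude [radians]. Shapes are locally preserved, while scales are
    stretched by 1/cos(lat), which can be corrected afterwards."""
    return np.asarray(lon), np.log(np.tan(np.pi / 4.0 + np.asarray(lat) / 2.0))
```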
Each active region was tracked over 3 - 5 days, and cropped with a moving window. A list of tracked active regions and mpeg movies of the active regions are online at http://sprg.ssl.berkeley.edu/~yanli/lct/.
## 3 Analysis methods
To identify SPILs, we used the gradient identification technique of Schrijver (2007). For a magnetogram at time \(t_{i}\), binary positive/negative strong-field masks -- where \(B_{R}>150\) G and \(B_{R}<-150\) G, respectively -- were constructed, then dilated by a (3x3) kernel to create dilated positive and negative bitmaps, \(M_{\pm}\). These procedures are illustrated in Figure 1. Regions of overlap, where \(M_{\rm OL}=M_{+}M_{-}=1\), were identified as SPILs. In Figure 1, \(M_{\rm OL}\neq 0\) for a single pixel, at \((x,y)\) = (112,91).
To quantitatively define neighborhoods around SPILs, \\(M_{\\rm OL}\\) is convolved with a normalized Gaussian,
\\[G(u,v)=G_{0}^{-1}\\exp(-[u^{2}+v^{2}]/2\\sigma^{2}) \\tag{1}\\]
where \\(G_{0}=\\int du\\int dv\\exp(-[u^{2}+v^{2}]/2\\sigma^{2})\\), and \\(\\sigma=9\\) pixels (corresponding to a FWHM \\(\\simeq\\) 15 Mm at disk center), to create \"weighting maps,\" \\(C_{MG}\\), where
\[C_{MG}(x,y)={\rm convol}(M_{\rm OL}(x,y),G). \tag{2}\]
Figure 2 shows the product of \(B_{R}\) with such a weighting map. Following Schrijver (2007), we totaled the unsigned magnetic field over the weighting map to determine \(R\), a measure of the unsigned flux near SPILs,
\\[R=\\sum\\left|B_{R}\\right|C_{MG}. \\tag{3}\\]
From a sample of more than 2500 MDI magnetograms, Schrijver (2007) showed that \\(R\\) is correlated with major (X- and M-class) flares.
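In code, the construction of Eqs. (1)-(3) maps onto standard image-processing routines; a sketch (scipy's `gaussian_filter` approximates the normalized Gaussian convolution of Eq. (2), truncated at a few \(\sigma\)):

```python
import numpy as np
from scipy import ndimage

SIGMA = 9.0  # Gaussian width [pixels]; FWHM ~ 15 Mm at disk center

def spil_weighting_map(b_r, threshold=150.0):
    """Overlap map M_OL of the dilated +/- strong-field masks, and the
    weighting map C_MG of Eq. (2)."""
    kernel = np.ones((3, 3), dtype=bool)
    m_plus = ndimage.binary_dilation(b_r > threshold, structure=kernel)
    m_minus = ndimage.binary_dilation(b_r < -threshold, structure=kernel)
    m_ol = (m_plus & m_minus).astype(float)
    c_mg = ndimage.gaussian_filter(m_ol, sigma=SIGMA)  # normalized kernel
    return m_ol, c_mg

def r_measure(b_r, c_mg):
    """R = sum |B_R| C_MG, Eq. (3), in flux-density units."""
    return float(np.sum(np.abs(b_r) * c_mg))
```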
Figure 1: The background grayscale is typical, reprojected magnetogram; white is positive flux, black is negative flux. The inner black and white contours enclose signed, strong-field masks — regions where \\(B_{R}>150\\) G and \\(B_{R}<-150\\) G (respectively). The outer black and white contours show the outlines of \\(M_{\\pm}\\), dilated bitmaps of the strong-field masks with a 3 \\(\\times\\) 3 kernel function. For this magnetogram, the dilated bitmaps overlap at a single pixel, at \\((x,y)=\\) (112,91).
For each of the \(N_{R}=1621\) magnetograms with \(R\neq 0\), we summed the weighted absolute magnetic field in the previous magnetogram, \(B_{R}(t_{i-1})\), using the weighting map from \(t_{i}\), to compute the backwards-difference \(\Delta R\),
\\[\\Delta R=\\sum(|B_{R}(t_{i})|-|B_{R}(t_{i-1})|)\\,C_{MG}. \\tag{4}\\]
We also computed the change in summed, unsigned field,
\\[\\Delta{\\cal B}=\\sum(|B_{R}(t_{i})|-|B_{R}(t_{i-1})|)\\, \\tag{5}\\]
to determine if new flux is emerging or if flux is canceling. If new flux is emerging, we expect \\(\\Delta{\\cal B}>0\\). If flux is canceling, we expect \\(\\Delta{\\cal B}<0\\). Like Schrijver (2007), we have opted to keep \\(R\\) in units of flux density; for simplicity, we also keep \\(\\Delta{\\cal B}\\) in these same units.
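The backwards differences of Eqs. (4) and (5) then share the weighting map evaluated at \(t_{i}\); continuing the sketch above:

```python
import numpy as np

def delta_r_and_b(b_r_now, b_r_prev, c_mg_now):
    """Backwards differences of Eqs. (4)-(5): the change of unsigned field
    near SPILs (Delta R) and over the whole window (Delta B)."""
    d_abs = np.abs(b_r_now) - np.abs(b_r_prev)
    return float(np.sum(d_abs * c_mg_now)), float(np.sum(d_abs))
```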
Figure 2: \\(B_{R}\\) multiplied by a weighting map, \\(C_{MG}\\). \\(R\\) is the total unsigned field the window, \\(\\sum|B_{R}|\\,C_{MG}\\).
When the overlap map \\(M_{\\rm OL}\\) for \\(B_{R}(t_{i})\\) is identically zero, \\(R\\) is also zero, and \\(\\Delta R\\) and \\(\\Delta{\\cal B}\\) are not computed.
In § 1 we discussed processes that can cause changes in \(R\). What processes can lead to \(\Delta{\cal B}\neq 0\)? Emergence of new flux or cancellation (both only happen at PILs) can make \(\Delta{\cal B}\neq 0\), and these processes are probably related to evolution in \(R\). Flux can also cross into or out of the cropping window. Since our cropping windows were selected to include essentially all of each tracked active region's flux, systematic errors arising in this way are expected to be small. A more severe effect is the "unipolar appearance" phenomenon characterized by Lamb _et al._ (2007), who found that the majority of newly detected flux in the quiet sun is due to coalescence of previously existing, but unresolved, single-polarity flux into concentrations large and strong enough to detect. While it is unclear if the conclusion reached by Lamb _et al._ (2007) for the quiet sun also applies in active regions, this is plausible. Moreover, much as flux can "appear," flux can also disappear, via dispersive photospheric flows or perhaps even molecular diffusivity. Also, simultaneous emergence of new flux and cancellation of existing flux can occur within the same active region, masking the effects of both processes. Practically, therefore, we can only refer to increases in unsigned flux as "possible new flux emergence," and to decreases in unsigned flux as "possible cancellation."
## 4 Results and conclusions
In Figure 3, we show a scatter plot of changes in \(R\) as a function of changes in \({\cal B}\). The plot does not show the full range in \(\Delta{\cal B}\), but the \(\Delta R\) for outliers on the horizontal axis are near zero. One striking feature of the plot is its flatness, i.e., that most changes in \({\cal B}\) are not associated with any change in \(R\). In Table 1, we tabulated the data points in each quadrant of this plot. Clearly, increases in \(R\), the unsigned flux near SPILs, usually occur simultaneously with increases in the unsigned flux over the entire active region. Increases in \(R\) occur less frequently when flux is decreasing, i.e., during cancellation.
We set out to answer the question, \"How do strong-field PILs form?\" We related changes in total, unsigned flux over whole active regions with changes in total, unsigned flux in subwindows of the same active regions -- defined by weighting maps. One might expect, therefore, that these quantities should be correlated, casting doubt about our ability to discriminate between changes in total flux in active regions and in subwindows. If the two were strongly correlated, the excess of events with \\(\\Delta R>0\\) and \\(\\Delta{\\cal B}>0\\) might not be very meaningful. In fact, however, \\(\\Delta R\\) and \\(\\Delta{\\cal B}\\) are poorly correlated: the two have a linear correlation coefficient \\(r=0.29\\), and a rank-order coefficient of 0.36. This suggests that the relationship between increases in \\(R\\) and increases in total, unsigned active region flux is not an artifact of our approach.
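Both coefficients are standard statistics; for completeness, a sketch of how they would be computed from the 1621 \((\Delta R,\Delta{\cal B})\) pairs:

```python
import numpy as np
from scipy import stats

def correlations(delta_r, delta_b):
    """Pearson (linear) and Spearman (rank-order) correlation coefficients;
    the text reports 0.29 and 0.36, respectively."""
    r_lin, _ = stats.pearsonr(np.asarray(delta_r), np.asarray(delta_b))
    r_rank, _ = stats.spearmanr(delta_r, delta_b)
    return r_lin, r_rank
```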
\\begin{table}
\\begin{tabular}{c|c|c} & \\(\\Delta{\\cal B}<0\\) & \\(\\Delta{\\cal B}>0\\) \\\\ \\hline \\(\\Delta R>0\\) & 215 & 671 \\\\ \\hline \\(\\Delta R<0\\) & 363 & 371 \\\\ \\end{tabular}
\\end{table}
Table 1: Breakdown of Flux Changes
Nonetheless, our active region sample is not ideally suited to address the origin of SPILs, generally. Our sample was not unbiased with respect to active region morphology; we selected regions with well-defined PILs. In addition, our sample included some decayed active regions that lack NOAA AR designations. Consequently, we believe that a follow-up study, with a much larger, unbiased sample of active regions, is warranted.
With caveats, therefore, our study supports Schrijver's (2007) contention that the emergence of new flux creates the strong-field polarity inversion lines that he found to be correlated with flares.
We acknowledge the support of NSF Grant NSF-ATM 04-51438.
Figure 3: A scatter plot of changes in \(R\) as a function of changes in \(\mathcal{B}\). Increases in \(R\), the unsigned flux near SPILs, usually occur simultaneously with increases in the unsigned flux over the entire active region. Increases in \(R\) only occur rarely when flux is decreasing, i.e., during cancellation. For a breakdown of the data points in each quadrant, see Table 1.
## References
* Antiochos, S. K., DeVore, C. R., and Klimchuk, J. A. 1999, ApJ, 510, 485.
* Cui, Y., Li, R., Zhang, L., He, Y., and Wang, H. 2006, Solar Phys., 237, 45.
* Falconer, D. A., Moore, R. L., and Gary, G. A. 2003, Journal of Geophysical Research (Space Physics), 108(A10), 11.
* Falconer, D. A., Moore, R. L., and Gary, G. A. 2006, ApJ, 644, 1258.
* Harvey, K. L. and Zwaan, C. 1993, Solar Phys., 148, 85.
* Kunzel, H. 1960, Astronomische Nachrichten, 285, 271.
* Lamb, D. A., DeForest, C. E., Hagenaar, H. J., Parnell, C. E., and Welsch, B. T. 2007, ApJ, submitted.
* Linker, J. A., Lionello, R., Mikic, Z., and Amari, T. 2001, JGR, 106, 25165.
* Sammis, I., Tang, F., and Zirin, H. 2000, ApJ, 540, 583.
* Schrijver, C. J. 2007, ApJ, 655, L117.
* Welsch, B. T. 2006, ApJ, 638, 1101.
* Zirin, H. 1988, _Astrophysics of the Sun_, Cambridge Univ. Press, Cambridge.

Several studies have correlated observations of impulsive solar activity -- flares and coronal mass ejections (CMEs) -- with the amount of magnetic flux near strong-field polarity inversion lines (PILs) in active regions' photospheric magnetic fields, as measured in line-of-sight (LOS) magnetograms. Practically, this empirical correlation holds promise as a space weather forecasting tool. Scientifically, however, the mechanisms that generate strong gradients in photospheric magnetic fields remain unknown. Hypotheses include the (1) emergence of highly twisted or kinked flux ropes, which possess strong, opposite-polarity fields in close proximity; (2) emergence of new flux in close proximity to old flux; and (3) flux cancellation driven by photospheric flows acting on fields that have already emerged. If such concentrations of flux near strong gradients are formed by emergence, then increases in unsigned flux near strong gradients should be correlated with increases in total unsigned magnetic flux -- a signature of emergence. Here, we analyze time series of MDI line-of-sight (LOS) magnetograms from several dozen active regions, and conclude that increases in unsigned flux near strong gradients tend to occur during emergence, though strong gradients can arise without flux emergence. We acknowledge support from NSF-ATM 04-51438.
Holger Gies
Institut fur Theoretische Physik, Universitat Heidelberg, D-69120 Heidelberg, Germany
David F. Mota
Institut fur Theoretische Physik, Universitat Heidelberg, D-69120 Heidelberg, Germany
Douglas J. Shaw
Astronomy Unit, School of Mathematical Sciences, Queen Mary, University of London, Mile End Road, London E1 4NS, United Kingdom DAMTP, Centre for Mathematical Sciences, University of Cambridge, Cambridge CB2 0WA, United Kingdom
## I Introduction
Light scalar fields populate theories of both cosmology and physics beyond the Standard Model. In generic models, these fields can couple to matter and hence potentially mediate a new (or 'fifth') force between bodies. However, no such new force has been detected [1]. Any force associated with such light scalar fields must therefore be considerably weaker than gravity over the scales, and under the conditions, that have been experimentally probed. The fields must either interact with matter far more weakly than gravity does, or be sufficiently massive so as to have remained hidden thus far. There is however an assumption when deriving the above conclusions: the mass, \\(m\\), of the scalar field is taken to be a constant. It has recently been shown that the most stringent experimental limits on the properties of light scalar fields can be exponentially relaxed if the scalar field theory in question possesses a _chameleon mechanism_[2; 3]. The chameleon mechanism provides a way to suppress the forces mediated by these scalar fields via non-linear field self-interactions. A direct result of these self-interactions is that the mass of the field is no longer fixed but depends on, amongst other things, the ambient density of matter. The properties of these scalar fields therefore change depending on the environment; it is for this reason that such fields have been dubbed _chameleon fields_.1. Chameleon fields could potentially also be responsible for the observed late-time acceleration of the Universe [5; 6]. In the longer term, it has been shown that future experimental measurements of the Casimir force will be able to detect or rule out many, if not all, of the most interesting chameleon field models for dark energy [3; 7].
Footnote 1: This is actually a misnomer: despite popular belief, chameleons (the lizards) cannot change color to their surroundings. Instead, changing color is an expression of their physical and physiological condition [4].
It was recently shown that some strongly-coupled (i.e., compared to gravity) chameleon fields would alter light propagation through the vacuum in the presence of a magnetic field in a polarization-dependent manner [8; 9]; the resultant birefringence and dichroism could be detected in laboratory searches such as the polarization experiments PVLAS [10; 11], Q&A [12], and BMV [13] that are sensitive to new hypothetical particles with a light mass and a weak coupling to photons. Popular candidates for these searches are the axion [14], or more generally an axion-like particle (ALP), minicharged particles (MCPs) [15; 16], or paraphotons [15]. These particle candidates may be viewed as low-energy effective degrees of freedom of more involved microscopic theories. In this sense, chameleons could be classified as ALPs as far as optical experiments are concerned, but give rise to specific optical signatures to be discussed in detail in this work.
In fact, a variety of further experiments such as ALPS [17], LIPPS [18], OSQAR [19], and GammeV [20] have been proposed and are currently being set up or are even already taking data. They look for anomalous optical signatures from light propagation in a modified quantum vacuum. This rapidly evolving field has been triggered by the fact that optical experiments provide a rather unique laboratory tool, because photons can be manipulated and detected with great precision. Since laboratory set-ups aim at both production and detection of new particles under fully controlled experimental conditions, they are complementary to astrophysical considerations such as those based on stellar energy loss.
This work is devoted to an investigation of possible optical signatures which can specifically be attributed to chameleon fields, thereby representing a smoking gun for this particle candidate. The standard optical signatures in polarization experiments are induced ellipticity and rotation for a propagating laser beam interacting with a strong magnetic field [23]; these exist for a chameleon [8; 9], but occur similarly for ALPs [24], MCPs [25] or models involving also paraphotons [26]. In the case of a positive signal, the various scenarios can be distinguished from each other by analyzing the signal dependence on the experimental parameters such as magnetic field strength, laser frequency, length of the interaction region [28]. ALP and paraphoton models can specifically be tested by light-shining-through-walls experiments [26; 27]; MCPs can leave a decisive trace in MCP production experiments in the form of a dark current [29].
In this work, we propose an _afterglow_ phenomenon as a unique chameleon trace in an optical experiment.2 The existence of this afterglow is directly linked with the environment dependence of the chameleon mass parameter. In particular, the mass dependence on the ambient matter density causes the trapping of chameleons inside the vacuum chamber where they have been produced, e.g., by a laser pulse interacting with a strong magnetic field. As detailed below, the trapping can be so efficient that the re-conversion of chameleons into photons in a magnetic field causes an afterglow over macroscopic time scales. Most importantly, our afterglow estimates clearly indicate that the new-physics parameter range accessible to current technology is substantial and can become comparable to scales familiar from astrophysical considerations. Afterglow searches therefore represent a tool to probe physics halfway up to the Planck scale.
Footnote 2: A similar proposal can be found in [30].
This paper is organized as follows: In Sect. II, we review aspects of the chameleon model which are relevant to the optical phenomena discussed in this work. In Sect. III, we solve the equations of motion for the coupled photon-chameleon system, paying particular attention to the boundary conditions which give rise to the afterglow phenomenon. Signatures of the afterglow are exemplified in Sect. IV. The chameleonic afterglow is compared with other background sources and systematic effects in Sect. V. Our conclusions are given in Sect. VI. In the appendix, we discuss another option for afterglow detection based on chameleon resonances in the vacuum chamber.
## II Chameleon theories
As was mentioned above, chameleon theories are essentially scalar field theories with a self-interaction potential and a coupling to matter; they are specified by the action
\\[S = \\int d^{4}x\\sqrt{-g}\\left(\\frac{1}{2\\kappa_{4}^{2}}R-g^{\\mu\
u} \\partial_{\\mu}\\phi\\partial_{\
u}\\phi-V(\\phi)\\right) \\tag{1}\\] \\[+S_{m}(e^{\\phi/M_{i}}g_{\\mu\
u},\\psi_{m})-\\frac{e^{\\phi/M}}{4}F_{ \\mu\
u}F^{\\mu\
u},\\]
where \\(\\phi\\) is the chameleon field with a self-interaction potential \\(V(\\phi)\\). \\(S_{m}\\) denotes the matter action and \\(\\psi_{m}\\) are the matter fields, and we have also explicitly listed the coupling to photons.
The strength of the interaction between \(\phi\) and the matter fields is determined by one or more mass scales \(M_{i}\). In general, we expect different particle species to couple with different strengths to the chameleon field, i.e., a different \(M_{i}\) for each \(\psi_{m}\). Such a differential coupling would lead to violations of the weak equivalence principle (WEP hereafter). It has been shown that \(V(\phi)\) can be chosen so that any violations of WEP are too small to have been detected thus far [3]. Even though the \(M_{i}\) are generally different for different species, if \(M_{i}\neq 0\), we generally expect \(M_{i}\sim{\cal O}(M)\), with \(M\) being some mass scale associated with the theory. Provided this is the case, the differential nature of the coupling will have very little effect on our predictions for the experiments considered here. In this paper, we therefore simplify the analysis by assuming a universal coupling \(M_{i}=M\).
If the matter fields are non-relativistic, the scalar field, \(\phi\), obeys
\[\Box\phi=V^{\prime}(\phi)+\frac{e^{\phi/M}\rho}{M}, \tag{2}\]
where \(\rho\) is the background density of matter. The coupling to matter implies that particle masses in the Einstein frame depend on the value of \(\phi\),
\\[m(\\phi)=e^{\\phi/M}m_{0}, \\tag{3}\\]
where \\(m_{0}=\\text{const}\\) is the bare mass.
We parametrize the strength of the chameleon to matter coupling by \\(\\beta\\) where
\\[\\beta=\\frac{M_{\\text{Pl}}}{M}, \\tag{4}\\]
and \\(M_{\\text{Pl}}=1/\\sqrt{8\\pi G}\\approx 2.4\\times 10^{18}\\,\\text{GeV}\\). On microscopic scales (and over sufficiently short distances), the chameleon force between two particles is then \\(2\\beta^{2}\\) times the strength of their mutual gravitational attraction.
If the mass, \\(m_{\\phi}\\equiv\\sqrt{V^{\\prime\\prime}(\\phi)}\\), of \\(\\phi\\) is a constant then one must either require that \\(m_{\\phi}\\gtrsim 1\\,\\text{meV}\\) or \\(\\beta\\ll 1\\) for such a theory not to have been already ruled out by experimental tests of gravity [1]. If, however, the mass of the scalar field grows with the background density of matter, then a much wider range of scenarios have been shown to be possible [2; 3; 5]. In high-density regions, \\(m_{\\phi}\\) can then be large enough so as to satisfy the constraints coming from tests of gravity. At the same time, the mass of the field can be small enough in low density regions to produce detectable and potentially important alterations to standard physical laws. Assuming \\(\\text{d}\\ln m(\\phi)/\\text{d}\\phi\\geq 0\\) as it is above, a scalar field theory possesses a chameleon mechanism if, for some range of \\(\\phi\\), the self-interaction potential, \\(V(\\phi)\\), has the following properties:
\\[V^{\\prime}(\\phi)<0,\\quad V^{\\prime\\prime}>0,\\quad V^{\\prime\\prime\\prime}(\\phi) <0, \\tag{5}\\]
where \\(V^{\\prime}=\\text{d}V/\\text{d}\\phi\\). The evolution of the chameleon field in the presence of ambient matter with density \\(\\rho_{\\text{matter}}\\) is then determined by the effective potential:
\\[V_{\\text{eff}}(\\phi)=V(\\phi)+\\rho_{\\text{matter}}e^{\\phi/M}. \\tag{6}\\]
As a result, even though \\(V\\) might have a runaway form, the conditions on \\(V(\\phi)\\) ensure that the effective potential has a minimum at \\(\\phi=\\phi_{\\text{min}}(\\rho_{\\text{matter}})\\) where
\\[V^{\\prime}_{\\text{eff}}(\\phi_{\\text{min}})=0=V^{\\prime}(\\phi_{\\text{min}})+ \\frac{\\rho_{\\text{matter}}}{M}e^{\\phi_{\\text{min}}/M}. \\tag{7}\\]
Whether or not the chameleon mechanism is both active and strong enough to evade current experimental constraints depends partially on the details of the theory, i.e. \\(V(\\phi)\\) and \\(M\\), and partially on the initial conditions (see Refs. [2; 3; 5] for a more detailed discussion). For exponential matter couplings and a potential of the form
\\[V(\\phi)=\\Lambda^{4}\\exp(\\Lambda^{n}/\\phi^{n})\\approx\\Lambda^{4}+\\frac{\\Lambda ^{4+n}}{\\phi^{n}}, \\tag{8}\\]
the chameleon mechanism can in principle hide the field such that there is no conflict with current laboratory experiments, solar system or cosmological observations [2; 5]. Importantly, for a large range of values of \(\Lambda\), the chameleon mechanism is strong enough in such theories to allow even strongly coupled theories with \(M\ll M_{\text{Pl}}\) to have remained undetected [3]. The first term in \(V(\phi)\) corresponds to an effective cosmological constant whilst the second term is a Ratra-Peebles inverse power-law potential [31]. If one assumes that \(\phi\) is additionally responsible for late-time acceleration of the universe then one must require \(\Lambda\approx\Lambda_{c}\equiv(2.4\pm 0.1)\times 10^{-12}\,\text{GeV}\).
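To make the density dependence of the mass concrete, the following minimal Python sketch (our own illustration, not part of the original analysis) evaluates the minimum of the effective potential, Eq. (7), for the inverse power-law part of Eq. (8) in the regime \(\phi_{\rm min}/M\ll 1\), where \(e^{\phi/M}\approx 1\); the rounded density-conversion constant and the parameter choices are ours.

```python
# A minimal sketch (ours): minimum of V_eff and chameleon mass for
# V = Lambda^4 + Lambda^{4+n}/phi^n, assuming e^{phi/M} ~ 1 so that
# Eq. (7) gives phi_min = (n M Lambda^{4+n}/rho)^{1/(n+1)} and
# m^2 = V''(phi_min) = n(n+1) Lambda^{4+n}/phi_min^{n+2}.
KG_M3_TO_GEV4 = 4.3e-21   # 1 kg/m^3 in natural units (GeV^4), rounded

def chameleon_mass_eV(rho_kg_m3, M_GeV, n=1, Lam_GeV=2.4e-12):
    rho = rho_kg_m3 * KG_M3_TO_GEV4
    phi_min = (n * M_GeV * Lam_GeV**(4 + n) / rho) ** (1.0 / (n + 1))
    m2 = n * (n + 1) * Lam_GeV**(4 + n) / phi_min**(n + 2)   # GeV^2
    return m2**0.5 * 1e9                                     # in eV

# a laboratory vacuum (~1e-10 kg/m^3) versus a typical solid (~1e3 kg/m^3)
# for M = 10^6 GeV: the mass jumps from ~0.25 meV to ~1 MeV.
print(chameleon_mass_eV(1e-10, 1e6), chameleon_mass_eV(1e3, 1e6))
```

The many orders of magnitude between the two outputs illustrate how the field can evade laboratory tests of gravity in dense environments while remaining light inside an evacuated chamber.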
Throughout the rest of this paper, it is our aim to remain as general as possible and assume as little about the precise form of \\(V(\\phi)\\) as is necessary. However, when we come to more detailed discussions and make specific numerical predictions, it will be necessary to choose a particular form for \\(V(\\phi)\\). In these situations, we assume that \\(V(\\phi)\\) has the following form:
\\[V(\\phi)=\\Lambda_{c}^{4}\\left(1+\\frac{\\Lambda^{n}}{\\phi^{n}}\\right).\\]
We do this not because this power-law form of \(V\) is in any way preferred or to be expected, but merely because it has been the most widely studied in the literature and because it is the simplest with which to perform analytical calculations. The power-law form is also useful as an example as it displays, for different values of \(n\), many of the features that we expect to see in more general chameleon theories. We also note how the predictions of a theory with \(V(\phi)=\Lambda_{c}^{4}\exp(\Lambda^{n}/\phi^{n})\) differ from those of a theory with a true power-law potential. With this choice of potential, the constant term in \(V(\phi)\) is responsible for the late time acceleration of the Universe.
## III Chameleon trapping and photon afterglow
For a chameleon-like scalar field, the classical field equations following from Eq. (1) are
\\[\\square\\mathbf{a} = \\frac{\
abla\\phi\\times\\mathbf{B}}{M}, \\tag{9}\\] \\[\\square\\phi-m^{2}\\phi = \\frac{\\mathbf{B}\\cdot(\
abla\\times\\mathbf{a})}{M}, \\tag{10}\\]
where we have used the Lorenz-gauge condition. We take \\(\\mathbf{B}=B\\mathbf{e}_{x}\\) as the background magnetic field, and \\(\\mathbf{a}=a_{\\parallel}\\mathbf{e}_{x}+a_{\\perp}\\mathbf{e}_{y}\\) as the propagating photon, moving in the positive \\(z\\) direction (\"to the right\"). We perform a Fourier transform with respect to time,
\\[a_{\\perp}(t,z) = \\int\\,\\mathrm{d}\\omega\\,a(\\omega,z)\\,e^{-i\\omega t}, \\tag{11}\\] \\[\\phi(t,z) = -i\\int\\,\\mathrm{d}\\omega\\,\\chi(\\omega,z)\\,e^{-i\\omega t}, \\tag{12}\\]
where we have dropped the label \\(\\perp\\), since the photon component \\(a_{\\parallel}\\) parallel to the magnetic field anyway does not interact with the chameleon at all. The notation \\(\\chi(\\omega,z)=i\\phi(\\omega,z)\\) is introduced here for later convenience. Defining \\(\\tilde{a}(\\omega,k)\\) and \\(\\tilde{\\chi}(\\omega,k)\\) as the Fourier transforms w.r.t. \\(z\\) of \\(a(\\omega,z)\\) and \\(\\chi(\\omega,z)\\), we arrive at
\\[(\\omega^{2}-k^{2})\\tilde{a} = -\\frac{Bk}{M}\\tilde{\\chi}, \\tag{13}\\] \\[(\\omega^{2}-k^{2}-m^{2})\\tilde{\\chi} = -\\frac{Bk}{M}\\tilde{a}. \\tag{14}\\]
Solutions exist if
\\[(\\omega^{2}-k^{2}-m^{2})(\\omega^{2}-k^{2})=\\frac{B^{2}k^{2}}{M^{2}},\\]
the roots of which define the dispersion relations,
\\[k_{\\pm}^{2}=\\omega^{2}-\\left(m^{2}-\\frac{B^{2}}{M^{2}}\\right)\\left(\\frac{\\cos 2 \\theta\\pm 1}{2\\cos 2\\theta}\\right), \\tag{15}\\]
where
\\[\\tan 2\\theta=\\frac{2\\omega B}{M\\left(m^{2}-\\frac{B^{2}}{M^{2}}\\right)}. \\tag{16}\\]
Defining \\(k_{\\pm}=+\\sqrt{k_{\\pm}^{2}}\\), the general solutions \\(a\\) and \\(\\chi\\) for the equations of motion read:
\\[a(\\omega,z) = a_{r}^{-}(\\omega)e^{ik_{-}z}+\\tan^{2}\\theta a_{r}^{+}(\\omega)e^{ ik_{+}z}+a_{l}^{-}(\\omega)e^{-ik_{-}z}+\\tan^{2}\\theta a_{l}^{+}(\\omega)e^{-ik_{+}z}, \\tag{17}\\] \\[\\chi(\\omega,z) = \\frac{\\omega}{k_{-}}\\tan\\theta\\left(a_{r}^{-}(\\omega)e^{ik_{-}z} -a_{l}^{-}(\\omega)e^{-ik_{-}z}\\right)\\] (18) \\[-\\frac{\\omega}{k_{+}}\\tan\\theta\\left(a_{r}^{+}(\\omega)e^{ik_{+}z} -a_{l}^{+}(\\omega)e^{-ik_{+}z}\\right),\\]
where \\(a_{l}(a_{r})\\) is the amplitude of the wave traveling to the left (right). So far, the above equations are very similar to those of a laser interaction with a scalar ALP (or dilaton-like particle) in a magnetic field [24]. The important difference between a scalar ALP and a chameleon is due to the boundary conditions at the ends of the optical vacuum chamber: whereas an ALP is considered to be weakly interacting, the chameleon is reflected at the chamber ends and thus \"trapped\" in the vacuum chamber.
We begin by considering the simplest set-up for an analytic study, wherein the two ends of the vacuum chamber (\"jar\") are located right at edge of the magnetic interaction region, i.e., \\(B>0\\) inside the jar and \\(B=0\\) outside. We also confine ourselves to an experiment where the photon field is not stored in an optical cavity as for ALP searches, but simply enters, passes through and leaves the interaction region. The chameleon field is however trapped between two optical windows of the vacuum chamber.
The chameleon field is taken to reflect perfectly off the walls of the jar which are located at \\(z=L\\) and \\(z=0\\), whereas the photons only enter the jar at \\(z=0\\) and pass straight through. The reflection of the chameleon field implies that \\(\\partial_{z}\\chi=0\\) at \\(z=0\\) and \\(z=L\\). This gives
\\[a_{r}^{-}+a_{l}^{-} = a_{r}^{+}+a_{l}^{+}, \\tag{19}\\] \\[a_{r}^{-}e^{ik_{-}L}+a_{l}^{-}e^{-ik_{-}L} = a_{r}^{+}e^{ik_{+}L}+a_{l}^{+}e^{-ik_{+}L}. \\tag{20}\\]
For the photon boundary conditions, it is useful to introduce the operators \\({\\cal R}=\\omega-i\\partial_{z}\\) and \\({\\cal L}=\\omega+i\\partial_{z}\\) which project onto right- and left-moving photon components in vacuum. The condition that no photons enter the jar on the right side at \\(z=L\\), \\({\\cal L}a(\\omega,z)|_{z=L}=0\\), gives:
\\[a_{r}^{-}\\left(1-\\frac{k_{-}}{\\omega}\\right)e^{ik_{-}L}+\\tan^{2}\\theta a_{r}^{ +}\\left(1-\\frac{k_{+}}{\\omega}\\right)e^{ik_{+}L}+a_{l}^{-}\\left(1+\\frac{k_{-}} {\\omega}\\right)e^{-ik_{-}L}+\\tan^{2}\\theta a_{l}^{+}\\left(1+\\frac{k_{+}}{ \\omega}\\right)e^{-ik_{+}L}=0. \\tag{21}\\]
Let us assume that the photon field entering the jar at \(z=0\) has the form
\\[a_{\\rm in}(\\omega,z\\leq 0)=\\alpha(\\omega)e^{ikz}, \\tag{22}\\]
with the vacuum dispersion relation \\(k=\\omega\\). Then, the photon boundary condition at \\(z=0\\) is given by \\({\\cal R}a_{\\rm in}(\\omega,z)|_{z=0}={\\cal R}a(\\omega,z)|_{z=0}\\), yielding
\\[2\\alpha=a_{r}^{-}\\left(1+\\frac{k_{-}}{\\omega}\\right)+\\tan^{2}\\theta a_{r}^{+} \\left(1+\\frac{k_{+}}{\\omega}\\right)+a_{l}^{-}\\left(1-\\frac{k_{-}}{\\omega} \\right)+\\tan^{2}\\theta a_{l}^{+}\\left(1-\\frac{k_{+}}{\\omega}\\right). \\tag{23}\\]
Equations (19)-(21) and (23) determine the photon amplitudes \\(a_{l,r}^{\\pm}\\) completely, and a full solution is straightforward. The physical signature of the chameleon field is encoded in the outgoing photon that leaves the jar at \\(z=L\\) and which we parametrize as
\\[a_{\\rm out}(\\omega,z\\geq L)=\\beta(\\omega)e^{ikz},\\]
again with the vacuum dispersion \\(k=\\omega\\). The form of the wave packet \\(\\beta(\\omega)\\) as a function of the amplitudes \\(a_{r,l}^{\\pm}\\) is determined by the matching condition \\({\\cal R}a(\\omega,z)|_{z=L}={\\cal R}a_{\\rm out}(\\omega,z)|_{z=L}\\), implying
\\[2\\beta=a_{r}^{-}\\left(1+\\frac{k_{-}}{\\omega}\\right)e^{ik_{-}L}+\\tan^{2}\\theta a _{r}^{+}\\left(1+\\frac{k_{+}}{\\omega}\\right)e^{ik_{+}L}+a_{l}^{-}\\left(1-\\frac {k_{-}}{\\omega}\\right)e^{-ik_{-}L}+\\tan^{2}\\theta a_{l}^{+}\\left(1-\\frac{k_{+} }{\\omega}\\right)e^{-ik_{+}L}. \\tag{24}\\]
Since we expect \\(M\\) to be a scale beyond the particle-physics standard model, and \\(m={\\cal O}(1~{}{\\rm meV})\\), the dimensionless combination \\(B/(mM)\\) can be considered as a very small parameter for all presently conceivable laboratory field strengths. For typical laboratory laser frequencies \\(\\omega\\), the whole right-hand side of Eq. (16) is small, implying that
\\[\\theta\\simeq\\frac{\\omega B}{m^{2}M} \\tag{25}\\]
is a small expansion parameter for the present problem. We also assume that the laser frequency is larger than the vacuum chameleon mass, \\(m^{2}/\\omega^{2}\\ll 1\\), but the combination \\(m^{2}L/\\omega\\) can still be a sizable number owing to the length of the jar. In these limits, where \\(k_{+}\\approx\\omega-m^{2}/2\\omega\\) and \\(k_{-}\\approx\\omega\\), the outgoing wave packet reduces to
\\[\\beta(\\omega)\\approx e^{i\\omega L}\\alpha(\\omega)\\left[1+2i\\theta^{2}\\left(\\frac {m^{2}L}{4\\omega}-\\frac{\\sin\\left(\\frac{m^{2}L}{4\\omega}\\right)\\sin\\left(\\omega L -\\frac{m^{2}L}{4\\omega}\\right)}{\\sin\\left(\\omega L-\\frac{m^{2}L}{2\\omega} \\right)}\\right)\\right]. \\tag{26}\\]
Let us study the quotient
\\[Q\\equiv-2i\\frac{\\sin\\left(\\frac{m^{2}L}{4\\omega}\\right)\\sin\\left(\\omega L- \\frac{m^{2}L}{4\\omega}\\right)}{\\sin\\left(\\omega L-\\frac{m^{2}L}{2\\omega} \\right)}=\\left(e^{-i\\frac{m^{2}L}{2\\omega}}-1\\right)-2\\left(1-\\cos(\\frac{m^{ 2}L}{2\\omega})\\right)\\sum_{n=1}^{\\infty}e^{2in(\\omega L-\\frac{m^{2}L}{2\\omega} )}.\\]
We assume that the photon wave packet that is sent into the jar \\(\\alpha(\\omega)\\) is strongly peaked about \\(\\omega=\\bar{\\omega}\\). Close to \\(\\bar{\\omega}\\), we then have
\\[Q \\approx \\left(e^{-\\frac{im^{2}L}{2\\omega}}e^{i\\omega\\frac{m^{2}L}{2\\omega ^{2}}}-1\\right)\\] \\[+\\sum_{n=1}^{\\infty}\\left[e^{-\\frac{(2n+1)im^{2}L}{\\omega}}e^{i \\omega(2nL+(2n+1)\\frac{m^{2}L}{2\\omega^{2}})}+e^{-\\frac{(2n-1)im^{2}L}{\\omega} }e^{i\\omega(2nL+(2n-1)\\frac{m^{2}L}{2\\omega^{2}})}-2e^{-\\frac{2inm^{2}L}{ \\omega}}e^{i\\omega(2nL+nm^{2}L/\\bar{\\omega}^{2})}\\right].\\]Fourier transforming back to real time, the outgoing photon field can be expressed by the functional form of the ingoing photon as:
\\[a_{\\rm out}(t) \\approx a_{\\rm in}\\left(t-L-\\frac{B^{2}L}{2M^{2}m^{2}}\\right)+\\frac{B^{2} }{M^{2}m^{4}}a_{\\rm in}^{\\prime\\prime}(t-L)-\\frac{B^{2}}{M^{2}m^{4}}e^{-i\\frac {m^{2}L}{\\bar{\\omega}^{2}}}a_{\\rm in}^{\\prime\\prime}\\left(t-L-\\frac{m^{2}L}{2 \\bar{\\omega}^{2}}\\right) \\tag{27}\\] \\[-\\frac{B^{2}}{M^{2}m^{4}}\\sum_{n=1}^{\\infty}\\left[e^{-i\\frac{(2n+ 1)m^{2}L}{\\bar{\\omega}}}a_{\\rm in}^{\\prime\\prime}\\left(t-(2n+1)L-(2n+1)\\frac{m^ {2}L}{2\\bar{\\omega}^{2}}\\right)\\right.\\] \\[\\left.+e^{-i\\frac{(2n-1)m^{2}L}{\\bar{\\omega}}}a_{\\rm in}^{\\prime \\prime}\\left(t-(2n+1)L-(2n-1)\\frac{m^{2}L}{2\\bar{\\omega}^{2}}\\right)\\right.\\] \\[\\left.-2e^{-i\\frac{2m^{2}L}{\\bar{\\omega}}}a_{\\rm in}^{\\prime \\prime}\\left(t-(2n+1)L-\\frac{nm^{2}L}{\\bar{\\omega}^{2}}\\right)\\right]+O(\\theta ^{4}),\\]
where \\(a_{\\rm in}^{\\prime\\prime}={\\rm d}^{2}a_{\\rm in}/{\\rm d}t^{2}\\), and we have suppressed the \\(z\\) dependence which comes in the form of a plane wave \\(e^{ikz}\\). The terms in brackets \\([\\cdot]\\) represent the afterglow effect for the \\(\\perp\\) photon mode. Let us assume that \\(a_{\\rm in}\\propto e^{-i\\bar{\\omega}t}\\) for \\(0<t<T\\) and vanishes otherwise, and define \\(N=T/L\\). It is clear that unless \\(Nm^{2}L/\\bar{\\omega}=Tm^{2}/\\bar{\\omega}\\ll 1\\) the different contributions to the afterglow effect will generally interfere destructively. If \\(m^{2}T/\\bar{\\omega}\\gg 1\\) and \\(N\\gg 1\\), the afterglow effect will scale as \\(1/N\\). If \\(m\\sim O(1\\)meV) then \\(m^{2}L/\\bar{\\omega}\\) can be of order \\(O(1)\\) and so one must ensure that \\(T/L\\sim O(1)\\) or smaller for the afterglow effect not to be affected by interference. With \\(L\\sim O(1\\)m), this requires \\(T\\) to be no greater than a few nanoseconds. The GammeV experiment [20] uses 5ns wide pulses which avoids interference effects. On the other hand, it may be possible to exploit the interference effects for increasing the sensitivity or a determination of the chameleon mass; see the appendix.
Let us generalize the above result to the case where the jar is longer than the interaction region, as is, for instance, the case for the GammeV experiment [20]. We let \(z=0\) label the beginning of the interaction region. The chameleon reflects off the jar at \(z=-d\) and \(z=L\). If \(0<z<L\), the solutions Eqs. (17) and (18) to the equations of motion still hold.
Outside the interaction region but inside the jar for \\(-d\\leq z\\leq 0\\), we have:
\\[a(\\omega,-d\\leq z\\leq 0) = a_{r}^{d}(\\omega)e^{i\\omega z}+a_{l}^{d}(\\omega)e^{-i\\omega z}, \\tag{28}\\] \\[\\chi(\\omega,-d\\leq z\\leq 0) = \\tan\\theta\\left(c_{r}(\\omega)e^{ik_{m}z}-c_{l}(\\omega)e^{-ik_{m}z }\\right), \\tag{29}\\]
where \\(k_{m}\\equiv\\sqrt{\\omega^{2}-m^{2}}\\) and the amplitudes \\(a_{l,r}^{d}\\) and \\(c_{l,r}\\) need to be determined by boundary and matching conditions. In analogy to Eq. (22), we specify the boundary condition of the ingoing wave packet as
\\[a_{\\rm in}(\\omega,z\\leq-d)=\\alpha(\\omega)e^{ik(z+d)}, \\tag{30}\\]
where \\(k=\\omega\\) is the vacuum dispersion. Matching the photon amplitudes at the left end of the jar, \\(z=-d\\), where the ingoing wave is purely right-moving, \\({\\cal R}a_{\\rm in}(\\omega,z)|_{z=-d}={\\cal R}a(\\omega,z)|_{z=-d}\\), fixes the amplitude \\(a_{r}^{d}(\\omega)=\\alpha(\\omega)e^{ikd}\\) of Eq. (28). Matching the photon amplitudes of Eq. (28) and Eq. (17) at \\(z=0\\) gives
\\[2e^{ikd}\\alpha=a_{r}^{-}\\left(1+\\frac{k_{-}}{\\omega}\\right)+\\tan^{2}\\theta a _{r}^{+}\\left(1+\\frac{k_{+}}{\\omega}\\right)+a_{l}^{-}\\left(1-\\frac{k_{-}}{ \\omega}\\right)+\\tan^{2}\\theta a_{l}^{+}\\left(1-\\frac{k_{+}}{\\omega}\\right). \\tag{31}\\]
for the right-movers. The corresponding left-mover equation fixes the amplitude \\(a_{l}^{d}(\\omega)\\) of Eq. (28) which contains information about the afterglow effect at the left end of the jar. Here, we concentrate on the afterglow at the right end. For the matching of the chameleon field at \\(z=0\\), we act with the massive left- and right-moving projectors \\({\\cal L}_{m}=(k_{m}+i\\partial_{z})\\), \\({\\cal R}_{m}=(k_{m}-i\\partial_{z})\\) on Eqs. (18) and (29), yielding
\\[c_{r} = \\frac{1}{2}\\left(\\frac{\\omega}{k_{m}}+\\frac{\\omega}{k_{-}}\\right)a _{r}^{-}+\\frac{1}{2}\\left(\\frac{\\omega}{k_{m}}-\\frac{\\omega}{k_{-}}\\right)a_{l} ^{-}-\\frac{1}{2}\\left(\\frac{\\omega}{k_{m}}+\\frac{\\omega}{k_{+}}\\right)a_{r}^{+}- \\frac{1}{2}\\left(\\frac{\\omega}{k_{m}}-\\frac{\\omega}{k_{+}}\\right)a_{l}^{+}, \\tag{32}\\] \\[c_{l} = \\frac{1}{2}\\left(\\frac{\\omega}{k_{m}}-\\frac{\\omega}{k_{-}}\\right)a _{r}^{-}+\\frac{1}{2}\\left(\\frac{\\omega}{k_{m}}+\\frac{\\omega}{k_{-}}\\right)a_{l} ^{-}-\\frac{1}{2}\\left(\\frac{\\omega}{k_{m}}-\\frac{\\omega}{k_{+}}\\right)a_{r}^{+}- \\frac{1}{2}\\left(\\frac{\\omega}{k_{m}}+\\frac{\\omega}{k_{+}}\\right)a_{l}^{+}. \\tag{33}\\]
The reflection of the chameleon field at \\(z=-d\\) is equivalent to
\\[c_{r}=-c_{l}e^{2ik_{m}d}. \\tag{34}\\]
This replaces Eq. (19) whilst Eqs. (20) and (21) still hold. In conclusion, the matching conditions Eqs. (20),(21) and Eq. (34) (together with Eqs. (32) and (33)) and the boundary condition Eq. (31) completely determine the photon amplitudes \\(a_{r,l}^{\\pm}(\\omega)\\). In the limit of small \\(\\theta\\) and small \\(m^{2}/\\omega^{2}\\), the outgoing wave packet \\(\\beta(\\omega)\\), which is still given by Eq. (24), can be expressed as
\\[\\beta(\\omega)\\approx e^{i\\omega(L+d)}\\alpha(\\omega)+2i\\theta^{2}e^{i\\omega(L+d)} \\alpha(\\omega)\\left[\\frac{m^{2}L}{4\\omega}-\\frac{\\sin\\left(\\frac{m^{2}L}{4 \\omega}\\right)\\sin\\left(\\omega(L+d)-\\frac{m^{2}(L+2d)}{4\\omega}\\right)}{\\sin \\left(\\omega(L+d)-\\frac{m^{2}(L+d)}{2\\omega}\\right)}\\right]. \\tag{35}\\]
Again, for \\(a_{\\rm in}(t)\\) being dominated by the frequency \\(\\bar{\\omega}\\) at \\(z=-d\\), the outgoing wave \\(a_{\\rm out}(t)\\) reads
\\[a_{\\rm out}(t) \\approx a_{\\rm in}\\left(t-L-d-\\frac{B^{2}L}{2M^{2}m^{2}}\\right)+\\frac{B^ {2}}{M^{2}m^{4}}a_{\\rm in}^{\\prime\\prime}(t-L-d)-\\frac{B^{2}}{M^{2}m^{4}}e^{-i \\frac{m^{2}L}{\\omega}}a_{\\rm in}^{\\prime\\prime}\\left(t-L-d-\\frac{m^{2}L}{2 \\bar{\\omega}^{2}}\\right) \\tag{36}\\] \\[-\\frac{B^{2}}{M^{2}m^{4}}\\sum_{n=1}^{\\infty}\\left[e^{-i\\frac{m^{ 2}((2n+1)L+2nd)}{\\bar{\\omega}}}a_{\\rm in}^{\\prime\\prime}\\left(t-(2n+1)(L+d)- \\frac{m^{2}((2n+1)L+2nd)}{2\\bar{\\omega}^{2}}\\right)\\right.\\] \\[\\left.\\hskip 14.226378pt+\\left.e^{-i\\frac{m^{2}((2n-1)L+2nd)}{ \\bar{\\omega}}}a_{\\rm in}^{\\prime\\prime}\\left(t-(2n+1)(L+d)-\\frac{m^{2}((2n-1) L+2nd)}{2\\bar{\\omega}^{2}}\\right)\\right.\\right.\\] \\[\\left.\\hskip 14.226378pt-2e^{-i\\frac{2nm^{2}(L+d)}{\\bar{\\omega}}}a_ {\\rm in}^{\\prime\\prime}\\left(t-(2n+1)(L+d)-\\frac{m^{2}n(L+d)}{\\bar{\\omega}^{2 }}\\right)\\right]+O(\\theta^{4}).\\]
It is important to mention that, by ignoring higher-order terms, we have dropped the information about the decay of the afterglow effect at late times. Since only a finite amount of energy is initially converted into chameleon particles for a finite laser pulse, the afterglow effect must eventually decay, the time scale of which can straightforwardly be estimated: The probability that a chameleon particle converts to a photon as it passes through the interaction region is
\\[\\mathcal{P}_{\\varphi\\rightarrow\\gamma}=4\\theta^{2}\\sin^{2}\\left(\\frac{m^{2}L} {4\\omega}\\right). \\tag{37}\\]
After a time \\(t=(2N+1)(L+d)\\), the chameleon field, which had been created by the initial passage of the photons, has moved \\(N\\) times back and forth through the interaction region, i.e., from \\(z=L\\) to \\(z=-d\\) and then back to \\(z=L\\) again. Since any chameleon particle that is converted into photons escapes, the energy in the chameleon field after time \\(t=(2N+1)(L+d)\\) is reduced by a factor
\\[F(N)=(1-\\mathcal{P}_{\\varphi\\rightarrow\\gamma})^{2N}. \\tag{38}\\]
We therefore define the half-life \\(t_{1/2}\\) of the afterglow effect by \\(t_{1/2}=(2N_{1/2}+1)(L+d)\\) where \\(F(N_{1/2})=1/2\\). Given that \\(\\theta^{2}\\ll 1\\), this gives:
\\[t_{1/2}\\approx\\frac{(L+d)\\ln 2}{4\\theta^{2}\\sin^{2}\\left(\\frac{m^{2}L}{4\\omega} \\right)}.\\]
We define \\(N_{\\rm pass}=2N_{1/2}\\) to be the approximate number of complete passes through the interaction region,
\\[N_{\\rm pass}=2N_{1/2}\\approx\\frac{\\ln 2}{4\\theta^{2}\\sin^{2}\\left(\\frac{m^{2}L}{4 \\omega}\\right)}.\\]
For a realistic estimate of the outgoing wave, we should then replace the infinite upper limit of the sums in Eqs. (27) and (36) by \\(N_{\\rm pass}\\). In the limit \\(m^{2}L/4\\omega\\ll 1\\), we obtain
\\[t_{1/2}\\approx\\frac{4M^{2}(L+d)\\ln 2}{B^{2}L^{2}}.\\]
We observe that the dependence on the frequency \\(\\omega\\) and on the chameleon mass \\(m\\) drops out in this limit. For a typical scale \\(M\\approx 10^{6}\\,\\)GeV and experimental parameters \\(B\\approx 5\\,\\)T and \\(L+d\\approx L=6\\,\\)m, we obtain \\(t_{1/2}=63\\,\\)s and \\(N_{\\rm pass}=3\\times 10^{9}\\), corresponding to a time scale that should allow for a high detection efficiency.
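These numbers are easily reproduced with a few lines of Python (our own cross-check; the unit-conversion constants are rounded, so expect agreement at the percent level):

```python
# A quick cross-check (ours) of t_1/2 ~ 4 M^2 (L+d) ln2/(B^2 L^2),
# valid for m^2 L/(4 omega) << 1, in natural units (hbar = c = 1).
import math

GEV2_PER_TESLA = 1.95e-16     # 1 T in GeV^2
INV_GEV_PER_M = 5.07e15       # 1 m in GeV^-1
SEC_PER_INV_GEV = 6.58e-25    # 1 GeV^-1 in seconds

M, B, L = 1e6, 5.0 * GEV2_PER_TESLA, 6.0 * INV_GEV_PER_M  # GeV, GeV^2, GeV^-1

t_half = 4 * M**2 * L * math.log(2) / (B**2 * L**2)       # with L + d ~ L
print(t_half * SEC_PER_INV_GEV)   # ~ 63 s
print(t_half / L)                 # N_pass ~ t_half/(L+d) ~ 3e9 passes
```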
## IV Signatures of the afterglow
### General Signatures and Example I: the GammeV set-up
Let us consider the simple case where the pulse length \(T<2(L+d)\), such that chameleon and photon afterglow propagation inside the cavity happens in well separated bunches. For a \(5\,\mathrm{ns}\) pulse, this corresponds to \((L+d)>0.75\,\mathrm{m}\), which is, for instance, the case in the GammeV experiment [20]. As a consequence, there will be no interference between the chameleonic afterglow and the initial pulse. We assume that for \(0<t<T\), \(a_{\mathrm{in}}=a_{0}e^{-i\bar{\omega}t}\), and \(a_{\mathrm{in}}\simeq 0\) before and after this time period. Henceforth, we set \(\bar{\omega}=\omega\).
Then, the afterglow photons also come in bunches of time duration \\(\\approx T\\). The \\(N\\)th bunch leaves the vacuum chamber to the right at a time \\(t\\) in the interval \\((2N+1)(L+d)\\lesssim t\\lesssim T+(2N+1)(L+d)<(2(N+1)+1)(L+d)\\). For \\(1\\leq N\\ll N_{1/2}\\) and defining \\(\\tau=t-(2N+1)(L+d)\\), the afterglow amplitude reads:
\\[a_{\\mathrm{out}}(t)e^{i\\omega\\tau} = -\\frac{2B^{2}\\omega^{2}}{M^{2}m^{4}}\\left(1-\\cos\\left(\\frac{m^{2} L}{2\\omega}\\right)\\right)e^{-iN\\frac{m^{2}(d+L)}{\\omega}}a_{0},\\] \\[= -\\frac{4B^{2}\\omega^{2}}{M^{2}m^{4}}\\sin^{2}\\left(\\frac{m^{2}L}{ 4\\omega}\\right)e^{-iN\\frac{m^{2}(d+L)}{\\omega}}a_{0}.\\]
We identify the modulus of the probability amplitude for an initial photon to reappear in the \\(N\\)th afterglow pulse as
\\[\\mathcal{P}=\\frac{4B^{2}\\omega^{2}}{M^{2}m^{4}}\\sin^{2}\\left(\\frac{m^{2}L}{4 \\omega}\\right). \\tag{40}\\]
Incidentally, this probability amplitude is identical to the chameleon-photon conversion probability stated in Eq. (37), since the pulse-to-afterglow photon conversion involves a photon-chameleon conversion twice. Thus if each pulse contained \\(n_{\\gamma}^{\\mathrm{pulse}}\\) photons, the number of afterglow photons produced by this pulse after the time \\((2N+1)(L+d)\\lesssim t\\lesssim T+(2N+1)(L+d)<(2(N+1)+1)(L+d)\\) with \\(1\\leq N\\ll N_{1/2}\\) is:
\\[n_{\\gamma}^{\\mathrm{glow}}(N)\\approx\\mathcal{P}^{2}n_{\\gamma}^{\\mathrm{pulse}}.\\]
So far, we have neglected the afterglow decay at late times. For larger values of \\(N\\), we must account for the fact that the chameleon amplitude decreases during the photonic afterglow. This is taken care of by the extinction factor of Eq. (38),
\\[n_{\\gamma}^{\\mathrm{glow}}(N)\\approx\\mathcal{P}^{2}(1-\\mathcal{P})^{2(N-1)}n_ {\\gamma}^{\\mathrm{pulse}}. \\tag{41}\\]
For an initial laser pulse with energy \\(E_{\\mathrm{pulse}}\\) and frequency \\(\\omega\\), the number of photons is \\(n_{\\gamma}^{\\mathrm{pulse}}=E_{\\mathrm{pulse}}/\\omega\\). A characteristic quantity is given by the detection rate of afterglow photons in the \\(N\\)th bunch,
\\[R_{N}=\\mathcal{P}^{2}(1-\\mathcal{P})^{2(N-1)}\\frac{E_{\\mathrm{pulse}}}{\\omega T }\\eta_{\\mathrm{det}}, \\tag{42}\\]
where \\(\\eta_{\\mathrm{det}}\\leq 1\\) is the efficiency of the detector. As an example, let us consider the GammeV experiment [20] with a laser of \\(T=5\\) ns pulse duration, \\(E_{\\mathrm{pulse}}=160\\) mJ pulse energy and wave length \\(\\lambda=532\\) nm. This corresponds to an initial photon rate of \\(E_{\\mathrm{pulse}}/\\omega T=8.8\\times 10^{25}\\,\\mathrm{s}^{-1}\\). With a magnetic field of \\(B=5\\,\\mathrm{T}\\) and length \\(L=6\\,\\mathrm{m}\\) and assuming that \\(m^{2}L/4\\omega\\ll 1\\), the early afterglow bunches (\\(N\\ll N_{1/2}\\)) arrive at a rate of
\\[\\frac{R_{N}}{\\eta_{\\mathrm{det}}}\\approx 2\\left(\\frac{10^{6}\\,\\mathrm{GeV}}{M} \\right)^{4}\\times 10^{-2}\\ \\mathrm{photons\\ pulse}^{-1}.\\]
In the GammeV experiment [20], the half-life, \\(t_{1/2}\\), of the decay of the afterglow effect, in the limit \\(m^{2}L/4\\omega\\ll 1\\) and \\(L+d\\approx L\\), is:
\\[\\mathrm{GammeV}\\mathrm{:}\\quad t_{1/2}\\approx\\frac{4M^{2}(L+d)\\ln 2}{B^{2}L^{2} }=62.9\\,\\mathrm{s}\\left(\\frac{M}{10^{6}\\mathrm{GeV}}\\right)^{2}. \\tag{43}\\]
The total number of photons contained in the afterglow after a single pulse within the first half-life period \\(T<t<T+t_{1/2}\\) is
\\[n_{\\gamma}^{\\mathrm{glow}}(t_{1/2}) = \\sum_{j=1}^{N_{1/2}}n_{\\gamma}^{\\mathrm{glow}}(j)=\\sum_{i=0}^{N_{ 1/2}-1}\\mathcal{P}^{2}(1-\\mathcal{P})^{2i}n_{\\gamma}^{\\mathrm{pulse}} \\tag{44}\\] \\[= \\frac{\\mathcal{P}^{2}(1-(1-\\mathcal{P})^{2N_{1/2}})}{(1-(1-\\mathcal{ P})^{2})}n_{\\gamma}^{\\mathrm{pulse}}\\approx\\frac{\\mathcal{P}}{4}n_{\\gamma}^{ \\mathrm{pulse}},\\]where we have used the definition of \\(N_{1/2}\\) i.e. \\((1-{\\cal P})^{2N_{1/2}}=\\frac{1}{2}\\). In the limit of \\(m^{2}L/4\\omega\\ll 1\\), the number of afterglow photons in the first half-life period, for instance, for the GammeV [20] experiment yields
\\[\\text{GammeV:}\\quad n_{\\gamma}^{\\text{glow}}(t_{1/2})=\\frac{B^{2}L^{2}}{16M^{2} }n_{\\gamma}^{\\text{pulse}}\\approx 2.5\\times 10^{7}\\left(\\frac{10^{6}\\,\\text{GeV}}{M }\\right)^{2}.\\]
With the conservative assumption of a detector efficiency of \(\eta_{\text{det}}\simeq 0.1\), the non-observation of any photon afterglow from one laser pulse in the time \(t<t_{1/2}\) would correspond to a lower bound on the coupling scale \(M>1.6\times 10^{9}\,\text{GeV}\) in the regime of chameleon (vacuum) masses \(m<0.5\,\text{meV}\). However, to actually achieve such a bound, one would have to run the experiment for at least a time of \(t_{1/2}\), which for \(M\sim 10^{9}\,\text{GeV}\) is about 2 yrs. It is therefore more practical to consider what lower bound on \(M\) would result from the non-detection of any photons due to the afterglow of a single laser pulse after a time \(t_{\text{expt}}\ll t_{1/2}\). We find:
\\[\\text{GammeV:}\\quad n_{\\gamma}^{\\text{glow}}(t_{\\text{expt}})=\\frac{B^{4}L^{4 }}{16M^{4}}n_{\\gamma}^{\\text{pulse}}\\left(\\frac{t_{\\text{expt}}}{2L}\\right) \\approx 3.1\\times 10^{4}\\left(\\frac{10^{7}\\,\\text{GeV}}{M}\\right)^{4} \\left(\\frac{t_{\\text{expt}}}{1\\,\\text{min}}\\right);\\quad t_{\\text{expt}}\\ll t _{1/2}.\\]
With a minute's worth of measurement, the non-detection of any afterglow from a single pulse of the laser would, for \(m<0.5\,\text{meV}\), correspond to a lower bound on the coupling of \(M>7.5\times 10^{7}\,\text{GeV}\). If data were collected for a day, then this constraint could be raised to \(M>4.6\times 10^{8}\,\text{GeV}\).
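For orientation, the key GammeV estimates of this subsection can be reproduced as follows (our own sketch; the conversion constants are rounded, so the outputs match the quoted values only approximately):

```python
# A rough cross-check (ours) of the GammeV numbers in the limit
# m^2 L/(4 omega) << 1, where P = B^2 L^2/(4 M^2), cf. Eq. (40).
import math

GEV2_PER_TESLA = 1.95e-16
INV_GEV_PER_M = 5.07e15
J_PER_GEV = 1.602e-10

omega = 2.33e-9                       # GeV, for lambda = 532 nm
n_pulse = 0.160 / J_PER_GEV / omega   # photons in a 160 mJ pulse (~4.3e17)
B, L, M = 5.0 * GEV2_PER_TESLA, 6.0 * INV_GEV_PER_M, 1.0e6

P = B**2 * L**2 / (4 * M**2)
print(P**2 * n_pulse)      # photons per early afterglow bunch, ~2e-2
print(P / 4 * n_pulse)     # n_glow(t_1/2), ~2.5e7
print(math.sqrt(0.1 * B**2 * L**2 * n_pulse / 16))  # M from eta*n_glow = 1, ~1.6e9 GeV
```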
This should be read side by side with the currently best laboratory bound for similar parameters for weakly coupled scalars or pseudo-scalars derived from the PVLAS experiment [11]; from the PVLAS exclusion limits for magnetically-induced rotation at \(B=2.3\,\)T, we infer \(M\gtrsim 2\times 10^{6}\,\text{GeV}\).3 We conclude that afterglow experiments are well suited for exploring unknown regions in the parameter space of chameleon models.
Footnote 3: Similar bounds have also been found by the BMV experiment [13] and by the earlier BFRT experiment [32] from photon regeneration searches; however, these bounds do not apply to chameleon models, since they involve a passage of the hypothetical particles through a solid wall.
As a side remark and a check of the calculation, we note that the total number of photons contained in the full afterglow can be obtained from Eq. (44) by extending the upper bound of the sum to infinity; this yields
\\[n_{\\gamma}^{\\text{glow}}(t=\\infty)=\\frac{1}{2}{\\cal P}n_{\\gamma}^{\\text{pulse}},\\]
which is one half of the total number of photons that would have been initially converted into chameleon particles; the other half creates an afterglow on the other (left) side of the vacuum chamber.
### Example II: An Optimized Experimental set-up
As a further example, let us consider a more optimized experimental set-up that nevertheless involves only parameters which are achievable by current standard means or systems available in the near future. Our optimized experimental set-up consists of an \(L=14.3\,\)m long magnetic field of strength \(B=9.5\,\)T which corresponds exactly to that of the OSQAR experiment at CERN [19]. We consider a laser system delivering a sufficiently short pulse with energy \(E_{\text{pulse}}\simeq 1\,\)kJ and wavelength \(\lambda=1053\,\)nm. These parameters agree with the specifications of the laser used by the BMV collaboration [13] or of the PHELIX laser [34] which is currently being built at the GSI (with possible upgrades to several kJ); its designed pulse duration is \(1-10\,\)ns, corresponding to a required chamber length of \(L+d>0.15-1.5\,\)m. In order to reduce the laser-beam energy deposit on the components of the set-up, the beam diameter may simply be kept comparatively wide (matching the geometry of the detector). Also, the pulse duration could be increased while keeping \(E_{\text{pulse}}\) fixed: by increasing the length \(d\) of the non-magnetized part of the vacuum chamber, e.g., choosing \(d\simeq 100\,\)m, the pulse duration could be extended up to \(1\,\mu\)s without disturbing interference effects. Assuming \(m^{2}L/4\omega\lesssim\pi/2\), the number of afterglow photons in the first half-life period for this optimized set-up (i.e. \(T=1\,\mu\)s, \(d=100\,\)m, \(B=9.5\,\)T and \(L=14.3\,\)m) yields
\\[\\text{optimized set-up:}\\quad n_{\\gamma}^{\\text{glow}}(t_{1/2})\\approx 1.1 \\times 10^{4}\\left(\\frac{10^{10}\\,\\text{GeV}}{M}\\right)^{2}. \\tag{45}\\]
The half-life of the afterglow effect in this set-up is:
\\[\\text{optimized set-up:}\\quad t_{1/2}\\approx 58.5\\,\\text{s}\\left(\\frac{M}{10^{6} \\,\\text{GeV}}\\right)^{2} \\tag{46}\\]Assuming that single photons can be detected in the optimized set-up, \\(\\eta_{\\rm det}=1\\), the non-observation of any photon afterglow during from one pulse in \\(t<t_{1/2}\\) would correspond to a lower bound on the coupling scale \\(M>10^{12}\\)GeV in the regime of chameleon vacuum masses \\(m<0.2\\)meV; however for such large values of \\(M\\), the half-life of the afterglow is exceedingly large \\(t_{1/2}\\sim 10^{6}\\,\\)yrs. It is more reasonable then to ask what lower-bound one could place on \\(M\\) if no photons are detected from the afterglow of a single pulse after a time \\(t_{\\rm expt}\\ll t_{1/2}\\) has passed. We find:
\\[\\text{optimized set-up:}\\quad n_{\\gamma}^{\\rm glow}(t_{\\rm expt})\\approx 8.4 \\left(\\frac{10^{9}\\,{\\rm GeV}}{M}\\right)^{4}\\left(\\frac{t_{\\rm expt}}{1\\,{\\rm min }}\\right);\\quad t_{\\rm expt}\\ll t_{1/2}. \\tag{47}\\]
We see that after one minute of measurements one could bound \\(M>1.7\\times 10^{9}\\,\\)GeV, and if measurements could be conducted constantly over a 24 hour period then this could be extended to \\(M>1.0\\times 10^{10}\\,\\)GeV.
It is interesting to confront these potential afterglow laboratory bounds with typical sensitivity scales obtained from astrophysical arguments: the non-observation of solar axion-like particles by the CAST experiment imposes a bound of \\(M>1.1\\times 10^{10}\\)GeV for \\(m\\lesssim 0.02\\)eV [33]; slightly less stringent bounds follow from energy-loss arguments for HB stars [21]. The similar order of magnitude demonstrates that laboratory measurements of afterglow phenomena can probe scales of new physics that have so far been explored only with astrophysical observations. Of course, we need to stress that these numbers should not literally be compared to each other, since they apply to different theoretical settings; in particular, the astrophysical constraints do not apply to chameleonic ALPs [8; 9], whereas the non-observation of an afterglow would not constrain axion models.
### Example III: The BMV set-up
As a final example, we consider the recent light-shining-through-a-wall experiment performed by the BMV collaboration [13], where a laser with frequency \(\omega=1.17\,\)eV, \(T=4.8\) ns, and \(E_{\rm pulse}\gtrsim 1\) kJ was used. The magnetic field strength was \(B=12.2\) T, and the magnetic field remained at its maximum value for about \(150\,\mu\)s; \(L=2\times 0.45\,\)m. If this set-up were modified to search for chameleons trapped in a vacuum chamber, then (when \(m^{2}L/4\omega\ll 1\)) we would find:
\\[\\frac{R_{N}}{\\eta_{\\rm det}}\\approx 4.53\\left(\\frac{10^{6}\\,{\\rm GeV}}{M} \\right)^{4}\\ \\ \\text{photons pulse}^{-1}.\\]
After \\(150\\,\\mu\\)s the chameleons will have made \\(\\approx 25000\\) complete passes through the vacuum chamber. If \\(\\theta\\lesssim 10^{-3}\\) then \\(t_{1/2}>150\\,\\mu\\)s, and so the duration of the afterglow effect would be limited by the length of the magnetic pulse. For \\(\\theta\\lesssim 10^{-3}\\), the total number of afterglow photons produced by each pulse that could potentially be detected in the time interval \\(T<t<150\\,\\mu\\)s is (given \\(\\eta_{\\rm det}\\approx 0.5\\)[13]):
\\[n_{\\gamma}^{\\rm glow}(150\\,\\mu\\text{s})\\approx 5.7\\times\\left(\\frac{10^{7}\\,{ \\rm GeV}}{M}\\right)^{2}\\ \\text{pulse}^{-1}.\\]
We conclude that the BMV apparatus, too, could search for a coupling parameter beyond \(M\simeq 10^{7}\,\)GeV. Of course, all experimental set-ups considered here can push the detection limits even further by accumulating statistics.
### Experimental Bounds on Chameleon Models
The potential experimental bounds on chameleon models derived from the three examples discussed above are shown in Fig. 1. For the full analysis, we retained the dependence on \\(m\\) by taking the full dependence of \\(\\mathcal{P}\\) on \\(m^{2}L/(4\\omega)\\) into account, cf. Eq. (40). In the case of weak coupling, i.e., large values of \\(M\\), typical values for the half-life of the afterglow are much longer than an experimentally feasible time of measurement. The achievable sensitivity therefore depends on the time duration \\(t_{\\rm expt}\\) of the afterglow measurement; for instance, the maximal sensitivity for \\(M\\) scales like \\(M_{\\rm max}\\sim t^{1/4}\\). In Fig. 1, we show sensitivity bounds for three different measurement durations, \\(t_{\\rm expt}=1\\)s, \\(1\\)min, \\(1\\)day.
For small \\(m\\), we rediscover the \\(m\\)-independent sensitivity limits which have been discussed analytically in the preceding subsections. For larger \\(m\\), we observe the typical oscillation pattern with sensitivity holes which correspond to full \\(2\\pi\\) oscillation phases of the photon-chameleon system.
## V Background and systematic effects
### Standard Reflection of Photons
So far, we have assumed that the optical windows forming the caps of the jar are completely transparent. In a real experiment, this transparency will be less than perfect and there will be some reflection of photons. For simplicity, we assume that all photons incident on the optical windows are either reflected or transmitted, i.e., we ignore any absorption. We also assume, for simplicity, that the photons hit the optical windows perpendicularly. The coefficients of reflection and transmission are then given by
\\[T=\\frac{16n_{\\rm cap}^{2}n_{\\rm vac}^{2}}{(n_{\\rm cap}+n_{\\rm vac})^{4}},\\qquad R =1-T, \\tag{48}\\]
where \\(n_{\\rm cap}\\) and \\(n_{\\rm vac}\\) are the indices of refraction for the optical window and the laboratory vacuum, respectively; typical values are \\(n_{\\rm vac}=1\\) and \\(n_{\\rm cap}=1.5\\) which gives \\(R\\approx 0.078\\). For the time interval \\((2N+1)(L+d)\\lesssim t\\lesssim T+(2N+1)(L+d)<(2(N+1)+1)(L+d)\\), the rate of afterglow photons leaving the jar at \\(z=L\\) is given by Eq. (42). In this time interval, the photons which have been trapped owing to standard reflection leave the jar at
Figure 1: Sensitivity limits for the scale \\(M\\) specifying the inverse chameleon-photon coupling vs. the chameleon mass for the BMV [13] (red dotted line) and GammeV [20] (blue solid line) experiments as well as a hypothetical optimized experimental set-up (green dashed line); The experimental parameter values are detailed in the main text. The top left panel corresponds to an afterglow measurement with a duration \\(t_{\\rm exp}\\) of one day, the top right panel corresponds to one minute and the lower figure to one second. The gain in sensitivity is less than an order of magnitude by waiting a day rather than a minute.
\\(z=L\\) with a rate
\\[R_{\\rm reflect}(N)=TR^{2(N-1)}\\frac{E_{\\rm pulse}}{\\omega T}\\,\\eta_{\\rm det},\\]
where we have neglected a possible interplay between reflection and chameleon trapping; this is justified as long as the time scales for reflection trapping and chameleon trapping are very different. We define \\(N_{r}\\) by the requirement that \\(R_{\\rm reflect}(N)\\leq R_{\\rm glow}(N)\\) for all \\(N\\geq N_{r}\\). We then find that
\\[N_{r}\\approx 1+\\left[\\frac{\\ln{\\cal P}+R/2}{\\ln R+{\\cal P}}\\right]\\approx 1+ \\frac{\\ln{\\cal P}}{\\ln R},\\]
where we have used \\({\\cal P}\\), \\(R\\ll 1\\). Even if \\({\\cal P}\\approx 10^{-30}\\) we still have \\(N_{r}\\approx 27\\); if \\({\\cal P}\\approx 10^{-12}\\) then \\(N_{r}\\approx 11\\). The effect of the standard reflected photons is negligible compared with the afterglow effect for \\(N>N_{r}\\); moreover, it is clear that generally \\(N_{r}\\ll N_{\\rm pass}\\). For example: with the GammeV parameters (\\(T=5\\) ns and \\(L+d\\approx 6\\) m), the afterglow dominates after 440 ns if \\({\\cal P}\\approx 10^{-12}\\) or after about 1100 ns if \\({\\cal P}\\approx 10^{-30}\\); both of these time scales are generally far smaller than both the half-life of the afterglow effect and the run-time of the experiments. We conclude that reflection of photons off the caps of the vacuum chamber will not effect the potential of such experiments to detect any chameleonic afterglow.
### Extinction of Chameleon Particles by a Medium
Before we can be sure that the half-life of the chameleon afterglow calculated above is accurate, we must consider whether scattering and absorption of chameleon particles by the matter in the laboratory vacuum inside the jar places an important upper limit on the number of passes that the chameleon field makes through the jar. Roughly speaking, if light couples to matter consisting of particles with mass \(m_{\rm p}\) with a strength \(e^{2}\), then the chameleon field couples to it with a strength \(m_{\rm p}^{2}/M^{2}\). As a beam of light with frequency \(\omega\) travels a distance \(L+d\) through a medium of free particles with charge \(\pm e\), mass \(m_{\rm p}\gg\omega\) and number density \(n_{\rm p}\), Thomson scattering reduces its intensity by a factor \(\exp(-\Gamma_{\gamma}(L+d))\), where:
\\[\\Gamma_{\\gamma}=\\frac{8\\pi n_{\\rm p}e^{4}}{3m_{\\rm p}^{2}}.\\]
Analogously, the intensity of the beam of chameleon particles traveling within a medium of free particles with mass \\(m_{\\rm p}\\gg\\omega\\) and number density \\(n_{\\rm p}\\) is reduced by a factor \\(\\exp(-\\Gamma_{\\phi}(L+d))\\), where
\\[\\Gamma_{\\phi}=\\frac{8\\pi n_{\\rm p}m_{\\rm p}^{2}}{3M^{4}}.\\]
Let us consider \\(\\Gamma_{\\phi}\\) for a chameleon field propagating through a laboratory vacuum with pressure \\(P_{\\rm vac}\\). For simplicity, we assume that the vacuum contains only N\\({}_{2}\\) molecules, which gives
\\[\\Gamma_{\\phi}=1.97\\times 10^{-30}\\left(\\frac{10^{6}\\,{\\rm GeV}}{M}\\right)^{4} \\left(\\frac{P_{\\rm vac}}{\\rm torr}\\right)\\,{\\rm m}^{-1}.\\]
After a time \\((2N+1)(L+d)\\lesssim t\\lesssim T+(2N+1)(L+d)<(2(N+1)+1)(L+d)\\), the chameleon particles that were created by the initial laser pulse have traveled on a distance \\(2N(L+d)\\) and so scattering has reduced the intensity of the chameleon particles by \\(F_{\\rm scat}(N)=\\exp(-2N\\Gamma_{\\phi}(L+d))\\). We define \\(N_{\\rm scat}\\) by \\(F_{\\rm scat}(N_{\\rm scat})=1/2\\), i.e.,
\\[N_{\\rm scat}=\\frac{\\ln 2}{2\\Gamma_{\\phi}(L+d)}\\approx 1.8\\times 10^{29}\\left( \\frac{M}{10^{6}\\,{\\rm GeV}}\\right)^{4}\\left(\\frac{\\rm torr}{P_{\\rm vac}} \\right)\\left(\\frac{\\rm m}{L+d}\\right).\\]
The corresponding half-life due to scattering \\(t_{1/2}^{\\rm scat}\\) is thus given by
\\[t_{1/2}^{\\rm scat}=(2N_{\\rm scat}+1)(L+d)\\approx 5.9\\times 10^{20}\\,{\\rm s} \\left(\\frac{M}{10^{6}\\,{\\rm GeV}}\\right)^{4}\\left(\\frac{\\rm torr}{P_{\\rm vac }}\\right).\\]
For \\(m(L+d)/4\\omega\\lesssim O(1)\\), \\(P\\lesssim O(1)\\,{\\rm torr}\\), \\(L+d\\sim O(1)\\,{\\rm m}\\) and \\(M\\gtrsim 10^{6}\\,{\\rm GeV}\\), is it clear that
\\[t_{1/2}^{\\rm scat}\\gg t_{1/2}.\\]
We conclude that the scattering of chameleon particles by atoms in the laboratory vacuum is negligible over the time scales of interest, \\(t\\lesssim t_{1/2}\\).
### Quality of the Vacuum
We have found that the mass of the chameleon particles inside the interaction region plays a role in determining the rate of afterglow photon emission. In \((2N+1)(L+d)\lesssim t\lesssim T+(2N+1)(L+d)<(2(N+1)+1)(L+d)\), this rate was found to be
\\[{\\cal P}^{2}(1-{\\cal P})^{2(N-1)}\\frac{E_{\\rm pulse}}{\\omega T},\\]
and the half-life for this effect is:
\\[t_{1/2}=\\frac{(L+d)\\ln 2}{{\\cal P}},\\quad{\\rm where}\\quad{\\cal P}=\\frac{4B^{2} \\omega^{2}}{M^{2}m^{4}}\\sin^{2}\\frac{m^{2}L}{4\\omega}. \\tag{49}\\]
One of the key properties of chameleon fields is that the mass, \(m\), is not fixed but depends on the density environment, which is in turn determined by the quality or pressure, \(P_{\rm vac}\), of the vacuum. From Eq. (49), it is clear that the probability \({\cal P}\), and hence the afterglow photon rate, is greatly suppressed if \(m^{2}L/4\omega\gg\pi/2\). Additionally, if \(m^{2}L/4\omega\ll\pi/2\), then \({\cal P}\) is almost independent of \(m\). The effective local matter density to which the chameleon couples is \(\rho_{\rm eff}=\rho_{\rm vac}+B^{2}/2\) and only \(\rho_{\rm vac}\) depends on \(P_{\rm vac}\). Thus decreasing \(\rho_{\rm vac}\) can only reduce \(\rho_{\rm eff}\), and hence \(m\), so far. In all experiments, a smaller vacuum pressure always increases the range of potentials and parameter space that can be detected or ruled out; the downside is that making \(P_{\rm vac}\) smaller inevitably increases costs.
Once \\(V(\\phi)\\) and \\(M\\) are specified, it is always possible to find some value of \\(\\rho_{\\rm eff}\\) such that \\(m^{2}L/4\\omega=\\pi/2\\); we denote the special value of the effective ambient density by \\(\\bar{\\rho}_{\\rm eff}\\). If \\(\\bar{\\rho}_{\\rm eff}-B^{2}/2>0\\), then \\(\\rho=\\bar{\\rho}_{\\rm eff}\\) can be realized by setting \\(\\rho_{\\rm vac}=\\bar{\\rho}_{\\rm vac}=\\bar{\\rho}_{\\rm eff}-B^{2}/2\\). Since \\(m\\) depends on \\(\\rho_{\\rm vac}\\) only through \\(\\rho_{\\rm eff}=\\rho_{\\rm vac}+B^{2}/2\\), there is little point in making \\(\\rho_{\\rm vac}\\ll B^{2}/2\\) as opposed to say \\(\\rho_{\\rm vac}\\approx 0.1B^{2}/2\\), as it will increase costs but result in no great chance in the value of \\(\\rho_{\\rm eff}\\). We therefore define the critical value of the vacuum density as so:
\\[\\rho_{\\rm vac}^{\\rm crit}(M,B)=\\max\\left(B^{2}/2,\\bar{\\rho}_{\\rm eff}-B^{2}/2 \\right).\\]
The value of \\(\\rho_{\\rm vac}^{\\rm crit}\\) is significant as the afterglow rate for \\(\\rho_{\\rm vac}\\gg\\rho_{\\rm vac}^{\\rm crit}(M,B)\\) is greatly suppressed compared to the rate for \\(\\rho_{\\rm vac}\\lesssim\\rho_{\\rm vac}^{\\rm crit}(M,B)\\). There is also relatively little gain in sensitivity in having \\(\\rho_{\\rm vac}\\ll\\rho_{\\rm vac}^{\\rm crit}(M,B)\\) as opposed to be \\(\\rho_{\\rm vac}\\sim\\rho_{\\rm vac}^{\\rm crit}(M,B)\\). If one is interested in searching for chameleon fields with \\(M\\gtrsim M_{\\rm min}\\), where \\(M_{\\rm min}\\) is the smallest value of \\(M\\) in which one is interested (e.g. the smallest value of \\(M\\) that is not already ruled out by other experiments), the optimal choice for the density of the vacuum, both in terms of experimental sensitively and cost, is \\(\\rho_{\\rm vac}\\sim\\rho_{\\rm vac}^{\\rm crit}(M_{\\rm min},B)\\).
For definiteness we consider a chameleon theory with power-law potential:
\\[V(\\phi)=\\Lambda_{c}^{4}(1+\\Lambda^{n}/\\phi^{n})\\]
for some \\(n\\) and with \\(\\Lambda=\\Lambda_{c}=2.4\\times 10^{-3}\\,{\\rm eV}\\). The scale of \\(\\Lambda\\) could be seen as 'natural' if the chameleon field is additionally responsible for the late time acceleration of the Universe. We use parameters for the GammeV experiment [20] to provide an example. In this set-up \\(\\omega=2.3\\,{\\rm eV}\\), \\(\\Lambda=2.3\\times 10^{-3}\\,{\\rm eV}\\), \\(B=5\\) T and \\(L=6\\,{\\rm m}\\). The most recent PVLAS results imply that \\(M\\gtrsim 10^{6}\\,{\\rm GeV}\\)[8]. Taking \\(M_{\\rm min}=10^{6}\\,{\\rm GeV}\\), we find that with \\(B=5\\,{\\rm T}\\):
\\[\\rho_{\\rm vac}^{\\rm crit}(n=1/2,M_{\\rm min},B) = 3.2\\times 10^{-10}\\,{\\rm kg}\\,{\\rm m}^{-3},\\] \\[\\rho_{\\rm vac}^{\\rm crit}(n=1,M_{\\rm min},B) = 2.8\\times 10^{-10}\\,{\\rm kg}\\,{\\rm m}^{-3},\\] \\[\\rho_{\\rm vac}^{\\rm crit}(n=4,M_{\\rm min},B) = 2.6\\times 10^{-11}\\,{\\rm kg}\\,{\\rm m}^{-3}.\\]
If \\(\\rho_{\\rm vac}\\lesssim 10^{-10}\\,{\\rm kg}\\,{\\rm m}^{-3}\\) then in this set-up, \\(m^{2}L/4\\omega<\\pi/2\\) for \\(0.053<n<2.6\\), and if \\(\\rho_{\\rm vac}\\lesssim 10^{-11}\\,{\\rm kg}\\,{\\rm m}^{-3}\\) then we have \\(m^{2}L/4\\omega<\\pi/2\\) for \\(0.017<n<4.4\\). For models with a power-law potential, the best trade-off between cost and sensitivity for GammeV set-up, would therefore be to use \\(\\rho_{\\rm vac}\\approx 10^{-11}-10^{-10}\\,{\\rm kg}\\,{\\rm m}^{-3}\\). This corresponds to an optimal vacuum pressure: \\(P_{\\rm vac}\\lesssim 10^{-8}-10^{-7}\\,{\\rm torr}\\). A vacuum of this quality was used in the recent PVLAS axion search [10; 11]. If one could rule out theories with \\(M<10^{8}\\,{\\rm GeV}\\) by some other means then very little would be gained in terms of sensitivity to models with \\(n\\sim O(1)\\) by making the vacuum pressure much smaller than: \\(P_{\\rm vac}^{\\rm crit}\\sim 10^{-5}-10^{-4}\\,{\\rm torr}\\).
If we consider instead the \"optimal set-up\" that we described in Section IV, then for \\(M_{\\rm min}=10^{8}\\,{\\rm GeV}\\) and \\(n\\sim O(1)\\), the optimal choice for the vacuum pressure is \\(P_{\\rm vac}^{\\rm crit}\\sim 10^{-6}-10^{-5}\\,{\\rm torr}\\). In the context of chameleon models with a power-law potential with \\(n\\sim O(1)\\) and \\(\\Lambda\\approx\\Lambda_{c}=2.4\\times 10^{-3}\\,{\\rm eV}\\), relatively little would be gained in terms of sensitivity by lowering \\(P_{\\rm vac}\\) further than this. However, lower values of \\(P_{\\rm vac}\\) would increase the ranges of values of \\(\\Lambda\\) and \\(n\\) that could be detected.
Notice that, since one can vary the chameleon mass, \(m_{\phi}\), by changing the local density, \(\rho_{\rm vac}\), it would in principle be possible to measure \(m_{\phi}\) as a function of \(\rho_{\rm eff}\) in these experiments by making use of the relation \(\rho_{\rm eff}=-MV^{\prime}(\phi)\). One would then know \(V^{\prime\prime}(\phi)\equiv m_{\phi}^{2}\) as a function of \(V^{\prime}(\phi)\) and hence also \(\phi(V^{\prime}(\phi))\) up to a constant. From this one could reconstruct \(V^{\prime}(\phi+{\rm const})\). By integrating, one would then arrive at \(V(\phi+{\rm const})+{\rm const}\), for some range of \(\phi\). However, since the minimum size of \(\rho_{\rm eff}\) is limited by \(B^{2}/2\approx 10^{-10}\,{\rm kg\,m^{-3}}\) for \(B=5\,\)T, one cannot probe \(V(\phi)\) in the region that is cosmologically interesting today, i.e., \(\rho\approx 10^{-27}\,{\rm kg\,m^{-3}}\). It must be stressed though that the sensitivity of ALP experiments to \(m\) is strongly peaked about values for which \(m^{2}L/4\omega\approx\pi/2\), and so this is not the most ideal probe of \(V(\phi)\); ALP experiments are best suited to measuring or placing lower bounds on \(M\). Casimir force measurements have recently been shown in Ref. [7] to provide a potentially much more direct probe of \(V(\phi)\), but conversely say little about the chameleon-to-matter coupling, \(M\).
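The reconstruction logic sketched above can be made explicit numerically. The following Python sketch (ours) generates mock "measurements" of \(m_{\phi}^{2}\) at a range of densities from a known \(n=1\) power-law potential and then recovers \(\phi\) (and hence \(V\)) up to additive constants from \({\rm d}\phi={\rm d}V^{\prime}/V^{\prime\prime}\):

```python
# A schematic sketch (ours) of the potential-reconstruction argument.
import numpy as np

Lam5 = (2.4e-3) ** 5                # Lambda^{4+n} for n = 1, in eV^5
phi_true = np.logspace(2, 4, 400)   # sampled minima (eV), illustrative
Vp = -Lam5 / phi_true**2            # V'(phi) = -rho_eff/M at each density
m2 = 2 * Lam5 / phi_true**3         # "measured" m_phi^2 = V''(phi)

# reconstruction: d(phi) = dV'/m^2 and dV = V' d(phi)
m2_mid = 0.5 * (m2[:-1] + m2[1:])
phi_rec = np.cumsum(np.diff(Vp) / m2_mid)          # phi, up to a constant
V_rec = np.cumsum(Vp[:-1] * np.diff(Vp) / m2_mid)  # V, up to a constant

print(np.allclose(phi_rec, phi_true[1:] - phi_true[0], rtol=1e-2))  # True
```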
## VI Conclusions
In this paper, we have investigated the possibility of using an _afterglow_ phenomenon as a unique chameleon trace in optical experiments. The existence of this afterglow is directly linked with the environment dependence of the chameleon mass parameter. The latter causes the trapping of chameleons inside the vacuum chamber where they have been produced, e.g., by a laser pulse interacting with a strong magnetic field. The afterglow itself is linearly polarized perpendicularly to the magnetic field.
We find that the trapping can be so efficient that the re-conversion of chameleons into photons in a magnetic field causes an afterglow over macroscopic time scales. For instance, for values of the inverse chameleon-photon coupling \(M\) slightly above the current detection limit \(M\sim 10^{6}\,\)GeV [11] and magnetic fields of a few Tesla strength and a few meters length, the half-life of the afterglow is on the order of minutes. Current experiments such as ALPS, BMV, GammeV and OSQAR can improve the current limit on \(M\) by a few orders of magnitude. With present-day technology, even a parameter range of chameleon-photon couplings appears accessible which is comparable in magnitude, e.g., to bounds on the axion-photon coupling derived from astrophysical considerations, i.e., \(M\sim 10^{10}\,\)GeV.
In the present work, we mainly considered the afterglow from an initial short laser pulse, the associated length of which fits into the optical path length within the vacuum chamber. From a technical viewpoint, this choice avoids unwanted interference effects, but it also has an experimental advantage: the resulting afterglow photons all arrive in pulses of the same duration as the initial pulse and are separated by a known time (i.e., \\(2(L+d)\\)). This can be useful in extracting the signal from any background noise which should not be correlated in this way. Also the polarization dependence of the afterglow can be used to distinguish a possible signal from noise.
As discussed in the appendix, long laser pulses or continuous laser light could also be used in an experiment, if the resulting interference can be controlled to some extent such that a chameleon-photon resonance exists at least in some part of the apparatus. This resonance phenomenon is particularly sensitive to smaller chameleon masses. Unfortunately, full control of the chameleon resonance may be experimentally very difficult; but if it were possible, the gain in afterglow photons, and subsequently in sensitivity to a chameleonic sector, could be very significant.
We would like to stress again that the afterglow phenomenon in the experiments considered here is a smoking gun for a chameleonic particle. From the viewpoint of optical experiments, a number of further mechanisms have been proposed that could induce optical signatures in laboratory experiments, but still evade astrophysical constraints [35; 36; 37; 38]. Distinguishing between the various scenarios in the case of a positive signal for, say, ellipticity and dichroism, may not always be possible with the current laboratory set-ups. But the observation of an afterglow would strongly point to a chameleon mechanism.
In this sense, it appears worthwhile to reconsider other concepts on using optical signatures to deduce information about the underlying particle-physics content, e.g., using strong laser fields [40; 41; 42] or astronomical observations [43; 44; 45; 46; 47], in the light of a chameleon field.
Finally, it is interesting to notice that it could, in principle, be possible to measure the varying chameleon mass \(m\) by varying the vacuum pressure in the experiment and thereby extract information about the scalar potential \(V(\phi)\). The reconstruction of \(V(\phi)\) from this afterglow experiment assumes, however, that the value of \(M\) that one detects from afterglow experiments (i.e., the photon-chameleon coupling) is the same as the matter-chameleon coupling. This, of course, need not be the case, although we might expect them to be of the same magnitude. By contrast, the reconstruction of \(V(\phi)\) from the Casimir force tests [7] does not run into this problem, since these experiments would effectively measure \(V(\phi)\) directly and provide only weak constraints on \(M\). If chameleon fields were to be detected, one could, by comparing the reconstructions of \(V(\phi)\) from afterglow and Casimir experiments, actually measure not only the chameleon-photon coupling but also the chameleon-matter coupling.
In summary, our afterglow estimates clearly indicate that the new-physics parameter range accessible even with current technology is substantial. Afterglow searches therefore represent a powerful tool to probe physics halfway up to the Planck scale.
###### Acknowledgements.
We thank P. Brax and C. van de Bruck for useful discussions. HG acknowledges support by the DFG under contract Gi 328/1-4 (Emmy-Noether program). DFM is supported by the Alexander von Humboldt Foundation. DJS is supported by STFC.
## Appendix A Interference effects
In this appendix, we discuss the possible use of interference effects for the afterglow phenomenon. Whereas the short-pulse experiments considered in the main text provide for a clean signal, the occurrence of interference depends more strongly on the details and the precision of the experiment.
Let us consider a long pulse of duration \\(T\\gg 2(L+d)\\) with frequency \\(\\omega\\) and a Gaussian envelope,
\\[a_{\\rm in}(t,z)=a_{0}\\,e^{-i\\omega t+ikz}\\,e^{-\\frac{1}{2}\\frac{(t-z)^{2}}{T^{2 }}}. \\tag{10}\\]
In this case, the afterglow amplitude in Eq. (27) or Eq. (36) for a given time \\(t\\) at position \\(z\\) (say, at a time \\(t>T\\) after the initial pulse has passed the detector at \\(z\\simeq L+d\\)) receives contributions from many terms in the \\(n\\) sum; typically, the Gaussian profile picks up a wide range of large \\(n\\) values.
Extending the summation range of \(n\) from \(-\infty\) to \(\infty\) (instead of 1 to \(\infty\)) introduces only an exponentially small error owing to the Gaussian envelope, but allows us to perform a Poisson resummation of the \(n\) sum; for instance, the term in the second line of Eq. (36) yields (\(\bar{\omega}=\omega\))
\\[\\sum_{n=1}^{\\infty}e^{-i\\frac{m^{2}((2n+1)L+2nd)}{\\omega}}a_{\\rm in }^{\\prime\\prime}\\left(t-(2n+1)(L+d)-\\frac{m^{2}((2n+1)L+2nd)}{2\\omega^{2}}\\right)\\] \\[\\rightarrow-\\sqrt{\\frac{\\pi}{2}}a_{0}\\,\\omega^{2}e^{-i\\omega t+ ikz+i\\frac{m^{2}}{2\\omega}d+i\\omega(L+d)f_{-}}\\frac{T}{(L+d)f_{+}}e^{i\\omega \\frac{f_{-}}{f_{+}}(t-z+\\frac{1}{2}\\frac{m^{2}}{\\omega^{2}}d-(L+d)f_{+})}\\] \\[\\quad\\times\\sum_{m=-\\infty}^{\\infty}e^{i\\frac{m}{(L+d)f_{+}}(t-z +\\frac{1}{2}\\frac{m^{2}}{\\omega^{2}}d-(L+d)f_{+})}e^{-\\frac{1}{2}\\frac{T^{2}} {4(L+d)^{2}f_{+}^{2}}[2\\omega(L+d)f_{-}-2\\pi m]^{2}}, \\tag{11}\\]
where \\(f_{\\pm}=1\\pm\\frac{1}{2}\\frac{m^{2}}{\\omega^{2}}\\), and we have dropped terms of order \\(t/(T^{2}\\omega)\\) and \\(1/(T\\omega)^{2}\\). The Gaussian factor in the last term causes a strong exponential suppression, since \\(T\\gg 2(L+d)\\) and \\((L+d)\\omega\\gg 1\\) by assumption, and typically \\(f_{\\pm}={\\cal O}(1)\\). Therefore, this factor essentially picks out one term in the \\(m\\) sum that comes closest to the resonance condition
\\[m=\\frac{\\omega(L+d)f_{-}}{\\pi}. \\tag{12}\\]
Let us denote the integer that is closest to this resonance condition with \\(m_{\\rm res}\\). Then the afterglow amplitude reads
\\[a_{\\rm glow}(t,z)=-a_{0}\\sqrt{\\frac{\\pi}{2}}\\frac{4B^{2}\\omega^{2}}{M^{2}m^{4} }\\frac{T}{(L+d)f_{+}}\\sin^{2}\\left[\\frac{m^{2}L}{4\\omega^{2}}\\left(\\omega\\frac {f_{-}}{f_{+}}+\\pi m_{\\rm res}\\right)\\right]e^{-\\frac{t}{2}\\frac{T^{2}}{4(L+d )f_{-}}(2\\omega(L+d)f_{-}-2\\pi m_{\\rm res})^{2}}\\,e^{i\\varphi(t,z)}, \\tag{13}\\]
where \\(\\varphi(t,z)\\) summarizes all phases of the amplitude. Whether or not the resonance condition can be met is a delicate experimental issue. The above formula suggests that the experimental parameters require an extraordinary fine-tuning, since the width of the resonance is extremely small. However, in a real experiment, systematic uncertainties may invalidate the idealized scenario from the very beginning; for instance, the laser beam has a finite cross section, and the length of the vacuum chamber \\(L+d\\) may vary across this cross section a little bit. Variations on the order of the laser wave length lead to uncertainties of order 1 on the right-hand side of Eq. (12). Therefore, the resonance condition might be satisfied for some part of the beam only.
For a simplified estimate, let us assume that the beam cross section is a circular disc. The center of the disc is assumed to satisfy the resonance condition exactly, but the length \\(L+d\\) varies linearly from the center to the edge of the disc by a bit less than half a laser wavelength \\(\\lesssim\\lambda/2\\). Averaging over the Gaussian resonance factor in Eq. (13) then corresponds to integrating radially over the resonance peak. This procedure leads to the replacement
\\[e^{-\\frac{1}{2}\\frac{\\tau^{2}}{4(L+d)^{2}f_{+}^{2}}(2\\omega(L+d)f_{-}-2\\pi m _{\\rm res})^{2}}\\rightarrow\\frac{8}{(T\\omega)^{2}}\\left(\\frac{f_{+}}{f_{-}} \\right)^{2}\\frac{(L+d)^{2}}{\\lambda^{2}}=\\frac{2}{\\pi^{2}}\\frac{(L+d)^{2}}{T^ {2}}\\left(\\frac{f_{+}}{f_{-}}\\right)^{2}. \\tag{14}\\](More resonance points in the beam cross section would give a sum of similar terms on the right-hand side of Eq. (100).) In this case, we can read off the probability amplitude for a photon in the Gaussian pulse to reappear in the afterglow at time \\(t>T\\gg(L+d)\\) averaged over the resonance,
\\[{\\cal P}_{\\rm average} \\simeq \\frac{8}{\\pi^{2}}\\sqrt{\\frac{\\pi}{2}}\\frac{B^{2}\\omega^{2}}{M^{2} m^{4}}\\frac{(L+d)}{T}\\frac{f_{+}}{f_{-}^{2}}\\sin^{2}\\left[\\frac{m^{2}L}{4 \\omega^{2}}\\left(\\omega\\frac{f_{-}}{f_{+}}+\\pi m_{\\rm res}\\right)\\right] \\tag{101}\\] \\[\\simeq \\frac{4}{\\pi^{2}}\\sqrt{\\frac{\\pi}{2}}\\frac{B^{2}\\omega^{2}}{M^{2} m^{4}}\\frac{(L+d)}{T}\\frac{1+\\frac{1}{2}\\frac{m^{2}}{\\omega^{2}}}{(1-\\frac{1}{2} \\frac{m^{2}}{\\omega^{2}})^{2}},\\]
where we have approximated the \\(\\sin^{2}\\) by its phase average \\(1/2\\), since the phase is large and may also vary spatially or over the measurement period.
Let us compare the resulting probability for a long pulse \(T=10\,{\rm s}\) with that of a short-pulse set-up for \(B=5\,{\rm T}\), \(L\simeq L+d=6\,{\rm m}\), \(\omega=2\pi/(532\,{\rm nm})\) in the region of small masses \(m\lesssim 0.1\,{\rm meV}\),
\\[{\\cal P}_{\\rm short\\ pulse}^{2} = \\left[\\frac{4B^{2}\\omega^{2}}{M^{2}m^{4}}\\sin^{2}\\left(\\frac{m^{ 2}L}{4\\omega}\\right)\\right]^{2}\\simeq\\frac{B^{4}L^{4}}{16M^{4}}\\simeq 3.1 \\times 10^{-21}\\left(\\frac{10^{6}{\\rm GeV}}{M}\\right)^{4},\\] \\[{\\cal P}_{\\rm average}^{2} = \\left[\\frac{4}{\\pi^{2}}\\sqrt{\\frac{\\pi}{2}}\\frac{B^{2}\\omega^{2} }{M^{2}m^{4}}\\frac{(L+d)}{T}\\frac{1+\\frac{1}{2}\\frac{m^{2}}{\\omega^{2}}}{(1- \\frac{1}{2}\\frac{m^{2}}{\\omega^{2}})^{2}}\\right]^{2}\\simeq 2.8\\times 10^{-33} \\left(\\frac{10^{6}{\\rm GeV}}{M}\\right)^{4}\\left(\\frac{0.1{\\rm meV}}{m} \\right)^{8}. \\tag{102}\\]
Here it is important to stress that the calculation has been performed in the limit \\(\\theta\\simeq\\omega B/(m^{2}M)\\ll 1\\), corresponding to
\\[1\\gg\\theta\\simeq 2.2\\times 10^{-4}\\left(\\frac{10^{6}{\\rm GeV}}{M}\\right)\\left( \\frac{0.1{\\rm meV}}{m}\\right)^{2}. \\tag{103}\\]
In other words, the validity range of Eqs. (15) and (16) does not extend to arbitrarily small masses. Still, for small chameleon masses \(m\) and larger values of \(M\) (such that Eq. (17) holds), the averaged resonance probability can exceed that of the short-pulse case in the present example.
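As a quick numerical check of the estimate in Eq. (17), the quoted value can be reproduced with the standard natural-unit conversions (\(1\,{\rm T}\approx 195.35\,{\rm eV^{2}}\), \(\hbar c\approx 197.33\,{\rm eV\,nm}\)); the short sketch below is illustrative only:

```python
import math

# Check of theta ~ omega*B/(m^2*M) in natural units (hbar = c = 1).
hbar_c_eV_nm = 197.327
omega = 2.0 * math.pi * hbar_c_eV_nm / 532.0   # 532 nm photon energy, ~2.33 eV
B = 5.0 * 195.35                               # 5 Tesla in eV^2
m = 1.0e-4                                     # chameleon mass, 0.1 meV in eV
M = 1.0e6 * 1.0e9                              # 10^6 GeV in eV

theta = omega * B / (m**2 * M)
print(f"theta = {theta:.2e}")   # ~2.3e-4, consistent with the estimate above
```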
Using, for instance, a 10 s long pulse from a 100 W laser, the number of photons in the afterglow will be
\\[n_{\\rm glow}\\simeq 2\\times 10^{-4}\\left(\\frac{10^{6}{\\rm GeV}}{M}\\right)^{4} \\left(\\frac{0.1{\\rm meV}}{m}\\right)^{8}\\left(\\frac{t_{\\rm expt}}{1{\\rm s}} \\right), \\tag{104}\\]
where \\(t_{\\rm expt}\\) denotes the measurement time of the afterglow. We conclude that such an experiment can become sensitive to the coupling scales of order \\(M\\sim 10^{10}\\)GeV for chameleon masses in the sub \\(\\mu\\)eV range. We stress again that the precise sensitivity limits strongly depend on the experimental details and the feasibility to exploit a chameleon resonance in the vacuum cavity. If such a resonance can be created in an experiment the interference measurement can provide for a handle on a measurement of the chameleon mass, since the dependence on the latter is rather strong.
If a fully controlled resonance could be built up in an experiment, the suppression factor on the right-hand side of Eq. (14) arising from averaging would be replaced by 1; this would correspond to an enhancement factor of \(10^{18}\) for the probability amplitude in the example given above. From an experimental viewpoint, controlling the resonance a priori seems, of course, rather difficult. From the non-observation of an afterglow in a long-pulse experiment, it would be difficult to conclude whether this points to rather strong bounds on \(M\) or to the fact that the resonance condition is not sufficiently met. We therefore recommend short-pulse experiments as a much cleaner and well controllable set-up.
## References
* (1) For a review of experimental tests of the Equivalence Principle and General Relativity, see C.M. Will, _Theory and Experiment in Gravitational Physics_, 2nd Ed., (Basic Books/Perseus Group, New York, 1993); C.M. Will, Living Rev. Rel. **9**, 3 (2006).
* (2) J. Khoury and A. Weltman, Phys. Rev. Lett. **93**, 171104 (2004); Phys. Rev. D **69**, 044026 (2004).
* (3) D. F. Mota and D. J. Shaw, Phys. Rev. D.**75**, 063501 (2007); Phys. Rev. Lett. **97** (2006) 151102.
* (4) See, for instance, [http://en.wikipedia.org/wiki/Chameleon](http://en.wikipedia.org/wiki/Chameleon).
* (5) Ph. Brax, C. van de Bruck, A.-C. Davis, J. Khoury and A. Weltman, Phys. Rev. D **70**, 123518 (2004).
* (6) Ph. Brax, C. van de Bruck, A. C. Davis and A. M. Green, Phys. Lett. B **633**, 441 (2006).
* (7) P. Brax, C. van de Bruck, A. C. Davis, D. F. Mota and D. J. Shaw, arXiv:0709.2075 [hep-ph].
* (8) Ph. Brax, C. van de Bruck and A. C. Davis, arXiv:hep-ph/0703243.
* (9) P. Brax, C. van de Bruck, A. C. Davis, D. F. Mota and D. J. Shaw, arXiv:0707.2801 [hep-ph].
* (10) E. Zavattini _et al._ [PVLAS Collaboration], Phys. Rev. Lett. **96**, 110406 (2006) [arXiv:hep-ex/0507107].
* (11) E. Zavattini _et al._ [PVLAS Collaboration], arXiv:0706.3419 [hep-ex].
* (12) S. J. Chen, H. H. Mei and W. T. Ni [Q& A Collaboration], hep-ex/0611050.
* (13) C. Robilliard, R. Battesti, M. Fouche, J. Mauchain, A. M. Sautivet, F. Amiranoff and C. Rizzo, arXiv:0707.1296 [hep-ex].
* (14) R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. **38**, 1440 (1977); Phys. Rev. D **16**, 1791 (1977); S. Weinberg, Phys. Rev. Lett. **40**, 223 (1978); F. Wilczek, Phys. Rev. Lett. **40**, 279 (1978).
* (15) L. B. Okun, Sov. Phys. JETP **56**, 502 (1982).
* (16) B. Holdom, Phys. Lett. B **166** (1986) 196.
* (17) K. Ehret _et al._, arXiv:hep-ex/0702023.
* (18) A. V. Afanasev, O. K. Baker and K. W. McFarlane, arXiv:hep-ph/0605250.
* (19) P. Pugnat _et al._, CERN-SPSC-2006-035; see [http://graybook.cern.ch/programmes/experiments/OSQAR.html](http://graybook.cern.ch/programmes/experiments/OSQAR.html).
* (20) see [http://gammev.fnal.gov/](http://gammev.fnal.gov/).
* (21) G. G. Raffelt, arXiv:hep-ph/0611350.
* (22) E. Masso and J. Redondo, JCAP **0509**, 015 (2005) [arXiv:hep-ph/0504202]; J. Jaeckel, E. Masso, J. Redondo, A. Ringwald and F. Takahashi, Phys. Rev. D **75**, 013004 (2007) [arXiv:hep-ph/0610203].
* (23) for a review, see W. Dittrich and H. Gies, Springer Tracts Mod. Phys. **166**, 1 (2000).
* (24) L. Maiani, R. Petronzio and E. Zavattini, Phys. Lett. B **175**, 359 (1986); G. Raffelt and L. Stodolsky, Phys. Rev. D **37**, 1237 (1988).
* (25) H. Gies, J. Jaeckel and A. Ringwald, Phys. Rev. Lett. **97**, 140402 (2006) [arXiv:hep-ph/0607118].
* (26) M. Ahlers, H. Gies, J. Jaeckel, J. Redondo and A. Ringwald, arXiv:0706.2836 [hep-ph].
* (27) P. Sikivie, Phys. Rev. Lett. **51** (1983) 1415 [Erratum-ibid. **52** (1984) 695]; A. A. Anselm, Yad. Fiz. **42** (1985) 1480; M. Gasperini, Phys. Rev. Lett. **59** (1987) 396; K. Van Bibber, N. R. Dagdeviren, S. E. Koonin, A. Kerman, and H. N. Nelson, Phys. Rev. Lett. **59**, 759 (1987).
* (28) M. Ahlers, H. Gies, J. Jaeckel and A. Ringwald, Phys. Rev. D **75**, 035011 (2007) [arXiv:hep-ph/0612098].
* (29) H. Gies, J. Jaeckel and A. Ringwald, Europhys. Lett. **76**, 794 (2006) [arXiv:hep-ph/0608238].
* (30) M. Ahlers, A. Lindner, A. Ringwald, L. Schrempp, and C. Weniger, arXiv:0710.1555 [hep-ph].
* (31) B. Ratra and P. J. E. Peebles, Phys. Rev. D **37**, 3406 (1988).
* (32) R. Cameron _et al._ [BFRT Collaboration], Phys. Rev. D **47** (1993) 3707.
* (33) S. Andriamonje _et al._ [CAST Collaboration], JCAP **0704**, 010 (2007) [arXiv:hep-ex/0702006].
* (34) see [http://www.gsi.de/forschung/phelix/](http://www.gsi.de/forschung/phelix/)
* (35) E. Masso and J. Redondo, Phys. Rev. Lett. **97**, 151802 (2006) [arXiv:hep-ph/0606163].
* (36) R. N. Mohapatra and S. Nasri, Phys. Rev. Lett. **98** (2007) 050402 [arXiv:hep-ph/0610068].
* (37) P. Jain and S. Stokes, arXiv:hep-ph/0611006.
* (38) R. Foot and A. Kobakhidze, Phys. Lett. B **650** (2007) 46 [arXiv:hep-ph/0702125].
* (39) I. Antoniadis, A. Boyarsky and O. Ruchayskiy, arXiv:0708.3001 [hep-ph].
* (40) T. Heinl, B. Liesfeld, K. U. Amthor, H. Schwoerer, R. Sauerbrey and A. Wipf, Opt. Commun. **267**, 318 (2006) [arXiv:hep-ph/0601076].
* (41) A. Di Piazza, K. Z. Hatsagortsyan and C. H. Keitel, Phys. Rev. Lett. **97**, 083603 (2006) [arXiv:hep-ph/0602039].
* (42) M. Marklund and P. K. Shukla, Rev. Mod. Phys. **78**, 591 (2006) [arXiv:hep-ph/0602123].
* (43) A. Dupays, C. Rizzo, M. Roncadelli and G. F. Bignami, Phys. Rev. Lett. **95**, 211302 (2005) [arXiv:astro-ph/0510324].
* (44) T. Koivisto and D. F. Mota, Phys. Lett. B **644**, 104 (2007)
* (45) T. Koivisto and D. F. Mota, Phys. Rev. D **75**, 023518 (2007)
* (46) A. Mirizzi, G. G. Raffelt and P. D. Serpico, arXiv:0704.3044 [astro-ph].
* (47) A. De Angelis, O. Mansutti and M. Roncadelli, arXiv:0707.2695 [astro-ph].

We propose an _afterglow_ phenomenon as a unique trace of chameleon fields in optical experiments. The vacuum interaction of a laser pulse with a magnetic field can lead to a production and subsequent trapping of chameleons in the vacuum chamber, owing to their mass dependence on the ambient matter density. Magnetically induced re-conversion of the trapped chameleons into photons creates an afterglow over macroscopic timescales that can conveniently be searched for by current optical experiments. We show that the chameleon parameter range accessible to available laboratory technology is comparable to scales familiar from astrophysical stellar energy loss arguments. We analyze quantitatively the afterglow properties for various experimental scenarios and discuss the role of potential background and systematic effects. We conclude that afterglow searches represent an ideal tool to aim at the production and detection of cosmologically relevant scalar fields in the laboratory.

pacs: 14.80.-j, 12.20.Fv
Devedep Sarkar1, Alexandre Amblard1, Daniel E. Holz11, and Asantha Cooray1
1Department of Physics and Astronomy, University of California, Irvine, CA 92617
1
Footnote 1: affiliation: Department of Physics and Astronomy, University of California, Irvine, CA 92617
2
Footnote 2: affiliation: Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545
3
Footnote 3: affiliation: Kavli Institute for Cosmological Physics and Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 60637
## 1. Introduction
Since the discovery of the accelerating expansion of the universe (Riess et al., 1998; Perlmutter et al., 1999; Knop et al., 2003; Riess et al., 2004), the quest to understand the physics responsible for this acceleration has been one of the major challenges of cosmology. At present the dominant explanation entails an additional energy density to the universe called dark energy. The physics of dark energy is generally described in terms of its equation of state (EOS), the ratio of its pressure to density. In some models this quantity can vary with redshift. While there exist a variety of probes to explore the nature of dark energy, one of the most compelling entails the use of type Ia supernovae to map the Hubble diagram, and thereby directly determine the expansion history of the universe. With increasing sample sizes, SN distances can potentially provide multiple independent estimates of the EOS when binned in redshift (Huterer & Cooray, 2005; Sullivan, Cooray, & Holz, 2007; Sullivan et al., 2007). Several present and future SN surveys, such as SNLS (Astier et al., 2006) and the Joint Dark Energy Mission (JDEM), are aimed at constraining the value of the dark energy EOS to better than 10%.
Although SNe have been shown to be good standardizable candles, the distance estimate to a given SN is degraded due to gravitational lensing of its flux (Frieman, 1997; Wambsganss et al., 1997; Holz & Wald, 1998). The lensing becomes more prominent as we observe SNe out to higher redshift, with the extra dispersion induced by lensing becoming comparable to the intrinsic dispersion (of \\(\\sim 0.1\\) magnitudes) at \\(z\\gtrsim 1.2\\)(Holz & Linder, 2005). In addition to this dispersion, which leads to an increase in the error associated with distance estimate to each individual supernova, lensing also correlates distance estimates of nearby SNe on the sky, since the lines-of-sight pass through correlated foreground large-scale structure (Cooray, Huterer, & Holz, 2006; Hui & Greene, 2006). Although this correlation error cannot be statistically eliminated by increasing the number of SNe in the Hubble diagram, the errors can be controlled by conducting sufficiently wide-area (\\(>5\\) deg\\({}^{2}\\)) searches for SNe (in lieu of small-area pencil-beam surveys).
In addition to the statistical covariance of SN distance estimates, gravitational lensing also introduces systematic uncertainties in the Hubble diagram by introducing a non-Gaussian dispersion in the observed luminosities of distant SNe. Since lensing conserves the total number of photons, this systematic bias averages away if sufficiently large numbers of SNe per redshift bin are observed. In this case the average flux of the many magnified and demagnified SNe converges on the unlensed value (Holz & Linder, 2005). Nonetheless, even with thousands of SNe in the total sample it is possible that the averaging remains insufficient, given that one may need to bin the Hubble diagram at very small redshift intervals to improve sensitivity to the EOS. Furthermore, SNe at higher redshifts are more likely to be significantly lensed. If \"obvious\" outliers to the Hubble diagram are removed from the sample, this introduces an important bias in cosmological parameter determination, and can lead to systematic errors in the determination of the dark energy EOS.
In this paper we quantify the bias introduced in the estimation of the dark energy EOS due to weak lensing of supernova flux. We consider the effects due to the non-Gaussian nature of the lensing magnification distributions (Wang, Holz, & Munshi, 2002), performing Monte-Carlo simulations by creating mock datasets for future JDEM-like surveys. The paper is organized as follows: In § 2.1 we discuss our parameterization of the dark energy EOS, § 2.2 discusses gravitational lensing, and § 3 is an in-depth description of our methodology. We present our results in § 4.
## 2. Background
For a SN with intrinsic luminosity \\(\\mathcal{L}\\), the distance modulus is given by
\\[m-M=-2.5\\log_{10}\\left[\\frac{\\mathcal{L}/4\\pi{d_{L}}^{2}}{\\mathcal{L}/4\\pi(10{ \\rm pc})^{2}}\\right]=5\\log_{10}\\left(\\frac{d_{L}}{{\\rm Mpc}}\\right)+25, \\tag{1}\\]
where, in the framework of FRW cosmologies, the luminosity distance \(d_{L}\) is a function of the cosmological parameters and the redshift. We consider the two-parameter, time-varying dark energy EOS (Chevallier & Polarski, 2001; Linder, 2003) adopted by the Dark Energy Task Force (DETF) (Albrecht et al., 2006). We take a flat universe described by the cosmological parameters \(\Theta=\{h,\Omega_{m},w_{0},w_{a}\}\), with \(h\) the dimensionless Hubble constant, \(\Omega_{m}\) the dimensionless matter density, and \(w_{0}\) and \(w_{a}\) the parameters describing the dark energy EOS \(w(a)=w_{0}+w_{a}(1-a)\). One can then express \(d_{L}\) as
\\[d_{L}(z)=(1+z)\\int_{0}^{z}\\frac{cdz^{\\prime}}{H(z^{\\prime})}, \\tag{2}\\]
where \\(H(z)\\) is the Hubble parameter at redshift \\(z\\):
\\[H(z)=H_{0}\\left[\\Omega_{m}(1+z)^{3}+(1-\\Omega_{m})(1+z)^{3(1+w_{0}+w_{a})}e^{ -\\frac{3w_{a}z}{(1+z)}}\\right]^{1/2}. \\tag{3}\\]
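For illustration, Eqs. (1)-(3) are straightforward to evaluate numerically; the following sketch (our own, not from the paper) computes the distance modulus for the CPL parameterization:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def hubble(z, h=0.732, om=0.3, w0=-1.0, wa=0.0):
    """H(z) in km/s/Mpc for a flat universe with the CPL equation of state."""
    de = (1.0 - om) * (1.0 + z)**(3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * z / (1.0 + z))
    return 100.0 * h * np.sqrt(om * (1.0 + z)**3 + de)

def lum_dist(z, **params):
    """Luminosity distance in Mpc, Eq. (2)."""
    integral, _ = quad(lambda zp: C_KM_S / hubble(zp, **params), 0.0, z)
    return (1.0 + z) * integral

def dist_mod(z, **params):
    """Distance modulus m - M = 5 log10(d_L/Mpc) + 25, Eq. (1)."""
    return 5.0 * np.log10(lum_dist(z, **params)) + 25.0

# Example: fiducial model of the paper at z = 1
print(dist_mod(1.0))   # roughly 44.0 mag
```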
### Weak Lensing of Supernova Flux
Light from a distant SN passes through the intervening large-scale structure of the universe. This causes a modification of the observed flux due to gravitational lensing:
\\[\\mathcal{F}^{\\rm obs,\\rm lensed}(z,\\hat{\\bf n})=\\mu(z,\\hat{\\bf n})\\,\\mathcal{F }^{\\rm obs,true}, \\tag{4}\\]
where \\(\\mu(z,\\hat{\\bf n})\\) is the lensing induced magnification at redshift \\(z\\) in the direction of the SN on the sky, \\(\\hat{\\bf n}\\), and \\(\\mathcal{F}^{\\rm obs,true}\\) is the flux that would have been observed in the absence of lensing. The magnification \\(\\mu\\) can be either greater than (magnified) or less than (demagnified) one, with \\(\\mu=1\\) corresponding to the (unlensed) pure FRW scenario.
This magnification (or demagnification) of the observed flux leads to an error in the distance modulus, which can be expressed as
\\[\\left[\\Delta(m-M)\\right]_{\\rm lensing}=-2.5\\log_{10}\\left(\\frac{\\mathcal{F}^{ \\rm obs,\\rm lensed}}{\\mathcal{F}^{\\rm obs,true}}\\right)=-2.5\\log_{10}\\left(\\mu \\right), \\tag{5}\\]
where we have written \\(\\mu(z,\\hat{\\bf n})\\) as \\(\\mu\\) for brevity.
Even in the absence of lensing the measured distance modulus to a Type Ia SN suffers an intrinsic error, since the supernovae are not perfect standard candles. This error is typically taken to be a Gaussian distribution in either flux or magnitude, with a redshift-independent standard deviation of \\(\\sigma_{\\rm int}\\). On the other hand, as lensing intrinsically depends on the optical depth, which increases with redshift, the scatter (or variance) due to lensing also increases with redshift (Holz & Linder, 2005). The probability distribution function (PDF) for lensing magnification, \\(P(\\mu,z)\\), of a background source at redshift \\(z\\), depends on both the underlying cosmology and on the nature of the foreground structure responsible for lensing. We make use of an analytic form for \\(P(\\mu)\\) that was calibrated with numerical simulations (Wang, Holz, & Munshi, 2002), which is valid for events that are not strongly lensed (\\(\\mu\\) is less than a few).
The generic gravitational lensing PDF peaks at a demagnified value, with a long tail to high magnification (Wang, Holz, & Munshi, 2002; Sereno, Piedipalumbo, & Sazhin, 2002). However, since the total number of photons is conserved by lensing, all lensing distributions preserve the mean: \\(\\langle\\mu\\rangle=\\int\\mu P(\\mu)\\,d\\mu=1\\). To ensure that this criterion is met (to an accuracy of one part in a million), we re-normalize the magnification PDF in a slightly different way than what was originally suggested in Wang, Holz, & Munshi (2002). \\(P(\\mu)\\) is related, at a given redshift, to the probability distribution for the reduced convergence, \\(\\eta\\), through \\(P(\\mu)=P(\\eta)/(2|\\kappa_{\\rm min}|)\\) (see Wang, Holz, & Munshi (2002); Eq. (6)), where \\(\\kappa_{\\rm min}\\) is the minimum convergence. The free parameter in normalizing \\(P(\\eta)\\), and hence, \\(P(\\mu)\\), is \\(\\eta_{\\rm max}\\), the maximum value for the reduced convergence. We determine the unique value of \\(\\eta_{max}\\) which yields both \\(\\int P(\\mu)\\,d\\mu=1\\) and \\(\\langle\\mu\\rangle=\\int\\mu P(\\mu)\\,d\\mu=1\\) (to better than \\(10^{-6}\\)). Since the mode is not equal to the mean, the distribution is manifestly non-Gaussian. The majority of distant supernovae are slightly demagnified, and the inferred luminosity distances are skewed to higher values. The SN redshifts, on the other hand, remain unaffected, since gravitational lensing is achromatic. Although lensing magnification is insignificant at low redshifts, at redshifts above one both weak and strong lensing become more prominent. For small SN samples the overall bias is towards a larger cosmological acceleration (e.g., too large a value of \\(\\Omega_{\\Lambda}\\) for \\(\\Lambda\\)CDM models). The lensing also adds a systematic bias to estimates of the dark energy EOS, shifting to a more negative value of the EOS (e.g., less than \\(-1.0\\) for a universe with a cosmological constant). Along with a large fraction of demagnified SNe, lensing produces a small number of highly magnified sources. If this high-magnification tail can be well sampled, the average of the lensing distribution is expected to converge on the true mean, and the bias can be eliminated (Wang, 2000; Holz & Linder, 2005).
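To make the renormalization step concrete, the toy sketch below tunes the upper cutoff of a skewed magnification distribution until the truncated, normalized PDF has unit mean; the lognormal shape is purely illustrative, standing in for the calibrated form of Wang, Holz, & Munshi (2002):

```python
import numpy as np
from scipy.optimize import brentq

MU_MIN = 0.85  # hypothetical empty-beam (minimum) magnification

def trap(y, x):
    # simple trapezoidal integration
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def pdf_unnorm(mu, sigma=0.5):
    # skewed toy shape peaking below mu = 1, with a high-magnification tail
    x = np.clip(mu - MU_MIN, 1e-12, None)
    return np.exp(-(np.log(x) + 2.0)**2 / (2.0 * sigma**2)) / x

def truncated_mean_minus_one(mu_max):
    mu = np.linspace(MU_MIN + 1e-6, mu_max, 20000)
    p = pdf_unnorm(mu)
    return trap(mu * p, mu) / trap(p, mu) - 1.0

# Solve for the cutoff that enforces <mu> = 1 on the normalized distribution.
mu_max = brentq(truncated_mean_minus_one, 1.01, 50.0)
print(f"cutoff mu_max = {mu_max:.3f} enforces <mu> = 1")
```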
## 3. Methodology
We generate mock SN samples, and quantify the bias in the estimation of the dark energy EOS due to gravitational lensing. We pick as our fiducial cosmology a flat universe with \(\Omega_{m}=0.3\), \(w_{0}=-1\), \(w_{a}=0\), and \(h=0.732\) with a Gaussian prior of \(\sigma(h)=0.032\). We consider a range of possible upcoming surveys, and Monte-Carlo SN samples of varying sizes distributed uniformly in the redshift range \(0.1<z<1.7\). If the SN intrinsic errors are Gaussian in flux, then it makes sense to analyze the data via flux averaging. In this case both the intrinsic errors and the lensing errors will average away for sufficiently large SN samples. However, if the intrinsic errors are Gaussian in magnitude, then a magnitude analysis may be more appropriate, although this will lead to a bias due to lensing (which is only bias-free for flux analysis). In what follows we perform both analyses.

Figure 1.— _Top panel_: The Hubble diagram with a linear redshift scale showing the distance modulus with redshift of a subset of a 2000 SN mock dataset. _Bottom panel_: The residuals of the data relative to the fiducial model of a flat universe with \(\Omega_{m}=0.3\), \(w_{0}=-1\), \(w_{a}=0\), and \(h=0.732\).
### Magnitude Analysis
For each SN at a redshift \\(z\\) we draw a value of \\(\\mu\\) at random from the re-normalized lensing PDF, \\(P(\\mu,z)\\), obtained from Wang, Holz, & Munshi (2002), and use Eq. (5) to evaluate \\([\\Delta(m-M)]_{\\rm lensing}\\). To model the intrinsic scatter we draw a number at random from a Gaussian distribution of zero mean and standard deviation given by \\(\\sigma_{\\rm int}=0.1\\) mag (Astier et al. (2006) find a scatter at the level of 0.13 mag). We combine the intrinsic and lensing noise with the \"true\" underlying distance modulus to get the observed distance modulus:
\\[(m-M)^{\\rm data}(z)=(m-M)^{\\rm fid}(z)+[\\Delta(m-M)]_{\\rm int}+[\\Delta(m-M)]_ {\\rm lensing}. \\tag{6}\\]
By repeating the above procedure for each of the SNe in a sample, we generate a mock Hubble diagram. We create mock surveys with different numbers of SNe: \\(N=300\\), 2000, & 10,000; and then for each fixed number of SNe we generate at least 10,000 independent mock samples to properly sample the distributions. In Figure 1 we show an example of one such dataset with a SNe sample size of 2000. The upper panel shows the Hubble diagram and the lower panel shows the residuals of the distance moduli relative to that of the fiducial cosmological model.
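A minimal sketch of this mock-generation step is given below; the magnification draw uses a unit-mean lognormal with a toy redshift scaling of the lensing scatter as a stand-in for the actual \(P(\mu,z)\), and it reuses `dist_mod` from the earlier luminosity-distance sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_magnification(z):
    """Stand-in for a draw from the lensing PDF P(mu, z): a unit-mean lognormal
    whose scatter grows with redshift (toy scaling, not the calibrated form)."""
    sigma = 0.05 * np.asarray(z)
    return rng.lognormal(-0.5 * sigma**2, sigma)

def mock_hubble_diagram(n_sn=2000, sigma_int=0.1):
    z = rng.uniform(0.1, 1.7, n_sn)
    dm_fid = np.array([dist_mod(zi) for zi in z])     # fiducial distance moduli
    dm_int = rng.normal(0.0, sigma_int, n_sn)         # intrinsic scatter (mag)
    dm_lens = -2.5 * np.log10(draw_magnification(z))  # lensing shift, Eq. (5)
    return z, dm_fid + dm_int + dm_lens               # Eq. (6)
```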
For each mock Hubble diagram we compute the likelihood of a parameter set \\(\\mathbf{p}\\) by evaluating the \\(\\chi^{2}\\)-statistic:
\\[\\chi^{2}(\\mathbf{p})=\\sum_{i=1}^{N}\\frac{[(m-M)^{\\rm data}(z_{i})-(m-M)^{\\rm fid }(z_{i})]^{2}}{\\sigma^{2}(z_{i})}\\,, \\tag{7}\\]
where \\(\\sigma(z_{i})\\) is the error bar for the distance modulus of the \\(i\\)th supernova, taken to be 0.1 mag throughout. The projected bias in the estimation of \\(w_{0}\\) can now be computed by marginalizing over \\(w_{a}\\) and \\(h\\) (keeping \\(\\Omega_{m}\\) fixed).
### Flux-Averaging Analysis
Wang (2000) argues that averaging the flux, as opposed to the magnitudes, of observed SNe naturally removes the lensing bias, since the mean of the lensing distributions is equal to one in flux, but not in magnitude. We thus also analyze the mock data sets in flux. The flux averaging is done such that for each supernova we first use Eq. (2) to calculate the value of \(d_{L}(z)\), and then evaluate the fiducial flux using
\\[\\mathcal{F}^{\\rm fid}(z)=\\frac{\\mathcal{L}}{4\\pi[d_{L}^{\\rm fid}(z)]^{2}}\\,, \\tag{8}\\]
where we can take any _a priori_ fixed value of \\(\\mathcal{L}\\). We take the lensing PDF, \\(P(\\mu)\\), and convolve it with a Gaussian distribution having zero mean and dispersion 0.1 (this corresponds to \\(\\sim 5\\%\\) error in distance estimates). We then randomly draw from the convolved distribution, and multiply this number by \\(\\mathcal{F}^{\\rm fid}(z)\\) to get the observed flux for a given supernova. We repeat this process for each of the \\(N\\) SNe to obtain a mock data set. We then follow the flux averaging recipe of Wang & Mukherjee (2004) and perform the likelihood analysis to quantify the bias. As before we perform the above simulation a large (\\(\\sim\\) 10,000) number of times to get a distribution for the bias in parameter estimation.
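A simplified flux-averaging step, expressing each SN's observed flux as a ratio to its fiducial flux (the full Wang & Mukherjee 2004 recipe differs in detail), is sketched below, reusing the functions defined earlier:

```python
import numpy as np

def flux_average(z, flux_ratio, n_bins=20):
    """Average observed-to-fiducial flux ratios in redshift bins and convert
    each bin's mean back to an effective distance modulus (simplified)."""
    edges = np.linspace(0.1, 1.7, n_bins + 1)
    z_eff, dm_eff = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (z >= lo) & (z < hi)
        if sel.any():
            zc = z[sel].mean()
            z_eff.append(zc)
            dm_eff.append(dist_mod(zc) - 2.5 * np.log10(flux_ratio[sel].mean()))
    return np.array(z_eff), np.array(dm_eff)

# Observed flux ratio: lensing draw plus Gaussian intrinsic noise (in flux),
# mimicking the convolution described in the text.
z, _ = mock_hubble_diagram(2000)
ratio = draw_magnification(z) + rng.normal(0.0, 0.1, z.size)
z_avg, dm_avg = flux_average(z, ratio)
```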
We further assume that our SNe samples are complete in the sense that there is no Malmquist bias or any bias due to detection effects over the redshift range considered. This is a reasonable assumption since we consider SNe out to a redshift of 1.7 and such a complete catalog is expected from the near-infrared imaging capabilities of the JDEM program.
## 4. Results and Discussion
We first present our results for the magnitude case, as described in § 3.1. The left panel of Figure 2 shows histograms of the best-fit values of \(w_{0}\) from the likelihood analysis, after marginalizing over \(w_{a}\) and \(h\). The empty histogram, which peaks at -1.009 (marked with a vertical dot-dashed line), is for the model with 300 SNe. The shaded histogram, representing the 2,000 SN case, peaks at -1.003 (vertical dashed line), while the hatched histogram representing the 10,000 SN case has its peak at -1.002 (vertical solid line). These distributions have 1\(\sigma\) widths of 0.016, 0.006, and 0.003, respectively. This scatter is primarily due to the intrinsic uncertainty associated with absolute calibration, and is not dominated by lensing. Without the inclusion of lensing, however, the distributions peak at exactly -1, and show no bias. The shifted mode gives us a rough idea of the bias to be expected, on average, due to lensing. We find that 68% of the time a random sample of 300 SNe will have an estimated value for \(w_{0}\) within 3% of its fiducial value, and this drops to 0.5% when a sample size of 10,000 SNe is considered.

Figure 2.— _Left panel:_ Histograms showing the distribution (after 10,000 realizations) of the values obtained for \(w_{0}\) using the magnitude analysis (§ 3.1), after marginalizing over \(w_{a}\) and \(h\), for the different sample sizes. The un-filled histogram depicts the case of the 300 SN sample, the shaded histogram shows the distribution for a sample size of 2,000, and the hatched histogram shows the case for the 10,000 SN sample. The vertical lines at -1.0095 (dot-dashed), -1.003 (dashed), and -1.002 (solid) show the average \(w_{0}\) values for the 300 SN, 2,000 SN, and 10,000 SN cases, respectively. The width of the distributions is due primarily to the intrinsic noise of the SNe, while the shifted mode is due to finite sampling of the gravitational lensing PDF. _Right panel:_ Histograms showing the distribution (after 10,000 realizations) of the values obtained for \(w_{0}\) using the flux-averaging technique (§ 3.2), after marginalizing over \(w_{a}\) and \(h\), for three different sample sizes. The un-filled histogram depicts the case of the 300 SN sample, the shaded histogram shows the distribution for a sample size of 2,000, and the hatched histogram shows the case for the 10,000 SN sample. The vertical lines at -1.007 (dot-dashed), -1.003 (dashed), and -1.001 (solid) show the \(w_{0}\) values that were obtained after averaging over 10,000 realizations for the 300, 2,000, and 10,000 sample sizes, respectively.
The right panel of Figure 2 shows the same distributions as the left panel, but this time using the flux-averaging technique instead of averaging over magnitudes. The empty, shaded, and hatched histograms peak at -1.007, -1.003, and -1.001, respectively, showing the mean bias for the 300, 2,000 and 10,000 SN cases (marked with dot-dashed, dashed, and solid vertical lines). With flux-averaging, we expect that 68% of the time a random sample of 300 SNe will yield a value of \\(w_{0}\\) within 2.5% of the fiducial value, and within 0.5% for a sample of 10,000 SNe.
The 1\\(\\sigma\\) parameter uncertainty on \\(w_{0}\\) ranges from the 20% level (for 300 SNe) to less than 5% (for 10,000 SNe), dwarfing the bias due to lensing. Thus, we need not be concerned about lensing degradation of dark energy parameter estimation for future _JDEM_-like surveys. We note, however, that our estimated bias on the EOS is larger than the lensing bias of \\(w<0.001\\) quoted in Table 7 of Wood-Vasey et al. (2007). This is not surprising, given their use of the simple Gaussian approximation to lensing from Holz & Linder (2005), which is less effective for low statistics. Nonetheless, we agree with their conclusion that lensing is negligible. A similar conclusion was also reached by Martel & Premadi (2007) who used a compilation of 230 Type Ia SNe (Tonry et al., 2003) in the redshift range \\(0<z<1.8\\) to show that the lensing errors are small compared to the intrinsic SNe errors.
We now discuss the bias which arises if anomalous SNe are removed from the sample. Gravitational lensing causes some SNe to be highly magnified, and it is conceivable that these \"obvious\" outliers are subsequently removed from the analysis. In this case the mean of the sample will be shifted away from the true underlying Hubble diagram, and a bias will be introduced in the best-fit parameters. To quantify this effect, we remove SNe which deviate from the expected mean luminosity-distance relation in the Hubble diagram by more than 25% (corresponding roughly to a 2.5\\(\\sigma\\) outlier). The SN scatter is a result of the convolution of the intrinsic error (Gaussian in flux of width 0.1) and the lensing PDF, and the outlier cutoff leads to a removal of \\(\\sim 50\\) SNe out of the 2,000 SNe. These outliers are preferentially magnified, due to the strong lensing tail of the magnification distributions. The demagnification tail is cut off by the empty-beam lensing limit, and therefore isn't as prominent. The hatched histogram in Figure 3 shows the distribution when events with convolved error greater than 2.5\\(\\sigma\\) are removed. The vertical dot-dashed line at -1.0075 shows the average value of \\(w_{0}\\) obtained in this case, representing a bias in the estimate of \\(w_{0}\\) roughly three times larger than when the full 2,000 SNe are analyzed (shown by shaded histogram). This bias is a result of cutting off the high magnification tail of the distribution, and thus shifting the data towards a net dimming of observed SNe, leading to a more negative value of \\(w_{0}\\).
We also apply a cutoff at 3\\(\\sigma\\), in addition to the 2.5\\(\\sigma\\) discussed above. This results in a removal of \\(\\sim 20\\) SNe on average, for each 2,000 SN sample, and leads to a bias of \\(\\sim 0.6\\%\\). Any arbitrary cut on the (non-Gaussian) convolved (lensing + intrinsic) sample leads to a net bias in the distance relation, and even for large outliers and large SN samples, this can lead to percent-level bias in the best-fit values for \\(w_{0}\\).
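The direction of this selection bias is easy to demonstrate with a toy calculation: for a skewed, unit-mean flux distribution, removing large outliers preferentially clips the high-magnification tail and drags the surviving mean below one (the skewness below is exaggerated relative to realistic lensing to make the effect obvious):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
sigma_lens = 0.2   # exaggerated toy skewness, for illustration only
mu = rng.lognormal(-0.5 * sigma_lens**2, sigma_lens, n)  # <mu> = 1 by construction
ratio = mu + rng.normal(0.0, 0.1, n)                     # add intrinsic flux noise

print("mean flux ratio, full sample:", ratio.mean())     # ~1.000 (no bias)
kept = ratio[np.abs(ratio - 1.0) < 0.25]                 # drop >25% outliers
print("mean flux ratio, after cut: ", kept.mean())       # < 1   (net dimming)
```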
To summarize, we have quantified the effect of weak gravitational lensing on the estimation of dark energy EOS from type Ia supernova observations. With generated mock samples of 2,000 SNe distributed uniformly in redshift up to z\\(\\sim\\)1.7 (as expected in future surveys like _JDEM_), we have shown that the bias in parameter estimation due to lensing is less than 1% (which is well within the 1\\(\\sigma\\) uncertainty expected for these missions). Analyzing the data in flux or magnitude does not alter this result. If lensed supernovae that are highly magnified (such that the convolved error is more than 25% from the underlying Hubble diagram) are systematically removed from the sample, we find that the bias increases by a factor of almost three. Thus, so long as all observed SNe are used in the Hubble diagram, including ones that are highly magnified, the bias due to lensing in the estimate of the dark energy EOS will be significantly less than the 1\\(\\sigma\\) uncertainty. Even for a post-_JDEM_ program with 10,000 SNe, lensing bias can be safely ignored.
We thank Yun Wang for useful discussions. AC acknowledges support from NSF CAREER AST-0645427. DEH acknowledges a Richard P. Feynman Fellowship from Los Alamos National Laboratory. DS, AC, and DEH are partially supported by the DOE at LANL and UC Irvine through IGPP Grant Astro-1603-07. AA acknowledges a McCue Fellowship at UC Irvine.
Figure 3.— Histograms showing the distribution of the values obtained for \\(w_{0}\\) after marginalizing over \\(w_{a}\\) and \\(h\\) for the 2,000 SN case (using the flux-averaging technique). The shaded histogram assumes that the full sample of 2,000 SNe is used for parameter estimation. The hatched histogram shows the shift when outliers (SNe that are shifted above or below the Hubble diagram by more than 25% on either side) are removed from the sample. The bias in the distribution is due to the removal of highly-magnified lensing events from the sample.
## References
* Aldering et al. (2004) Aldering, G. et al. 2004, PASP, submitted (astro-ph/0405232)
* Astier et al. (2006) Astier, P. et al., 2006, A & A, 447, 31
* Chevallier & Polarski (2001) Chevallier, M. & Polarski, D. 2001, Int. J. Mod. Phys. D, 10, 213
* Cooray et al. (2006) Cooray, A., Huterer, D., Holz, D. 2006, PRL, 96, 021301
* Frieman (1997) Frieman, J. A. 1997, Comments Astrophys., 18, 323
* Albrecht et al. (2006) Albrecht, A. et al. 2006, arXiv:astro-ph/0609591
* Holz & Linder (2005) Holz, D. E. & Linder, E. V. 2005, ApJ, 631, 678
* Holz & Wald (1998) Holz, D. E. & Wald, R. M. 1998, PRD, 58, 063501
* Hui & Greene (2006) Hui, L. & Greene, P. B. 2006, PRD, 73, 123526
* Huterer & Cooray (2005) Huterer, D. & Cooray, A. 2005, PRD, 71, 023506
* Knop et al. (2003) Knop, R. A. et al. 2003, ApJ, 598, 102
* Linder (2003) Linder, E. 2003, PRL, 90, 091301
* Martel & Premadi (2007) Martel, H. & Premadi, P. 2007, arXiv:0710.5452
* Perlmutter et al. (1999) Perlmutter, S., et al. 1999, ApJ, 517, 565
* Riess et al. (1998) Riess, A. G. et al. 1998, AJ, 116, 1009
* Riess et al. (2004) Riess, A. G. et al. 2004, ApJ, 607, 665
* Sereno et al. (2002) Sereno, M., Piedipalumbo, E., Sazhin, M.V. 2002, MNRAS, 335, 1061
* Sullivan et al. (2007) Sullivan, S., Cooray, A., & Holz, D. E. 2007, JCAP, 09, 004
* Sarkar et al. (2007) Sarkar, D., Sullivan, S., Joudaki, S., Amblard, A., Holz, D. E., & Cooray, A. 2007, arXiv:astro-ph/0709.1150
* Tonry et al. (2003) Tonry, J. L. et al. 2003, ApJ, 594, 1
* Wambsganss et al. (1997) Wambsganss, J., Cen, R., Xu, G., & Ostriker, J. P. 1997,ApJ, 475, L81
* Wang (2000) Wang, Y. 2000, ApJ, 536, 531
* Wang et al. (2002) Wang, Y., Holz, D. E., Munshi, D. 2002, ApJ, 572, L15
* Wang & Mukherjee (2004) Wang, Y. & Mukherjee, P. 2004, ApJ, 606, 654
* Wood-Vasey et al. (2007) W. M. Wood-Vasey _et al._, arXiv:astro-ph/0701041.

The gravitational magnification and demagnification of Type Ia supernovae (SNe) modify their positions on the Hubble diagram, shifting the distance estimates from the underlying luminosity-distance relation. This can introduce a systematic uncertainty in the dark energy equation of state (EOS) estimated from SNe, although this systematic is expected to average away for sufficiently large data sets. Using mock SN samples over the redshift range \(0<z\leq 1.7\) we quantify the lensing bias. We find that the bias on the dark energy EOS is less than half a percent for large datasets (\(\gtrsim\)2,000 SNe). However, if highly magnified events (SNe deviating by more than \(2.5\sigma\)) are systematically removed from the analysis, the bias increases to \(\sim 0.8\%\). Given that the EOS parameters measured from such a sample have a \(1\sigma\) uncertainty of 10%, the systematic bias related to lensing in SN data out to \(z\sim 1.7\) can be safely ignored in future cosmological measurements.

Subject headings: cosmology: observations -- cosmology: theory -- supernova -- parameter estimation -- gravitational lensing
G. Michalek\\({}^{1}\\)
N. Gopalswamy\\({}^{2}\\)
S. Yashiro\\({}^{3}\\)
\\({}^{1}\\) Astronomical Observatory of Jagiellonian University, Cracow, Poland ([email protected]) \\({}^{2}\\) Solar System Exploration Division, NASA GSFC, Greenbelt, Maryland \\({}^{3}\\) Center for Solar and Space Weather, Catholic University of America
Received ; accepted
## 1 Introduction
Halo coronal mass ejections (HCMEs) originating from regions close to the central meridian of the Sun and directed toward Earth cause the most severe geomagnetic storms (Gopalswamy, Yashiro, and Akiyama, 2007 and references therein). Therefore, it is very important to determine the kinetic and geometric parameters describing HCMEs. One of the most important parameters is the space speed of CMEs, used as input to CME and shock arrival models. Unfortunately, coronagraphic observations from the Sun-Earth line are subject to projection effects (_e.g._ Kahler, 1992; Webb _et al._, 2000; St. Cyr _et al._, 2000; Gopalswamy, Lara, and Yashiro, 2003; Gopalswamy _et al._, 2001; Gopalswamy, 2004; Gopalswamy, Yashiro, and Akiyama, 2007; Yashiro _et al._, 2004). There have been several attempts to obtain space speeds and other parameters of CMEs (Zhao, Plunkett, and Liu (ZPL), 2002; Michalek, Gopalswamy, and Yashiro (MGY), 2003; Xie, Ofman and Lawrence (XOL), 2004). These techniques require special measurements in the Large Angle Spectroscopic Coronagraph (LASCO; Brueckner _et al._, 1995) field of view. These models assume that CMEs have cone shapes and propagate with constant speeds. Recently, Michalek (2006a) determined the space parameters of HCMEs with an asymmetric cone model using the projected speeds obtained at different position angles around the occulting disk. In the present study we use this technique to obtain the space characteristics of all front-sided HCMEs observed by LASCO from the beginning of 2001 until the end of 2002. Next, we use these parameters to obtain the travel times (TT) of CMEs to the vicinity of Earth and the magnitudes of the geomagnetic disturbances (\(D_{ST}\) index). The paper is organized as follows: Section 2 describes the method used to determine the space parameters presented here; Section 3 describes the data; in Section 4 we use the improved parameters for space weather forecasting; finally, conclusions are presented in Section 5.
## 2 Determination of the space parameters of HCMEs
Michalek (2006a) implemented a cone model to obtain the space parameters free from projection effects. The model assumes that the shape of HCMEs is an asymmetric cone and that they propagate with constant angular widths and speeds, at least in their early phase of propagation. We can determine the following HCME parameters: the longitude of the cone axis (\(\varphi\)), the latitude of the cone axis (\(\lambda\)), the angular width \(\alpha\) (cone angle =0.5\(\alpha\)) and the space velocity \(V_{space}\). CMEs often have a flux-rope geometry (_e.g._, Chen _et al._, 1997; Dere _et al._, 1999; Chen _et al._, 2000; Plunket _et al._, 2000; Forbes 2000; Krall _et al._, 2001; Chen and Krall 2003), which encouraged us to introduce the asymmetric cone model: the shape of CMEs is a cone, but the cone cross section is an ellipse. The eccentricity and orientation of the ellipse are two additional parameters of the model. They are not important for the space weather applications, so we neglect them in the present study. The following procedure was carried out to obtain the parameters characterizing HCMEs. First, using the height-time plots, the projected speeds at different position angles (every 15\({}^{\circ}\)) were determined. This allowed us to obtain 24 projected velocities for a given HCME, which are required for the fitting procedure. Second, using a numerical simulation to minimize the root mean square error, the cone model parameters were obtained. Details of the numerical simulation and the equations used can be found in Michalek (2006a). To save time, the simulation procedure was performed with constraints on the cone model parameters. The first assumption is that the space speed is not smaller than the maximal measured projected velocity for a given event. Next, using the Extreme-ultraviolet Imaging Telescope (EIT) (Delaboudiniere _et al._, 1995) and Solar Geophysical Data, we determined the associated eruptive phenomena (coronal dimmings, erupting filaments and H\(\alpha\) flares) which are coincident with the LASCO CME onset time. This allows us to estimate the source regions of HCMEs on the solar disk and recognize front-sided events. The second assumption on the cone model parameters is that the cone axis is localized in the quadrant of the Sun where the associated phenomena appear. To check these assumptions, for some events we performed the simulation for a wider range of the cone model parameters. In all cases, the best-fit cone model parameters fulfilled the above constraints. Our numerical procedure allows us to place the apex of the cone at the center of the Sun or on the solar surface. In the previous paper (Michalek, 2006a), we found that better fits were obtained when the apex of the cone is placed at the center of the Sun, which we use in this paper.
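The fitting machinery can be sketched as a bounded least-squares problem. Note that `projected_speed` below is a deliberately simple stand-in: the actual geometric relation between the cone parameters and the sky-plane speed at each position angle is the one derived in Michalek (2006a). The bounds implement the two constraints described above (space speed at least the maximum projected speed; cone axis restricted to the source quadrant, here taken to be 0-90 degrees as an example):

```python
import numpy as np
from scipy.optimize import least_squares

def projected_speed(pa_deg, v_space, width_deg, lam_deg, phi_deg):
    """Toy projection of the cone expansion speed onto the sky plane at
    position angle pa_deg; a placeholder for the Michalek (2006a) relations."""
    pa = np.radians(pa_deg)
    return v_space * (np.sin(np.radians(width_deg) / 2.0)
                      + 0.3 * np.cos(np.radians(lam_deg))
                      * np.abs(np.sin(pa - np.radians(phi_deg))))

def fit_cone(pa_deg, v_obs):
    resid = lambda p: projected_speed(pa_deg, *p) - v_obs
    v_max = v_obs.max()
    p0 = [1.2 * v_max, 60.0, 45.0, 45.0]
    # constraints: V_space >= max projected speed; axis in the source quadrant
    bounds = ([v_max, 10.0, 0.0, 0.0], [3.0 * v_max, 180.0, 90.0, 90.0])
    return least_squares(resid, p0, bounds=bounds)

pa = np.arange(0.0, 360.0, 15.0)   # 24 position angles, every 15 degrees
rng = np.random.default_rng(0)
v_obs = projected_speed(pa, 1400.0, 80.0, 30.0, 50.0) + rng.normal(0.0, 30.0, pa.size)
fit = fit_cone(pa, v_obs)
print("best-fit parameters:", fit.x)
print("r.m.s. error [km/s]:", np.sqrt(np.mean(fit.fun**2)))
```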
## 3 Data
The list of HCMEs studied in this paper is displayed in Table 1. We considered only front-sided full HCMEs during the period of time from the beginning of 2001 until the end of 2002. We selected this limited period of time to get a representative sample of HCMEs which could be used to test our new cone model. In the SOHO/LASCO catalog 115 HCMEs are listed, 70 of which were front-sided. One of them was too faint to perform the necessary measurements. For the remaining 69 events, height-time plots were obtained at different position angles (every \(15^{\circ}\)). The projected speeds from the height-time plots were then used for the fitting procedure to obtain the space parameters of HCMEs. Using data from the World Data Center ([http://swdcdb.kugi.kyoto-u.ac.jp](http://swdcdb.kugi.kyoto-u.ac.jp)), geomagnetic disturbances caused by these events were identified. In order to find a relationship between HCMEs and magnetic disturbances, a two-step procedure was performed. First, we found all geomagnetic disturbances, in the considered period of time (2001-2002), with \(D_{ST}\) index \(\leq-30nT\). This very high limit (\(-30nT\)) was chosen following Michalek _et al._ (2006b). Such \(D_{ST}\) values (\(-30nT\)) could occur whether or not a CME hits Earth. We assumed that the associated magnetic disturbance should start no later than 120 hours after the first appearance of a given event in the LASCO field of view and no sooner than the necessary travel time of a given CME to Earth calculated from the measured maximal projected velocity. We related a given disturbance to a HCME if they were within the specified time range. Unfortunately, we were not able to follow CMEs during their entire trip to Earth, so there is some ambiguity in associating the magnetic storms with CMEs. During high solar activity there is frequently more than one CME that could be associated with a given magnetic disturbance. In our list there are some magnetic disturbances associated with two different halo CMEs. If we consider all CMEs included in the SOHO/LASCO catalog (not only HCMEs), a number of magnetic storms with multiple candidate CMEs could be found. Further study of this association can be found in Gopalswamy, Yashiro, and Akiyama (2007).
20 events from our list were not geoeffective (\(D_{ST}>-30nT\)). These HCMEs were slow or originated closer to the solar limb. By examining solar wind plasma data (from the Solar Wind Experiment, Ogilvie _et al._, 1995) and interplanetary magnetic field data (from the Magnetic Field Investigation (Wind/MFI) instrument, Lepping _et al._, 1995), we identified interplanetary shocks driven by the respective interplanetary CMEs (ICMEs). Measuring the time when a HCME first appears in the LASCO field of view and the arrival time of the corresponding shock at Earth, the travel time (TT) can be determined (_e.g._ Manoharan _et al._, 2004). The results of our study are displayed in Table 1. The first two columns are from the SOHO/LASCO catalog and give the date of the first appearance in the LASCO field of view and the projected speeds (V). The width and space speeds (\(V_{space}\)) estimated from the cone model are shown in columns (3) and (4), respectively. In column (5) the r.m.s. error (in km s\({}^{-1}\)) for the best fits is given. The parameter \(\gamma\) and source locations are shown in columns (6) and (7), respectively. In column (8) the minimal values of the \(D_{ST}\) index for geomagnetic disturbances caused by HCMEs are presented. Finally, in column (9) the travel times (TT) of magnetic clouds to Earth are given.
## 4 Implications for space weather forecasting
For space weather forecasting it is crucial to predict, with good accuracy, the onsets (TT) and magnitudes (\(D_{ST}\)) of magnetic storms. In the next two subsections, we consider these issues using the determined space velocities.
### Predictions of onsets of geomagnetic disturbances
Figure 1 shows the scatter plot of the plane-of-the-sky speeds (from the SOHO/LASCO catalog) versus the travel times. Diamond symbols represent events originating from the western hemisphere and cross symbols represent events originating from the eastern hemisphere. The dot-dashed line is a polynomial fit to the data points (a third-degree polynomial function is used). The correlation coefficients are 0.68 for the western and 0.49 for the eastern events. The standard error in the determination of the travel time (TT) is \(\pm 16\) hours.
For comparison, we present in Figure 2 (left panel) a similar plot, but for the space speeds. The figure clearly shows that the space speeds are strongly correlated with the TT. Now the correlation coefficients are more significant: 0.71 for the western and 0.75 for the eastern events. The standard error in the determination of the travel time is only \(\pm 10\) hours. In Figure 2 (right panel) we also show a similar plot, but for the space speeds projected in the Earth direction. To illustrate that our considerations are consistent with previous results, we compare them with the ESA model (the continuous line, Gopalswamy _et al._ 2005b). For these plots we used only the 49 geoeffective (\(D_{ST}\leq-30nT\)) events. For comparison, in Figure 2 (right panel) we added the three events (2000/07/14, 2003/10/28, 2003/10/29) of historical importance, represented by the dark diamonds.
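Fits and correlation statistics of this kind are straightforward to compute; a sketch (with hypothetical input arrays standing in for the Table 1 columns, split by hemisphere) is:

```python
import numpy as np

def tt_fit(v, tt, deg=3):
    """Polynomial fit of travel time versus speed, with the Pearson correlation
    coefficient and the r.m.s. scatter of the residuals (in hours)."""
    coeffs = np.polyfit(v, tt, deg)
    resid = tt - np.polyval(coeffs, v)
    r = np.corrcoef(v, tt)[0, 1]
    return coeffs, r, np.sqrt(np.mean(resid**2))

# Hypothetical usage, e.g. with the western-hemisphere subsample:
# coeffs, r, rms = tt_fit(v_space_west, tt_west)
```

The same machinery, with a first-degree polynomial, applies to the linear \(V*\gamma\) versus \(D_{ST}\) fits discussed in the next subsection.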
### Magnitudes of geomagnetic storms
Magnitudes of geomagnetic disturbances depend not only on the velocities of CMEs but also on the location of the source region on the solar disk (_e.g._ Gopalswamy, Yashiro, and Akiyama, 2007). For our cone model, positions of the source regions are characterized by the parameter \(\gamma\), which is the angular distance of the CME from the plane of the sky. This parameter determines which part of a HCME hits Earth. Events with small \(\gamma\) strike Earth with their flanks while those with large \(\gamma\) hit Earth with their central parts. Figure 3 shows the scatter plot of the plane-of-the-sky speeds multiplied by \(\gamma\) versus the \(D_{ST}\) index. The parameter \(\gamma\) was determined from the location of the associated flares. There is a slight correlation between \(V*\gamma\) and \(D_{ST}\). The correlation coefficients are \(\sim\)0.49 for the western and \(\sim\)0.30 for the eastern events, respectively.
Figure 1: The scatter plot of the sky-plane speed versus the HCME travel time (TT). Diamond and cross symbols represent events originating from the western and eastern hemispheres, respectively. The dot-dashed line is a polynomial fit to all the data points.
For comparison, Figure 4 shows a similar plot, but for the space parameters. Now the parameters (\(V_{space}\), \(\gamma\)) were estimated from the model (see Michalek 2006a). From inspection of the figure it is clear that the correlation between \(V_{space}*\gamma\) and \(D_{ST}\) is more significant. The correlation coefficients are \(\sim\)0.85 for the western and \(\sim\)0.58 for the eastern events, respectively. It is clear that the space parameters, determined from the asymmetric cone model, could be very useful for space weather applications. The correlation coefficients are almost two times larger than those obtained from the projected speeds. For these plots (Figure 3 and Figure 4), we used all HCMEs from Table 1, even the non-geoeffective ones. These events generate false alarms. Non-geoeffective HCMEs are slow (V\(<\)900 km s\({}^{-1}\)) or have source regions closer to the solar limb. The limb HCMEs appear as halo events only due to compression of pre-existing coronal plasma. The investigation confirms that the western events are more geoeffective than the eastern ones (_e.g._ Zhang _et al._, 2003). Our investigation suggests that the severest geomagnetic storms (with \(D_{ST}<-200nT\)) were generated by the western events, although east-hemisphere CMEs are capable of causing such storms as well (Gopalswamy _et al._, 2005a; Dal Lago _et al._, 2006).
Figure 2: The scatter plots of the space (left panel) and Earth directed velocities versus the HCME travel time (TT). Diamond and cross symbols represent events originating from the western and eastern hemispheres, respectively. The dot-dashed line (left panel) is a polynomial fit to all the data points. The continuous line (right panel) is the ESA model representation. The three additional dark diamonds (only on the right panel) show the HCMEs (2000/07/14, 2003/10/28 and 2003/10/29) of historical importance.
## 5 Summary
The prediction of the magnitudes and onsets of geomagnetic storms is crucial for space weather forecasting. Unfortunately, parameters characterizing HCMEs, due to the projection effect, are poorly correlated with geomagnetic disturbances. In the present paper, we applied the asymmetric cone model (Michalek, 2006a) to obtain space speeds and source locations of all front-side HCMEs observed by SOHO/LASCO in the period of time from the beginning of 2001 until the end of 2002. These parameters were used for the prediction of the strength (\(D_{ST}\)) and onsets (TT) of geomagnetic storms (Figure 2 and Figure 4). The results are very promising. Correlation coefficients between the space speeds and parameters characterizing geomagnetic storms (TT and \(D_{ST}\)) are very significant and almost two times larger in comparison with the results for the projected speeds. The standard error in the prediction of the travel time is equal to \(\sim\)10 hours, almost 60% lower than for the projected speeds. It is interesting to compare our results with other cone models. Xie et al. (2006) calculated absolute differences between
Figure 3: The scatter plot of the sky-plane speeds multiplied by \\(\\gamma\\) versus \\(D_{ST}\\) index. Diamond and cross symbols represent events originating from the western and eastern hemispheres, respectively. The solid line is a linear fit to all the data points, the dot-dashed line is a linear fit to the eastern events, and the dashed line is a linear fit to the western events.
predicted (using the ESA model, Gopalswamy _et al._, 2005b) and observed shock travel times for the previous cone models (XOL, MGY, ZPL). They found that the mean errors for those models were 6.5, 12.8 and 9.2 hours, respectively. In the present considerations, the mean difference between predicted (using the polynomial fit from Figure 2) and observed shock travel times is 8.4 hours, four hours less than in our previous cone model (MGY). Many authors have considered the relation between speeds and geoeffectiveness of CMEs (_e.g._ Tsurutani and Gonzales, 1998; Lindsay _et al._, 1999; Cane, Richardson, and St. Cyr, 2000; Wu and Lepping, 2002; Srivastava and Venkatakrishnan, 2002; Yurchyshyn, Wang, and Abramenko, 2004). Those studies demonstrated that the initial speeds of CMEs are correlated with the \(D_{ST}\) index, but because they applied the plane-of-the-sky speeds the correlation coefficients were not significant. Recently, Michalek _et al._ (2006b) showed that the correlation between the space speed of HCMEs and the \(D_{ST}\) index could be much more significant (the correlation coefficient was \(\sim\)0.60). In the present study we considered the correlation between \(V_{space}*\gamma\) and the \(D_{ST}\) index. We found that this correlation could be very significant (for the western events it is \(\sim\)0.85). This confirms previous results that the geoeffectiveness of
Figure 4: The scatter plot of \(V_{space}*\gamma\) versus the \(D_{ST}\) index. Diamond and cross symbols represent events originating from the western and eastern hemispheres, respectively. The solid line is a linear fit to all the data points, the dot-dashed line is a linear fit to the eastern events, and the dashed line is a linear fit to the western events.
HCMEs depends not only on the HCME speeds but also on the direction of their propagation (Moon _et al._, 2005; Michalek _et al._, 2006b; Gopalswamy, Yashiro, and Akiyama, 2007). The present study shows that the asymmetric cone model could be very useful for space weather forecasting. There are two important advantages of this method. First, using our asymmetric cone model can help predict space weather with good accuracy. Second, to predict space weather we need observational data from one instrument only (a coronagraph along the Sun-Earth line such as the LASCO coronagraph). The method also has some limitations. Faint HCMEs could not be used for this study because it is difficult to get the height-time plots around the entire occulting disk. Fortunately, such poor events are generally not geoeffective, so they are not of immediate concern (we missed only one front-side HCME). We consider a flat cone model (not an ice-cream cone model), so in some cases the measured projected velocities, and as a consequence the space speeds, could be slightly overestimated. We need to keep in mind that the magnetic field direction at the front of a magnetic cloud (or ICME) determines to a large degree the geoeffectiveness of events. Unfortunately, this in-situ measurement can only be recorded in Earth's vicinity and it cannot be used for space weather forecasting due to time constraints. When considering the asymmetric cone model, it is important to note that CMEs have more complicated 3D structures (Cremades and Bothmer, 2004) and more factors need to be determined to have a better understanding of what produces the geomagnetic storms at Earth.
## Acknowledgements
Work done by Grzegorz Michalek was supported by _MNiSW_ through the grant N203 023 31/3055 and NASA (NNG05GR03G).
## References
* Brueckner, G.E., Howard, R.A., Koomen, M.J., Korendyke, C.M., Michels, D.J., Moses, J.D., _et al._: 1995, _Solar Phys._ **162**, 357.
* Cane, H.V., Richardson, I.G., St. Cyr, O.C.: 2000, _Geophys. Res. Lett._ **27**, 3591.
* Chen, J., Krall, J.: 2003, _J. Geophys. Res._ **108**, 1410.
* Chen, J., Howard, R.A., Brueckner, G.E., Santoro, R., Krall, J., Paswaters, S.E., _et al._: 1997, _Astrophys. J._ **490**, L191.
* Chen, J., Santoro, R.A., Krall, J., Howard, R.A., Duffin, R., Moses, J.D., _et al._: 2000, _Astrophys. J._ **533**, 481.
* Cremades, H., Bothmer, V.: 2004, _Astron. Astrophys._ **422**, 307.
* Dal Lago, A., Gonzales, W.D., Balmaceda, L.A., Vieira, L.E., Echer, E., Guarnieri, F.L., _et al._: 2006, _J. Geophys. Res._ **111**, A07S14.
* Delaboudiniere, J.-P., Artzner, G.E., Brunaud, J., Gabriel, A.H., Hochedez, J.F., Millier, F., _et al._: 1995, _Solar Phys._ **162**, 291.
* Dere, K.P., Brueckner, G.E., Howard, R.A., Michels, D.J., Delaboudiniere, J.-P.: 1999, _Astrophys. J._ **516**, 465.
* Forbes, T.G.: 2000, _J. Geophys. Res._ **105**, 23165.
* Gopalswamy, N.: 2004, in _ASSL series_, ed. G. Poletto and S. Suess, Kluwer, Boston, p. 201.
* Gopalswamy, N., Lara, A., Yashiro, S.: 2003, _Astrophys. J._ **598**, L63.
* Gopalswamy, N., Yashiro, S., Akiyama, S.: 2007, _J. Geophys. Res._ **112**, A06112.
* Gopalswamy, N., Lara, A., Yashiro, S., Kaiser, M.L., Howard, R.A.: 2001, _J. Geophys. Res._ **106**, 29207.
* Gopalswamy, N., Yashiro, S., Michalek, G., Xie, H., Lepping, R.P., Howard, R.A.: 2005a, _Geophys. Res. Lett._ **32**, L12S09.
* Gopalswamy, N., Yashiro, S., Liu, Y., Michalek, G., Vourlidas, A., Kaiser, M.L., _et al._: 2005b, _J. Geophys. Res._ **110**, A09S15.
* Kahler, S.W.: 1992, _Annu. Rev. Astron. Astrophys._ **30**, 113.
* Krall, J., Chen, J., Duffin, R.T., Howard, R.A., Thompson, B.J.: 2001, _Astrophys. J._ **562**, 1045.
* Lepping, R.P., Acuna, M.H., Burlaga, L.F., Farrell, W.M., Slavin, J.A., Schatten, K.H., _et al._: 1995, _Space Science Rev._ **71**, 207.
* Lindsay, G.M., Luhmann, J.G., Russell, C.T., Gosling, J.T.: 1999, _J. Geophys. Res._ **104**, 12515.
* Manoharan, P.K., Gopalswamy, N., Yashiro, S., Lara, A., Michalek, G., Howard, R.A.: 2004, _J. Geophys. Res._ **109**, A06109.
* Michalek, G.: 2006a, _Solar Phys._ **237**, 101.
* Michalek, G., Gopalswamy, N., Yashiro, S.: 2003, _Astrophys. J._ **584**, 472.
* Michalek, G., Gopalswamy, N., Lara, A., Yashiro, S.: 2006b, _Space Weather J._ **4**, S10003.
* Moon, Y.-J., Cho, K.-S., Dryer, M., Kim, Y.-H., Bong, S., Chae, J., _et al._: 2005, _Astrophys. J._ **624**, 414.
* Ogilvie, K.W., Chornay, D.J., Fritzenreiter, R.J., Hunsaker, F., Keller, J., Lobell, J., _et al._: 1995, _Space Science Rev._ **71**, 55.
* Plunkett, S.P., Vourlidas, A., Simberova, S., Karlicky, M., Kotrc, P., Heinzel, P., _et al._: 2000, _Solar Phys._ **194**, 371.
* Srivastava, N., Venkatakrishnan, P.: 2002, _Geophys. Res. Lett._ **29**, 1287.
* St. Cyr, O.C., Howard, R.A., Sheeley, N.R., Plunkett, S.P., Michels, D.J., Paswaters, S.E., _et al._: 2000, _J. Geophys. Res._ **105**, 18169.
* Tsurutani, B.T., Gonzales, W.D.: 1998, in _Magnetic Storms_, Geophys. Monogr. Ser. **98**, ed. B.T. Tsurutani _et al._, AGU, Washington, D.C., p. 77.
* Webb, D.F., Cliver, E.W., Crooker, N.U., St. Cyr, O.C., Thompson, B.J.: 2000, _J. Geophys. Res._ **105**, 7491.
* Wu, C., Lepping, R.P.: 2002, _J. Geophys. Res._ **105**, 7491.
* Xie, H., Ofman, L., Lawrence, G.: 2004, _J. Geophys. Res._ **109**, A03109.
* Xie, H., Gopalswamy, N., Ofman, L., St. Cyr, O.C., Michalek, G., Lara, A., _et al._: 2006, _Space Weather J._ **4**, S10002.
* Yashiro, S., Gopalswamy, N., Michalek, G., St. Cyr, O.C., Plunkett, S.P., Rich, N.B., _et al._: 2004, _J. Geophys. Res._ **109**, A07105.
* Yurchyshyn, V., Wang, H., Abramenko, V.: 2004, _Space Weather_ **2**, S02001.
* Zhang, J., Dere, K.P., Howard, R.A., Bothmer, V.: 2003, _Astrophys. J._ **582**, 520.
## Abstract

Halo coronal mass ejections (HCMEs) are responsible for the most severe geomagnetic storms. A prediction of their geoeffectiveness and travel time to Earth's vicinity is crucial to forecast space weather. Unfortunately, coronagraphic observations are subject to projection effects and do not provide the true characteristics of CMEs. Recently, Michalek (2006, _Solar Phys._, **237**, 101) developed an asymmetric cone model to obtain the space speed, width and source location of HCMEs. We applied this technique to obtain the parameters of all front-side HCMEs observed by the SOHO/LASCO experiment during a period from the beginning of 2001 until the end of 2002 (solar cycle 23). These parameters were applied for the space weather forecast. Our study determined that the space speeds are strongly correlated with the travel times of HCMEs within Earth's vicinity and with the magnitudes related to geomagnetic disturbances.

Keywords: Sun: solar activity, Sun: coronal mass ejections, Sun: space weather
Klaus Klingmuller\\({}^{1,2}\\) and Holger Gies\\({}^{1}\\)
\\({}^{1}\\) Institute for Theoretical Physics, Heidelberg University, Philosophenweg 16, D-69120 Heidelberg, Germany \\({}^{2}\\) Institut fur Theoretische Physik E, RWTH Aachen, D-52056 Aachen, Germany [email protected], [email protected]
## 1 Introduction
The Casimir effect [1] is a paradigm for fluctuation-induced phenomena. Casimir forces between mesoscopic or even macroscopic objects which result from fluctuations of the ubiquitous radiation field or of the charge distribution on the objects inspire many branches of physics, ranging from mathematical to applied physics, see [2] for reviews. Since fluctuations usually occur on all momentum or length scales, they encode both local as well as global properties of a given system. In the case of the Casimir effect, the resulting force is influenced by localized properties of the involved objects such as surface roughness as well as by the global geometry of a given configuration. From a technical perspective, localized properties can often be taken into account by perturbative methods owing to a separation of scales: e.g., the corrugation wavelength and amplitude are usually much smaller than the object's separation distance. But global properties such as geometry or curvature dependencies generally require a full understanding of the fluctuation spectrum in a given configuration.
Recent years have witnessed the development of a variety of new field-theoretical methods for understanding and computing fluctuation phenomena. For a long time, only phenomenological recipes had been developed for more complex Casimir geometries, such as the proximity force approximation (PFA) [3]. For the special case of Casimir forces between compact objects, field-theoretic results in asymptotic limits had been worked out [4, 5]. A first field-theoretic study of the experimentally important configuration of a sphere above a plate [6] was performed in [7] based on a semiclassical expansion. A constrained functional-integral approach, as first introduced in [8] for the parallel-plate case, was further developed for corrugated surfaces in [9].
The sphere-plate as well as cylinder-plate configuration [10] was also used as a first example for the worldline approach to the Casimir effect [11], which is based on a mapping of field-theoretic fluctuation averages onto quantum-mechanical path integrals. This technique is rooted in the string-inspired approach to quantum field theory which is particularly powerful for the computation of amplitudes and effective actions in background fields [12]. For arbitrary backgrounds, the path integral over the worldlines representing the spacetime trajectories of the quantum fluctuations can straightforwardly be computed by Monte Carlo methods, as first demonstrated in [13]. The particular advantage of the approach arises from the fact that the computational algorithm can be formulated independently of the background. This makes the approach so valuable for Casimir problems, where a given surface geometry constitutes the background for the fluctuations. The resulting technical simplifications become particularly transparent for fluctuations obeying Dirichlet boundary conditions (b.c.), where high-precision computations have been performed, e.g., for the sphere-plate and cylinder-plate case [14, 15, 16].
A number of further first-principles approaches for arbitrary Casimir geometries have been developed and successfully applied in recent years. The constrained functional-integral approach has been extended to general dispersive forces between deformed media [17]. In particular, approaches based on scattering theory have proved most successful, starting with an exact study of the sphere-plate configuration with Dirichlet b.c. [18]. Scattering theory also led to a solution for the cylinder-plate case which, as a waveguide configuration, allowed for a study of the case with real electromagnetic b.c. [19]. These scattering tools have been further developed to facilitate an analytical computation of the important small-curvature expansion [20]. For configurations with compact objects, new scattering formulations have recently been found which separate the problem into the scattering off the single objects on the one hand and a propagation of the fluctuation information between the objects on the other hand [21, 22]; in particular, electromagnetic b.c. for real materials can conveniently be addressed with a new formulation which emphasizes the charge fluctuations on the surfaces [22]. Let us also mention the combination of scattering theory with a perturbative expansion that has recently allowed the study of geometry effects beyond the PFA [23]. Scattering theory is also a valuable tool for analyzing Casimir self-energies [24]. Finally, direct mode summation has also successfully been applied to nontrivial geometries [25].
In a real Casimir experiment, further properties such as finite conductivity, surface roughness and finite temperature have to be accounted for in addition to the geometry. Generically, these corrections do not factorize but reveal a nontrivial interplay. For instance, the interplay between dielectric material properties and finite temperature [26] still seems insufficiently understood and has led to a long-standing controversy [27, 28]. In the present work, we confine ourselves to the ideal Casimir effect where this controversy does not exist; but even in the ideal limit, the interplay between geometry and temperature can be substantial, as demonstrated below. The difference is not only of quantitative nature, but arises from the underlying spectral properties of the fluctuations, as first pointed out by Jaffe and Scardicchio [29]. In the familiar parallel-plate case, the nontrivial part of the spectrum transverse to the plates exhibits a gap of wave number \(k_{\rm gap}=\pi/a\), where \(a\) is the plate separation. At temperatures \(T\) smaller than this gap, the relevant fluctuation modes are hardly excited, implying a suppression of the thermal corrections; the leading small-temperature contribution to the parallel-plates Casimir force scales like \((aT)^{4}\). Geometries with a gap in the relevant part of the excitation spectrum are called closed. Following the same line of argument, we expect a suppression of thermal effects for all closed geometries.
By contrast, there is no reason for this strong suppression of thermal corrections in open geometries which do not have a gap in the fluctuation spectrum. The sphere-plate or cylinder-plate cases belong to this class. For open geometries, there are always Casimir-relevant modes in the fluctuation spectrum that can be excited at any small value of the temperature. Hence, we expect a much stronger dependence on the temperature, e.g., \\((aT)^{\\alpha}\\) with \\(0<\\alpha<4\\), and thus a potentially much stronger thermal contribution in the experimentally relevant parameter range \\(aT\\sim 0.01\\ldots 0.1\\).
So far, no first-principle calculation has been able to confirm this expectation, since generic asymptotic-limit considerations and standard approximations typically break down in the relevant parameter range, as already emphasized in [29]. For instance, the exact solution for the cylinder-plate case allowed for an explicit temperature study of the limit of small cylinder radius, \\(R\\ll a,\\beta\\), where \\(\\beta=1/T\\). In this limit, a log-modified \\((aT)^{4}\\) correction is obtained for the dominant part of the spectrum with Dirichlet b.c. [19]. This result suggests that the low-lying thermal excitations with long wavelength are not suppressed by a gap but by the smallness of the cylinder radius required by the asymptotic-limit considerations.
Also, the use of recipes such as the PFA can lead to a different scaling, such as a \\((aT)^{3}\\) law for the sphere-plate case [6, 30]. Whereas the PFA at zero temperature is justifiable in the low-curvature limit, \\(a\\ll R\\), [7, 11, 31, 15], PFA-deduced thermal corrections can be problematic: at small temperatures with thermal wavelength much larger than the minimal surface separation \\(aT\\ll 1\\), the thermal excitations can be more sensitive to the curvature radius than the vacuum fluctuations. Even worse, the PFA uses the parallel-plate formula, and hence a gapped spectrum, as an input and thus misses the important difference arising from an open geometry.
In this work, we present first evidence for a strong thermal correction to a Casimir force law for an open geometry using worldline numerics. As a paradigmatic example, we use the configuration of a semi-infinite half-plate perpendicularly above an infinite plate (cf. Fig. 1), imposing Dirichlet b.c. for the fluctuations of a real scalar field. This configuration belongs to a set of cases, revealing a universal force law determined by dimensionality, which has first been investigated in the context of Casimir edge effects [32]. Since the configuration has only one length scale which is the distance \\(a\\) between the edge of the half-plate and the infinite plate, the interplay of the gapless fluctuation spectrum with thermal excitations is not disturbed by other length scales, resulting in a clean thermal signature of an open geometry. Our worldline calculations yield a thermal correction obeying an \\((aT)^{3}\\) force law at low temperature. This implies a substantial increase of the thermal contribution compared to those for a closed geometry.
The fact that geometry and temperature exhibit such a nontrivial interplay in Casimir systems, resulting in \"geothermal\" Casimir phenomena1, is another peculiar feature that should be added to the long list of peculiarities of the Casimir effect; it clearly deserves further investigation.
Footnote 1: We introduce the attribute “geothermal” here, since it directly describes the source and nature of this phenomenon. No link exists between the physics discussed here and, e.g., geothermal heat-pumps etc. dealt with in the geological sciences.
## 2 Worldline approach to the Casimir effect at finite temperature
Let us briefly summarize the worldline approach to the Casimir effect. More detailed descriptions and derivations from first principles can be found in [11, 16]. We consider the Casimir interaction energy, serving as a potential energy for the force, for two rigid objects with surfaces \\(\\Sigma_{1}\\) and \\(\\Sigma_{2}\\). For a massless scalar field with Dirichlet boundaries in \\(D=3+1\\), the worldline representation of the Casimir interaction energy is given by
\\[E_{\\rm Casimir}=-\\frac{1}{2}\\frac{1}{(4\\pi)^{2}}\\int_{0}^{\\infty}\\frac{d \\mathcal{T}}{\\mathcal{T}^{3}}\\int d^{3}x_{\\rm CM}\\ \\langle\\Theta_{\\Sigma}[{\\bf x}(\\tau)]\\rangle\\,. \\tag{1}\\]
Here, the worldline functional \\(\\Theta_{\\Sigma}[{\\bf x}(\\tau)]=1\\) if the path \\({\\bf x}(\\tau)\\) intersects the surface \\(\\Sigma=\\Sigma_{1}\\cup\\Sigma_{2}\\) in both parts \\(\\Sigma_{1}\\) and \\(\\Sigma_{2}\\), and \\(\\Theta_{\\Sigma}[{\\bf x}(\\tau)]=0\\) otherwise.
This compact formula has an intuitive interpretation: the worldlines can be viewed as the spacetime trajectories of the quantum fluctuations of the scalar field. Any worldline that intersects the surfaces does not satisfy Dirichlet boundary conditions.
Figure 1: Left panel: sketch of the parallel-plate configuration (taken from [32]). Right panel: sketch of the finite-temperature spacetime; a worldline can wind around the compactified time dimension.
All worldlines that intersect both surfaces thus should be removed from the ensemble of allowed fluctuations, thereby contributing to the negative Casimir interaction energy. The auxiliary integration parameter \\({\\cal T}\\), the so-called propertime, effectively governs the extent of a worldline in spacetime. Large \\({\\cal T}\\) correspond to IR fluctuations with large worldlines, small \\({\\cal T}\\) to UV fluctuations.
The expectation value in Eq. (1) has to be taken with respect to the ensemble of worldlines that obeys a Gaussian velocity distribution
\\[\\langle\\ldots\\rangle=\\int_{\\mathbf{x}_{\\mathrm{CM}}}{\\cal D}x\\,\\ldots\\,e^{- \\frac{1}{4}\\int_{0}^{\\cal T}d\\tau\\,\\dot{x}^{2}(\\tau)}\\bigg{/}\\int_{\\mathbf{x} _{\\mathrm{CM}}}{\\cal D}xe^{-\\frac{1}{4}\\int_{0}^{\\cal T}d\\tau\\,\\dot{x}^{2}( \\tau)}, \\tag{2}\\]
where the worldlines have a common center of mass \\(x_{\\mathrm{CM}}\\). At zero temperature, the time component of the worldlines cancels out for static objects, hence the straightforward Monte Carlo computation of Eqs. (1) and (2) can be restricted to the spatial part.
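A minimal sketch of how such a worldline ensemble can be generated and evaluated numerically is given below, for the example of two parallel plates. The simple Brownian-bridge discretization used here is only one possible loop-generation algorithm (dedicated algorithms are described in [13]); the loop numbers are illustrative choices of our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_loops(n_loops, n_points, dim=1):
    """Closed 'unit' loops y(t), t in [0,1], with y(0) = y(1) = 0,
    distributed according to exp(-(1/4) int_0^1 ydot^2 dt)
    (Brownian-bridge discretization)."""
    dt = 1.0 / n_points
    # Gaussian increments with variance 2*dt reproduce the velocity weight
    steps = rng.normal(0.0, np.sqrt(2.0 * dt), (n_loops, n_points, dim))
    paths = np.cumsum(steps, axis=1)
    t = np.linspace(dt, 1.0, n_points)[None, :, None]
    return paths - t * paths[:, -1:, :]   # subtract drift -> closed loops

def theta_parallel_plates(loops, z_cm, sqrt_T, a):
    """Worldline functional Theta_Sigma for two plates at z = 0 and z = a:
    1 if the scaled loop sqrt(T)*y + z_cm pierces both plates, else 0."""
    z = sqrt_T * loops[..., 0] + z_cm
    return ((z.min(axis=1) <= 0.0) & (z.max(axis=1) >= a)).astype(float)

# fraction of worldlines violating the boundary conditions for one
# (propertime, center-of-mass) combination
loops = unit_loops(n_loops=1000, n_points=2000)
print(theta_parallel_plates(loops, z_cm=0.5, sqrt_T=1.0, a=1.0).mean())
```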
Finite temperature can now easily be implemented with the aid of the Matsubara formalism, and also the technical changes of the numerical algorithm are only minor: The Euclidean time, say along the \\(D\\)th direction, is compactified to the interval \\([0,\\beta]\\) with periodic boundary conditions for bosonic fluctuations. As a consequence, the worldlines can also wind around the time dimension, see Fig. 1. It is convenient to write a given loop \\(x(\\tau)\\) with winding number \\(n\\) as sum of a loop with no winding, \\(\\tilde{x}(\\tau)\\), and a translation in time running from zero to \\(n\\beta\\) with constant speed,
\\[x_{\\mu}(\\tau)=\\tilde{x}_{\\mu}(\\tau)+n\\beta\\frac{\\tau}{\\cal T}\\delta_{\\mu D}. \\tag{3}\\]
The path integral over the different winding number sectors labeled by \\(n\\) factorizes for static configurations, yielding
\\[\\int_{x(0)=x({\\cal T})}{\\cal D}x\\ e^{-\\int_{0}^{\\cal T}d\\tau\\frac{\\dot{x}^{2}} {4}}\\ \\cdots\\ =\\sum_{n=-\\infty}^{\\infty}e^{-\\frac{n^{2}\\beta^{2}}{4{\\cal T}}}\\int_{ \\tilde{x}(0)=\\tilde{x}({\\cal T})}{\\cal D}\\tilde{x}\\ e^{-\\int_{0}^{\\cal T}d\\tau \\frac{\\dot{x}^{2}}{4}}\\ \\cdots. \\tag{4}\\]
The worldline representation of the Casimir interaction energy for the Dirichlet scalar at finite temperature thus reads
\\[E_{\\mathrm{Casimir}}=-\\frac{1}{2}\\frac{1}{(4\\pi)^{2}}\\int_{0}^{\\infty}\\frac{d {\\cal T}}{{\\cal T}^{3}}\\Big{(}\\sum_{n=-\\infty}^{\\infty}e^{-\\frac{n^{2}\\beta^{ 2}}{4{\\cal T}}}\\Big{)}\\int d^{3}x_{\\mathrm{CM}}\\ \\langle\\Theta_{\\Sigma}[\\mathbf{x}(\\tau)]\\rangle\\,. \\tag{5}\\]
Whereas the worldline expectation value remains identical to the one at zero temperature, the winding sum re-weights the propertime integrand: larger temperature emphasizes smaller propertimes and vice versa. This confirms the expectation that thermal corrections at low temperature are dominated by long wavelength fluctuations which in our case correspond to worldlines with a large spatial extent.
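For completeness, the winding sum appearing in Eq. (5) is trivial to evaluate numerically; the snippet below (with an illustrative truncation of our own choosing) shows its limiting behavior for small and large \(\beta^{2}/\mathcal{T}\).

```python
import numpy as np

def winding_factor(beta_sq_over_T, n_max=50):
    """The winding sum sum_n exp(-n^2 beta^2 / (4 T)) of Eq. (5),
    truncated at |n| <= n_max (the series converges very rapidly)."""
    n = np.arange(-n_max, n_max + 1)
    return np.exp(-(n**2) * beta_sq_over_T / 4.0).sum()

print(winding_factor(100.0))  # low T or small propertime: only n = 0 survives
print(winding_factor(0.01))   # high T or large propertime: many windings add up
```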
It is important to note that \\(E_{\\mathrm{Casimir}}\\) is normalized such that \\(E_{\\mathrm{Casimir}}\\to 0\\) for infinite distances \\(a\\to\\infty\\). Hence, \\(E_{\\mathrm{Casimir}}\\) can differ from the thermodynamic free energy by thermal corrections to the self-energies of the single surfaces. The latter is distance-independent and thus does not contribute to the Casimir force.
### Parallel plates
As a test, let us consider the two parallel plates separated by a distance \\(a\\) along the \\(z\\) axis. Interchanging expectation value and \\(z_{\\rm CM}\\) integration in Eq. (5), we encounter
\\[\\int_{-\\infty}^{\\infty}\\,dz_{\\rm CM}\\,\\Theta_{\\Sigma}[{\\bf x}]=(\\sqrt{\\mathcal{T }}l-a)\\,\\theta(\\sqrt{\\mathcal{T}}l-a), \\tag{6}\\]
where \\(l\\) denotes the dimensionless extent of the given worldline in \\(z\\) direction measured in units of \\(\\sqrt{\\mathcal{T}}\\); cf. [16]. Differentiating Eq. (5) by \\(-\\partial/\\partial a\\) yields the Casimir force
\[F_{\rm Casimir}=-\frac{1}{2}\frac{A}{(4\pi)^{2}}\,\frac{1}{a^{4}}\left\langle\int_{1/l^{2}}^{\infty}\frac{d\mathcal{T}}{\mathcal{T}^{3}}\Big{(}\sum_{n=-\infty}^{\infty}e^{-\frac{n^{2}}{4\mathcal{T}}\frac{\beta^{2}}{a^{2}}}\Big{)}\right\rangle, \tag{7}\]
where \(A\) is the (infinite) area of the plates. Figure 2 shows the numerical result for the Casimir force (7), corresponding to the distance-dependent part of the free energy, normalized to the zero-temperature result. For comparison, the analytic result,
\\[F_{\\rm Casimir}=\\frac{\\pi^{2}}{2}\\frac{\\partial}{\\partial a}\\left[\\frac{AT}{ a^{2}}\\sum_{m=1}^{\\infty}\\frac{1}{(2\\pi m)^{3}}\\left(\\coth(2\\pi maT)+2\\pi maT \\,{\\rm csch}^{2}(2\\pi maT)\\right)\\right], \\tag{8}\\]
is also shown, see e.g. [33]2. Both results agree satisfactorily.
Footnote 2: We use half the value of [33], which is derived for the electromagnetic field with two degrees of freedom.
Incidentally, the leading thermal correction can be obtained analytically from the worldline representation (7): for \((aT)^{2}\ll 1\) (and \(n\neq 0\)), the propertime integrand is dominated by large \(\mathcal{T}\), hence the lower bound can safely be set to zero. This results in the well-known leading thermal correction \(\Delta F(T)=-(\pi^{2}/90)AT^{4}\), which can also be understood as an excluded volume effect: thermal modes are excluded from the region between the two plates which thus does not contribute to the Stefan-Boltzmann law.
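The analytic result (8) can be checked numerically, e.g., by evaluating the bracketed sum and taking the \(a\)-derivative by a central difference. The following sketch (with truncation and step-size parameters of our own choosing) reproduces the ratio \(F(T)/F(T=0)\) plotted in Fig. 2; in the \(T\to 0\) limit it recovers the Dirichlet-scalar parallel-plate force \(-\pi^{2}A/(480a^{4})\).

```python
import numpy as np

def bracket(a, T, A=1.0, m_max=500):
    """The a-dependent bracket in Eq. (8), in units hbar = c = k_B = 1."""
    m = np.arange(1, m_max + 1)
    x = 2.0 * np.pi * m * a * T
    # for large x, coth(x) -> 1 and x*csch(x)^2 -> 0; mask to avoid overflow
    xm = np.minimum(x, 30.0)
    h = np.where(x < 30.0, 1.0 / np.tanh(xm) + xm / np.sinh(xm) ** 2, 1.0)
    return (A * T / a**2) * np.sum(h / (2.0 * np.pi * m) ** 3)

def force_parallel(a, T, eps=1e-5):
    """Eq. (8): F = (pi^2/2) * d/da[bracket], via a central difference."""
    return (np.pi**2 / 2.0) * (bracket(a + eps, T)
                               - bracket(a - eps, T)) / (2.0 * eps)

a = 1.0
f0 = -np.pi**2 / 480.0 / a**4          # Dirichlet-scalar T=0 force per area
print("T->0 check:", force_parallel(a, 1e-5) / f0)  # should be close to 1
for aT in [0.1, 0.3, 0.5]:
    print("aT = %.1f : F(T)/F(0) = %.3f" % (aT, force_parallel(a, aT / a) / f0))
```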
Figure 2: Temperature dependence of the Casimir force between two parallel plates. The ratio of the force at temperature \\(T\\) and at zero temperature is plotted versus the temperature in units of the plate distance \\(a\\). The rectangle at low temperature marks the region magnified in Fig. 3. For the worldline numerical result, we have employed 800 loops each with 1 000 000 points.
### Perpendicular plates
We now consider the perpendicular-plate configuration introduced above. Again, we can perform the \\(z_{\\rm CM}\\) integration first, yielding for the force
\[F_{\rm Casimir}=-\frac{L}{2(4\pi)^{2}}\,\frac{1}{a^{3}}\left\langle\int d\xi\int_{1/l(\xi)^{2}}^{\infty}\frac{d\mathcal{T}}{\mathcal{T}^{5/2}}\Big{(}\sum_{n=-\infty}^{\infty}e^{-\frac{n^{2}}{4\mathcal{T}}\frac{\beta^{2}}{a^{2}}}\Big{)}\right\rangle, \tag{9}\]
where \(L\) is the (infinite) length of the system along the edge. Here, \(l(\xi)\) denotes the dimensionless extent of the given worldline in the \(z\) direction as seen by the configuration in units of \(\sqrt{\mathcal{T}}\); it depends on the position \(\xi\) of the worldline normal to the perpendicular plate which is also measured in units of \(\sqrt{\mathcal{T}}\); for details, see [34]. Figure 3 compares the resulting temperature correction with that of the parallel-plates case in the small temperature range of Fig. 2. In contrast to the weak \((aT)^{4}\) dependence of the parallel-plates result, the Casimir interaction energy for the perpendicular plates shows a strong increase with temperature. For typical experimental values at the larger distance \(a=1.5\mu\)m and room temperature, the temperature correction is about 6%. At the same distance and temperature, the temperature effect for the parallel plates is 0.7%. The open geometry therefore exhibits a thermal correction which is an order of magnitude larger than in the closed parallel-plates case.
The leading thermal correction can again be computed analytically from the worldline representation (9) by extending the lower bound of the \\(\\mathcal{T}\\) integral to 0 and using \\(\\langle\\int d\\xi\\rangle\\equiv\\langle l\\rangle=\\sqrt{\\pi}\\). Here, \\(l\\) denotes the extension of the
Figure 3: Temperature dependence of the Casimir force for two perpendicular plates compared to the parallel-plates result. The ratio of the force at temperature \(T\) and at zero temperature is plotted versus the temperature in units of the plate distance \(a\). The plot shows the small-temperature range of Fig. 2. At experimentally relevant large-separation values of \(Ta\) (vertical line), the temperature correction for the perpendicular plates is \(\simeq 6\%\), which should be compared with \(\sim 0.7\%\) for the parallel plates. For the worldline numerical results we have employed 800 loops each with 1 000 000 points. The error bars represent the statistical error.
worldline in the direction normal to the semi-infinite plate [16]. We obtain
\[F_{\rm Casimir}(T)\simeq F_{\rm Casimir}\big{|}_{T=0}-\frac{\zeta(3)}{4\pi}\,\frac{L}{a^{3}}\,(aT)^{3},\quad\mbox{for $(aT)\ll 1$}, \tag{10}\]
which is confirmed by the full numerical result over the whole range of temperatures shown in Fig. 3. We note that the low-temperature scaling of the thermal correction for this open geometry cannot be understood as an excluded-volume effect.
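To illustrate the size of the correction (10), the snippet below evaluates the relative thermal force correction for a few values of \(aT\). The zero-temperature force is parametrized as \(F_{0}=-c_{\perp}L/a^{3}\) with a placeholder coefficient \(c_{\perp}\); its precise value follows from the worldline computations of [32, 16] and is not quoted in this text, so the numbers printed here are purely illustrative.

```python
import numpy as np
from scipy.special import zeta

def delta_F_perp(aT, L_over_a3=1.0):
    """Leading thermal correction of Eq. (10):
    Delta F = -(zeta(3)/(4 pi)) * (L/a^3) * (aT)^3."""
    return -zeta(3) / (4.0 * np.pi) * L_over_a3 * aT**3

# ASSUMED zero-temperature force F_0 = -c_perp * L / a^3; c_perp is a
# placeholder (the true value follows from the worldline results of [32, 16])
c_perp = 0.01
for aT in [0.05, 0.1, 0.2]:
    print("aT = %.2f : Delta F / F_0 = %.1f%%"
          % (aT, 100.0 * delta_F_perp(aT) / (-c_perp)))
```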
## 3 Conclusions
We have presented analytical as well as numerical results for the nontrivial interplay between geometry and finite temperature for the Casimir effect in open geometries. For the first time, we have shown that the gapless nature of the fluctuation spectrum leads to a strong enhancement of the thermal correction to the Casimir force. Our numerical data for the perpendicular-plate case with Dirichlet b.c. confirms our analytically derived \\((aT)^{3}\\) force law at low temperatures. This should be compared to the weaker \\((aT)^{4}\\) dependence of the parallel-plates case where a gap in the spectrum suppresses the thermal correction. This calls urgently for further first-principles computations for other open geometries such as the sphere-plate case.
It is a pleasure to thank Michael Bordag and his team for the organization of the QFEXT07 workshop and for creating such a stimulating atmosphere. This work was supported by the DFG under Gi 328/1-3 (Emmy-Noether program) and Gi 328/3-2.
## References
* [1] H.B.G. Casimir, Kon. Ned. Akad. Wetensch. Proc. **51**, 793 (1948).
* [2] K. A. Milton, _The Casimir Effect: Physical Manifestations of Zero-Point Energy_, World Scientific, River Edge, USA (2001); M. Bordag, U. Mohideen and V. M. Mostepanenko, Phys. Rept. **353**, 1 (2001); S. Y. Buhmann and D. G. Welsch, Prog. Quant. Electron. **31**, 51 (2007) [arXiv:quant-ph/0608118].
* [3] B.V. Derjaguin, I.I. Abrikosova, E.M. Lifshitz, Q.Rev. **10**, 295 (1956); J. Blocki, J. Randrup, W.J. Swiatecki, C.F. Tsang, Ann. Phys. (N.Y.) **105**, 427 (1977).
* [4] G. Feinberg and J. Sucher, Phys. Rev. A **2**, 2395 (1970).
* [5] R. Balian and B. Duplantier, Annals Phys. **112**, 165 (1978).
* [6] S. K. Lamoreaux, Phys. Rev. Lett. **78**, 5 (1997); U. Mohideen and A. Roy, Phys. Rev. Lett. **81**, 4549 (1998); H.B. Chan _et al._, Science 291, 1941 (2001); R.S. Decca _et al._, Phys. Rev. D **68**, 116003 (2003); Phys. Rev. Lett. **94**, 240401 (2005).
* [7] M. Schaden and L. Spruch, Phys. Rev. A **58**, 935 (1998); Phys. Rev. Lett. **84** 459 (2000)
* [8] M. Bordag, D. Robaschik and E. Wieczorek, Annals Phys. **165**, 192 (1985).
* [9] T. Emig, A. Hanke and M. Kardar, Phys. Rev. Lett. **87** (2001) 260402.
* [10] M. Brown-Hayes, D.A.R. Dalvit, F.D. Mazzitelli, W.J. Kim and R. Onofrio, Phys. Rev. A **72**, 052102 (2005).
* [11] H. Gies, K. Langfeld and L. Moyaerts, JHEP **0306**, 018 (2003); arXiv:hep-th/0311168.
* [12] M. G. Schmidt and C. Schubert, Phys. Lett. B **318**, 438 (1993) [arXiv:hep-th/9309055]; for a review, see C. Schubert, Phys. Rept. **355**, 73 (2001).
* [13] H. Gies and K. Langfeld, Nucl. Phys. B **613**, 353 (2001); Int. J. Mod. Phys. A **17**, 966 (2002).
* [14] H. Gies and K. Klingmuller, J. Phys. A **39** 6415 (2006) [arXiv:hep-th/0511092].
* [15] H. Gies and K. Klingmuller, Phys. Rev. Lett. **96**, 220401 (2006) [arXiv:quant-ph/0601094].
* [16] H. Gies and K. Klingmuller, Phys. Rev. D **74**, 045002 (2006) [arXiv:quant-ph/0605141].
* [17] T. Emig and R. Buscher, Nucl. Phys. B **696**, 468 (2004).
* [18] A. Bulgac, P. Magierski and A. Wirzba, Phys. Rev. D **73**, 025007 (2006) [arXiv:hep-th/0511056]; A. Wirzba, A. Bulgac and P. Magierski, J. Phys. A **39** (2006) 6815 [arXiv:quant-ph/0511057].
* [19] T. Emig, R. L. Jaffe, M. Kardar and A. Scardicchio, Phys. Rev. Lett. **96** (2006) 080403.
* [20] M. Bordag, Phys. Rev. D **73**, 125018 (2006); Phys. Rev. D **75**, 065003 (2007).
* [21] O. Kenneth and I. Klich, Phys. Rev. Lett. **97**, 160401 (2006); arXiv:0707.4017.
* [22] T. Emig, N. Graham, R. L. Jaffe and M. Kardar, arXiv:0707.1862; arXiv:0710.3084.
* [23] R. B. Rodrigues, P. A. Maia Neto, A. Lambrecht and S. Reynaud, Phys. Rev. Lett. **96**, 100402 (2006) [arXiv:quant-ph/0603120]; Phys. Rev. A **75**, 062108 (2007).
* [24] N. Graham, R. L. Jaffe, V. Khemani, M. Quandt, O. Schroeder and H. Weigel, Nucl. Phys. B **677**, 379 (2004) [arXiv:hep-th/0309130].
* [25] F. D. Mazzitelli, D. A. R. Dalvit and F. C. Lombardo, New J. Phys. **8**, 240 (2006); D. A. R. Dalvit, F. C. Lombardo, F. D. Mazzitelli and R. Onofrio, Phys. Rev. A **74**, 020101 (2006).
* [26] M. Bostrom and Bo E. Sernelius, Phys. Rev. Lett. **84**, 4757 (2000).
* [27] V. M. Mostepanenko _et al._, J. Phys. A **39**, 6589 (2006) [arXiv:quant-ph/0512134].
* [28] I. Brevik, S. A. Ellingsen and K. A. Milton, arXiv:quant-ph/0605005.
* [29] A. Scardicchio and R. L. Jaffe, Nucl. Phys. B **743** (2006) 249 [arXiv:quant-ph/0507042].
* [30] M. Bordag, B. Geyer, G. L. Klimchitskaya and V. M. Mostepanenko, Phys. Rev. Lett. **85**, 503 (2000).
* [31] A. Scardicchio and R. L. Jaffe, Nucl. Phys. B **704**, 552 (2005); Phys. Rev. Lett. **92**, 070402 (2004).
* [32] H. Gies and K. Klingmuller, Phys. Rev. Lett. **97**, 220405 (2006) [arXiv:quant-ph/0606235].
* [33] J. Feinberg, A. Mann and M. Revzen, Annals Phys. **288** (2001) 103 [arXiv:hep-th/9908149].
* [34] K. Klingmuller, Dissertation, Heidelberg U. (2007).

## Abstract

We present first worldline analytical and numerical results for the nontrivial interplay between geometry and temperature dependencies of the Casimir effect. We show that the temperature dependence of the Casimir force can be significantly larger for open geometries (e.g., perpendicular plates) than for closed geometries (e.g., parallel plates). For surface separations in the experimentally relevant range, the thermal correction for the perpendicular-plates configuration exhibits a stronger parameter dependence and exceeds that for parallel plates by an order of magnitude at room temperature. This effect can be attributed to the fact that the fluctuation spectrum for closed geometries is gapped, inhibiting the thermal excitation of modes at low temperatures. By contrast, open geometries support a thermal excitation of the low-lying modes in the gapless spectrum already at low temperatures.
G. Michalek
Astronomical Observatory of Jagiellonian University, Cracow, Poland
N. Gopalswamy
NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
A. Lara
Instituto de Geofisica, UNAM, Mexico
S. Yashiro
Center for Solar and Space Weather, Catholic University of America
## 1 Introduction
Coronal mass ejections (CMEs) originating from regions close to the central meridian of the Sun and directed toward Earth cause the most severe geomagnetic storms (Gosling, 1993; Kahler, 1992; Webb et al., 2001). Many of these Earth-directed CMEs appear as an enhancement surrounding the occulting disk of coronagraphs. We call them halo CMEs (Howard et al., 1982). The measured properties of CMEs include their occurrence rate, direction of propagation in the plane of the sky, angular width, and speed (e.g. Kahler, 1992; Webb, 2000; St. Cyr et al., 2000; Gopalswamy et al., 2003; Gopalswamy, 2004; Yashiro et al., 2004). It is well known that the geoeffective CMEs originate mostly within a latitude of \(\pm 30^{o}\) (Gopalswamy et al., 2000a, 2001; Webb et al., 2000; Wang et al., 2002; Zhang et al., 2003). Srivastava and Venkatakrishnan (2002) showed that the initial speed of the CMEs is correlated with the \(D_{ST}\) index, a measure of the strength of the geomagnetic storm, although their conclusion was based only on the study of four events. This tendency was also suggested earlier by Gosling et al. (1990) and Tsurutani & Gonzalez (1998). On the other hand, Zhang et al. (2003) demonstrated that both slow and fast HCMEs can cause major geomagnetic disturbances. They showed that geoeffective CMEs are more likely to originate from the western hemisphere than from the eastern hemisphere. They also demonstrated a lack of correlation between the size of the X-ray flare associated with a given CME and the importance of geomagnetic storms. Unfortunately, these studies were based on the sky-plane speeds of CMEs without consideration of the projection effects. The parameters describing the properties of CMEs, especially of HCMEs, are affected by projection effects (Gopalswamy et al., 2000b). Assuming that the shape of HCMEs is a cone and that they propagate with constant angular widths and speeds, at least in their early phase of propagation, we have developed a technique (Michalek et al., 2003) which allows us to determine the following parameters: the linear distance \(r\) of the source location measured from the solar disk center, the angular distance \(\gamma\) of the source location measured from the plane of the sky, the angular width \(\alpha\) (cone angle = 0.5\(\alpha\)) and the space velocity \(V\) of a given HCME. A similar cone model was used recently by Xie et al. (2004) to determine the angular width and orientation of HCMEs.
The present paper is divided into two parts. First, in Section 2 we apply the cone model (Michalek et al., 2003) to obtain the space parameters of all HCMEs observed by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) until the end of 2002. In Subsection 2.2 a short statistical analysis of HCMEs, based on the derived parameters, is presented (Fig. 1 - Fig. 4). In Section 3, we use these parameters to identify the most important factors determining the geoeffectiveness of HCMEs and show how they could be used for space weather forecasting (Fig. 5 - Fig. 17).
## 2 Space parameters of HCMEs
### Data
The list of HCMEs studied in this paper is shown in Table 1. We considered only frontside full (type F) and asymmetric (type A) HCMEs (Gopalswamy et al., 2003b). Only these events could be considered using the technique proposed by Michalek et al. (2003). Full halos are the classical halo CMEs which originate from close to the disk center. Asymmetric halos are typically wide, near-limb CMEs which become halos late in the event. They are different from partial halos: the partial halos never appear around the entire occulting disk, even in LASCO/C3 observations (their width is \(<360^{o}\)). Only frontside events could be potentially geoeffective.

The first four columns of Table 1 are from the SOHO/LASCO catalog (date, time of first appearance in the coronagraph field of view, projected speed and position angle of the fastest part of the HCME). Details about the SOHO/LASCO catalog and the method of measurements are described by Yashiro et al. (2004). The parameters \(r,\gamma,\alpha,\) and \(V\), estimated from the cone model (Michalek et al., 2003), are shown in columns (5), (6), (7), and (8), respectively. It is important to note that for some events the space velocity determined by this technique could be smaller than the projected speeds reported in the LASCO catalog. This is because the Michalek et al. (2003) technique applies only to the beginning phase of CMEs, whereas the CME catalog gives the average speed within the LASCO field of view. The model also cannot estimate the parameters for symmetric HCMEs originating very close to the disk center and for limb events appearing as halos on account of deflections of preexisting coronal structures. In column (9) the source locations of the associated H-alpha flares are given. The associated flares were determined using two restrictions: they should originate in the same part of the solar disk and start at about the same time as the respective CMEs (the time window is about half an hour). To be sure that our determination is correct, we also checked the EIT and LASCO movies together. It is important to note that the locations of solar flares might be slightly shifted with respect to the origins of CMEs. This might affect some figures and the presented correlations. By examining the solar wind plasma data from the Solar Wind Experiment (Wind/SWE, [http://web.mit.edu/space/www/wind/](http://web.mit.edu/space/www/wind/)) and interplanetary magnetic field data (from the Magnetic Field Investigation, [http://lepmfi.gsfc.nasa.gov/mfl](http://lepmfi.gsfc.nasa.gov/mfl)), we identified, when possible, the associated interplanetary CMEs (ICMEs). The changes of the geomagnetic indices \(D_{ST}\) and \(Ap\) caused by these ICMEs are presented in columns 10 and 11, respectively. The last two columns give the maximum values of the magnitude (\(B\)) and southward component (\(B_{Z}\)) of the magnetic field in the ICME.

We considered 144 frontside HCMEs (FHCMEs) recorded by the LASCO coronagraphs until the end of 2002. For 101 (70%) of them we were able to determine the required parameters (\(r,\gamma,\alpha,\) and \(V\)). The events that could not be measured were mostly too faint to get height-time plots at the opposite sides of the occulting disk. Only a few (16) were symmetric events for which we could not obtain the HCME parameters.
### Statistical analysis
#### 2.2.1 The space velocities of FHCMEs
The properties of halo CMEs observed by SOHO/LASCO have been described in a number of papers (Gopalswamy et al., 2003a; Gopalswamy, 2004; Yashiro et al., 2004). Here we describe the properties of FHCMEs measured according to Michalek et al. (2003). Fig. 1 shows the distribution of the space velocities (\(V\)) of FHCMEs during the ascending (1996-1999) and maximum phases of solar activity (2000-2002) as well as for the whole period (1996-2002). It was noted before, e.g., by Webb et al. (1999), Gopalswamy (2004, see Fig. 1.13, 1.14) and Yashiro et al. (2004), that HCMEs are much faster and more energetic than typical CMEs. Our results also confirm this. The average speed of the HCMEs is \(1300km/s\) (about 25% larger than that for HCMEs from the SOHO/LASCO catalog, Yashiro et al., 2004). The difference between the average speeds obtained in the present paper and by Yashiro et al. (2004) is likely due to the fact that we use corrected speeds while Yashiro et al. (2004) used sky-plane speeds; we also use a smaller number of events in the statistics. From the histograms in Figure 1, it is evident that the velocities of HCMEs increase significantly following the solar activity cycle, as for all CMEs (Yashiro et al., 2004). During the maximum of solar activity the FHCMEs have, on average, velocities about 40% higher than the average velocities during the minimum of solar activity. The speed of the slowest event is \(189km/s\) while the speed of the fastest one is \(2655km/s\).
In Fig. 2, we present the sky-plane speeds against the corrected (space) speeds. The solid line represents the linear fit to the data points. The slope of the linear fit demonstrates that the projection effect increases slightly with the speed of CMEs. It is clear that the projection effect is important: on average, the corrected speeds are 25% higher than the velocities measured in the plane of the sky. This was also anticipated based on other considerations (Gopalswamy et al., 2001). It is important to note that both the sky-plane and corrected speeds are determined at the same distance (\(2R_{\odot}\)) from the disk center.
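The comparison of Fig. 2 can be summarized in a few lines of code. The sketch below fits hypothetical (sky-plane, space) speed pairs and prints the mean correction factor; the real input would be the projected and cone-model speeds of Table 1. For orientation it also prints the naive \(1/\cos\gamma\) correction that would apply to purely radial motion; note that this naive correction diverges for sources near the disk center (\(\gamma\to 90^{o}\)), which is precisely why the cone model, based on the lateral expansion of the halo, is needed.

```python
import numpy as np
from scipy import stats

# hypothetical (sky-plane, space) speed pairs in km/s
v_sky   = np.array([ 400.,  700.,  900., 1200., 1600., 2000.])
v_space = np.array([ 520.,  880., 1130., 1490., 2010., 2480.])

slope, intercept, r, p, err = stats.linregress(v_sky, v_space)
print("linear fit: V_space = %.2f * V_sky + %.0f km/s (r = %.2f)"
      % (slope, intercept, r))
print("mean projection correction V_space/V_sky = %.2f"
      % np.mean(v_space / v_sky))

# naive correction for purely radial motion (diverges toward the disk center)
gamma = np.deg2rad(45.0)
print("1/cos(gamma) at gamma = 45 deg: %.2f" % (1.0 / np.cos(gamma)))
```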
#### 2.2.2 Widths of FHCMEs
Fig. 3 shows the distribution of the estimated widths (\(\alpha\)) of FHCMEs during the ascending (1996-1999) and maximum phases of solar activity (2000-2002) as well as for the whole period (1996-2002). The average width of HCMEs is \(120^{o}\) (more than twice the average value obtained from the SOHO/LASCO catalog, Yashiro et al., 2004). The average width of HCMEs does not change significantly with solar activity, except for a small increase during the maximum of solar activity. The narrowest HCME has a width of \(39^{o}\) and the widest one has \(\alpha\) as large as \(168^{o}\).
#### 2.2.3 Source locations of FHCMEs
Fig. 4 presents the distribution of the source locations (\(\gamma\)) of FHCMEs during the ascending (1996-1999) and maximum phases of solar activity (2000-2002) and for the whole period (1996-2002). FHCMEs with \(\gamma\) close to \(0^{o}\) originate near the solar limb while events with \(\gamma\) close to \(90^{o}\) originate from the disk center region. Fig. 4 shows that the FHCMEs originate close to the disk center, with the maximum of the distribution around \(\gamma=62^{o}\). The distribution of source locations does not depend on the period of solar activity. We have to note that these distributions are slightly biased by the fact that we neglected 16 symmetric FHCMEs (these CMEs cannot be measured using the cone model). They originate very close to the disk center and would slightly increase the average value of \(\gamma\).
## 3 Geoeffectiveness of FHCMEs
Having defined the parameters describing FHCMEs, we now explore which of these parameters determine the strength of geomagnetic disturbances. In situ counterparts of frontside HCMEs can be recognized in the magnetic field and plasma measurements as ejecta (EJs) or magnetic clouds (MCs). Magnetic clouds can be identified by the following characteristic properties: (1) the magnetic field strength is higher than average; (2) the proton temperature is lower than average; (3) the magnetic field direction rotates smoothly (Burlaga, 1988, 2002, 2003a,b; Lepping et al., 1990). In the present paper we refer to both MCs and EJs as interplanetary CMEs (ICMEs). The presence of these signatures changes from one ICME to another. By examining the solar wind plasma data we identified, when possible, the ICMEs that could be responsible for geomagnetic disturbances. The strength of geomagnetic storms is described by two indices, \(Ap\) (which measures the general level of geomagnetic activity over the globe) and \(D_{ST}\) (which is obtained using magnetometer data from stations near the equator). The maximum values of the \(D_{ST}\) and \(Ap\) indices associated with the ICMEs are presented in Table 1. We included those events for which the \(D_{ST}\) index decreased below -25nT. We now examine the relation between the geomagnetic indices and \(V\), \(\alpha\) and \(\gamma\). First, in Subsection 3.1, we consider the influence of the different parameters on the geoeffectiveness of FHCMEs (Fig. 5 - Fig. 14). In Subsection 3.2 we try to find which FHCMEs could cause false alarms (Fig. 15 - Fig. 17).
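As a schematic illustration of how the first two magnetic-cloud signatures listed above can be turned into an automatic screening of the in-situ data, consider the sketch below. The thresholds (field enhancement factor, temperature depression factor) are our own illustrative choices, and the smooth field rotation of MCs is not tested; a real identification also relies on visual inspection.

```python
import numpy as np

def flag_icme(B, T_proton, T_expected, b_factor=1.5, t_factor=0.5):
    """Boolean mask of candidate ICME intervals based on two of the
    signatures quoted in the text: enhanced field strength and depressed
    proton temperature. The smooth field rotation of MCs is not tested."""
    strong_field = B > b_factor * np.nanmean(B)
    cool_protons = T_proton < t_factor * T_expected
    return strong_field & cool_protons

# synthetic one-day example with an 'ICME' between hours 10 and 16
hours = np.arange(24.0)
inside = (hours >= 10) & (hours <= 16)
B  = np.where(inside, 18.0, 6.0)     # field magnitude [nT]
Tp = np.where(inside, 2.0e4, 8.0e4)  # proton temperature [K]
print(flag_icme(B, Tp, T_expected=8.0e4))
```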
### Geoeffectiveness of FHCMEs
#### 3.1.1 Geoeffectiveness and space velocity (\(V\)) of FHCMEs
Fig. 5 shows the scatter plots of the plane-of-the-sky speeds versus the \(D_{ST}\) and \(Ap\) indices. Diamond symbols represent events originating from the western hemisphere and cross symbols represent events originating from the eastern hemisphere. The solid lines are the linear fits to the data points associated with the eastern events, and the dashed lines are linear fits to the data points associated with the western events. The dot-dashed vertical lines indicate velocity limits above which HCMEs can cause geomagnetic storms with \(D_{ST}\leq-150nT\). These lines were inferred from the two events on 1 May 1998 and 2 May 1998. Upon inspection of this figure, it is clear that major geomagnetic storms can be generated by slow (speeds \(\approx 500km/s\)) as well as fast HCMEs originating in the western hemisphere. There is not a significant correlation (correlation coefficients are \(<0.50\)) between the projected speed and the geomagnetic indices. The linear and Spearman correlation coefficients are approximately equal to \(0.35(0.31)\) for the western and \(0.10(0.05)\) for the eastern events, similar to the results of Zhang et al. (2003).

The situation is different when we consider the space velocities of HCMEs. In Fig. 6, the scatter plots of \(V\) versus the \(D_{ST}\) and \(Ap\) indices are presented. The space velocities are larger than the plane-of-the-sky speeds and all events in the panels are shifted to the higher velocity range, especially the two events on 1 May 1998 and 2 May 1998, which seem to be narrow (width \(\approx 40^{o}\)) and three times faster than they appear in the LASCO observations (Table 1). The determination of the space velocity is consistent with the observations of the ICMEs associated with these events: since these CMEs needed only \(\approx 46\) hours to reach Earth (Manoharan et al., 2004; Michalek et al., 2004), they must be very fast. In the LASCO observations these CMEs appear faint, suggesting that they are narrow, and they could be observed as halos only when they are far from the Sun. Upon inspection of this figure, it is clear that only very fast events (\(V\geq 1100km/s\)) originating in the western hemisphere can cause the biggest geomagnetic storms (\(D_{ST}\leq-150nT\)). The dot-dashed vertical lines indicate velocity limits above which HCMEs can cause severe geomagnetic storms. We find a significant correlation (correlation coefficients are \(>0.50\)) between velocity and geomagnetic indices for the western events. The linear and Spearman correlation coefficients are \(0.60(0.54)\) and \(0.62(0.56)\) for the \(Ap\) and \(D_{ST}\) indices, respectively. In contrast, there is very little correlation between the space velocity and the geomagnetic indices for the eastern events: the linear and Spearman correlation coefficients are \(0.16(0.04)\) and \(0.07(0.02)\) for the \(Ap\) and \(D_{ST}\) indices, respectively. Events originating in the eastern hemisphere are not likely to cause major geomagnetic storms.

Fig. 7 shows the distributions of the space velocities (\(V\)) of FHCMEs which cause geomagnetic disturbances with the \(D_{ST}\) index lower than \(-25nT\), \(-60nT\) and \(-100nT\), respectively. These histograms demonstrate again that the geoeffectiveness of HCMEs depends on their space velocities and that severe geomagnetic storms with \(D_{ST}<-100nT\) can be caused by fast CMEs (with \(V>700km/s\)) only. These results are different from those reported by Zhang et al. (2003) for the plane-of-the-sky speeds. This demonstrates that conclusions based on coronagraphic observations subject to projection effects could be incorrect.
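The correlation measures quoted throughout this section can be computed as in the following sketch, where the hemispheric subsets are treated separately; the sample arrays are hypothetical stand-ins for the Table 1 entries.

```python
import numpy as np
from scipy import stats

# hypothetical (space speed [km/s], Dst [nT]) samples split by hemisphere
v    = np.array([ 600.,  900., 1200., 1500., 1900., 2300.,  700., 1100., 1600.])
dst  = np.array([ -35.,  -70., -120., -160., -230., -280.,  -30.,  -45.,  -60.])
west = np.array([True, True, True, True, True, True, False, False, False])

for name, mask in [("western", west), ("eastern", ~west)]:
    r_linear   = np.corrcoef(v[mask], dst[mask])[0, 1]
    r_spearman = stats.spearmanr(v[mask], dst[mask])[0]
    print("%s events: linear r = %.2f, Spearman r = %.2f"
          % (name, r_linear, r_spearman))
```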
#### 3.1.2 Geoeffectiveness and \(\gamma\) of HCMEs
In Fig. 8 we show the scatter plots of \(\gamma\) versus the \(D_{ST}\) and \(Ap\) indices. Diamond symbols represent events originating from the western hemisphere and cross symbols represent events originating from the eastern hemisphere. The solid lines are the linear fits to the data points associated with the eastern events and the dashed lines are linear fits to the data points associated with the western events. For the eastern events the correlation between \(\gamma\) and the geomagnetic indices is not significant. For these events, the linear and Spearman correlation coefficients for the \(Ap\) and \(D_{ST}\) indices are \(0.14(0.18)\) and \(0.20(0.38)\), respectively. In contrast, the western events originating close to the disk center (\(\gamma\geq 65^{\circ}\)) are more likely to cause the biggest geomagnetic storms. For these events the correlation coefficients for the \(Ap\) and \(D_{ST}\) indices are \(0.39(0.38)\) and \(0.35(0.42)\), respectively. Similar conclusions are obtained when we consider the H-alpha flare locations. In Fig. 9 we show the scatter plots of the absolute values of the longitudes of the H-alpha flares associated with HCMEs versus the \(D_{ST}\) and \(Ap\) indices. Diamond symbols represent events originating from the western hemisphere and cross symbols represent events originating from the eastern hemisphere. These results confirm the previous conclusion that the western events originating close to the disk center are more likely to cause the biggest geomagnetic storms. We have to note that there is one event, on 04 April 2000, which originated far from the disk center (N26W66) and caused a severe geomagnetic storm with \(D_{ST}=-288nT\). Now the correlation between longitude and the geomagnetic indices is very poor for both the western and eastern events. These results are confirmed by the histograms presented in Fig. 10. This figure presents the distributions of the longitudes of FHCMEs which cause geomagnetic disturbances with the \(D_{ST}\) index lower than \(-25nT\), \(-60nT\) and \(-100nT\). Upon inspection of the histograms, it is clear that the geoeffectiveness of CMEs depends on the longitude of the source location and that severe geomagnetic disturbances (\(D_{ST}<-100nT\)) are mostly caused by the western events originating close to the disk center. During the studied period of time there were only two severe geomagnetic storms (\(D_{ST}<-100nT\)) caused by western events originating far from the disk center. It is important to note that the peak of the longitude distribution is shifted to the west of the disk center.
#### 3.1.3 Geoeffectiveness and angular widths (\\(\\alpha\\)) of CMEs
In Fig. 11, we show the scatter plots of \(\alpha\) against the \(D_{ST}\) and \(Ap\) indices. Cross and diamond symbols are associated with the eastern and western events, respectively. The solid lines are the linear fits to the data points associated with the eastern events and the dashed lines are linear fits to the data points associated with the western events. Upon inspection of the figures it is clear that the geoeffectiveness of CMEs depends very little on their widths. All considered correlation coefficients are \(\leq 0.22\). Even severe geomagnetic storms can be caused by both narrow and wide HCMEs. This means that HCMEs do not have to be very large to cause major geomagnetic storms.
#### 3.1.4 Geoeffectiveness against velocities and source locations of the FHCMEs
As we demonstrated in the previous subsections, the geoeffectiveness of FHCMEs strongly depends on their space velocity \(V\) and source location \(\gamma\). These parameters may be helpful for space weather forecasting. In Figs. 12 and 13, the \(Ap\) and \(D_{ST}\) indices versus \(\gamma\) and \(V\) are shown in contour plots using the Kriging (Isaaks and Srivastava, 1989) procedure for generating regular grids. The darker the shade, the higher are the \(Ap\) and \(D_{ST}\) indices. Knowing the source location and space velocity of a given HCME, we can, in a simple way, predict its geoeffectiveness. From the inspection of the plots we see that the strongest geomagnetic storms can occur for fast events originating close to the disk center.
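A sketch of the gridding step behind Figs. 12 and 13 is given below. For simplicity we use scipy's `griddata` as a stand-in for the Kriging interpolation of Isaaks and Srivastava (1989) used in the paper; the scattered sample points are illustrative.

```python
import numpy as np
from scipy.interpolate import griddata

# hypothetical scattered measurements: (gamma [deg], V [km/s]) -> Dst [nT]
gamma = np.array([30., 45., 60., 75., 85., 50., 70.])
v     = np.array([500., 900., 1300., 1800., 2200., 1500., 1000.])
dst   = np.array([-30., -60., -110., -220., -280., -150., -90.])

# regular grid on which the contour map is evaluated
gg, vv = np.meshgrid(np.linspace(30., 85., 50), np.linspace(500., 2200., 50))

# scipy's griddata serves here as a simple stand-in for Kriging
dst_grid = griddata((gamma, v), dst, (gg, vv), method="linear")
print(np.nanmin(dst_grid), np.nanmax(dst_grid))
```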
Fig. 14 shows the scatter plots of the maximum values of the magnitude (\\(B\\)) and southward component (\\(B_{Z}\\)) of the ICME magnetic field versus \\(D_{ST}\\) and \\(Ap\\) indices. The solid lines are linear fits to the data points. This figure clearly confirms that the major geomagnetic storms are generated by CMEs carrying a strong magnetic field with a significant southward component. The correlation between these parameters and the geomagnetic indices is very large: the linear coefficients are approximately equal to 0.70 for both \\(B\\) and \\(B_{Z}\\). The Spearman correlation coefficients in this case are slightly smaller, approximately equal to 0.60 for both \\(B\\) and \\(B_{Z}\\). This is due to the fact that they are derived from the rank of a variable within the distribution and so are not sensitive to the 4 outlying points with \\(|B|>30nT\\). Unfortunately, \\(B\\) and \\(B_{Z}\\) are measured in situ and hence may not be useful for space weather forecasts.
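The difference between the linear and rank-based coefficients quoted here is easy to demonstrate. The following sketch, on synthetic data with four strong-field outliers standing in for the \\(|B|>30nT\\) events, shows how the Pearson value responds to the outliers while the Spearman value barely moves.

```python
# Pearson vs Spearman sensitivity to a few outliers, on synthetic data.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(1)
B = rng.uniform(5, 20, 60)                   # ICME field magnitude [nT]
dst = -8 * B + rng.normal(0, 40, 60)         # storm strength, with scatter
B_out = np.append(B, [32, 35, 38, 41])       # four strong-field outliers
dst_out = np.append(dst, [-350, -400, -300, -450])

print('Pearson :', pearsonr(B_out, dst_out)[0])
print('Spearman:', spearmanr(B_out, dst_out)[0])
# The outliers strengthen the Pearson value; the rank-based Spearman
# value changes much less.
```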
### False alarms
Previous studies (e.g. Cane et al., 2000; St. Cyr et al., 2000; Wang et al., 2002) have shown that a large fraction of frontside HCMEs is non-geoeffective. St. Cyr et al. (2000) found that only 20/40 (50%) of all frontside HCMEs during 1996-1998 from SOHO/LASCO caused geomagnetic storms with \\(K_{P}\\geq 5\\). Wang et al. (2002), using a larger database (March 1997 to December 2000), showed that 59/132 (45%) of frontside HCMEs could result in moderate to severe geomagnetic storms (\\(K_{P}\\geq 5\\)) and that the majority of these events occurred within latitudes \\(\\pm(10^{o}-30^{o})\\). They also found an asymmetry in the central meridian distance distribution. In the western hemisphere, a geoeffective event could be expected even at \\(\\sim 70^{o}\\). On the eastern side, there were no geoeffective HCMEs outside of \\(40^{o}\\). We performed a similar analysis for our sample of FHCMEs. During the study period, there were only 88/144 FHCMEs with geomagnetic signatures (\\(D_{ST}<-25nT\\)) at Earth, which means that only \\(\\sim\\)60% of FHCMEs are geoeffective. For 65/88 (73%) of them we determined the space parameters (\\(r,\\gamma,\\alpha,\\) and \\(V\\)). If we take into account only those FHCMEs which caused moderate to severe geomagnetic storms (\\(D_{ST}<-60nT\\)), the fraction of geoeffective events decreases to 51/144 (36%). In our sample, there were 56/144 (39%) non-geoeffective HCMEs. It is important to recognize these events because they generate "false alarms". For 36/56 (62%) of them we were able to determine the space parameters (\\(r,\\gamma,\\alpha,\\) and \\(V\\)). We now explore why these FHCMEs did not cause geomagnetic disturbances. Fig. 15 presents the distributions of the longitudes and the space velocities of the 36 non-geoeffective FHCMEs. The histograms show that these events originate from the whole solar disk and have velocities from \\(100km/s\\) up to \\(2500km/s\\). The distributions do not demonstrate any specific signatures characterizing these events. Fig. 16 shows, in the successive panels, the distributions of: the space velocities for FHCMEs originating close to the disk center (\\(|longitude|<30^{o}\\)), the space velocities for FHCMEs originating close to the limb (\\(|longitude|>30^{o}\\)), the longitudes for slow FHCMEs (\\(V<1200km/s\\)) and the longitudes for fast FHCMEs (\\(V>1200km/s\\)). Upon inspection of the histograms (the first and last panels in the figure) it is clear that all fast HCMEs (\\(V>1200km/s\\)) originating close to the disk center (\\(|longitude|<30^{o}\\)) must be geoeffective; there were no false alarms for such events. Slower FHCMEs (\\(V<1200km/s\\)) originating close to the disk center do not have to be geoeffective. In the third panel we note 20 events originating from the disk center but not influencing Earth. On the other hand, even very fast FHCMEs (\\(V>1200km/s\\)) were not geoeffective when they originated close to the limb. In the fourth panel we have 16 fast FHCMEs originating close to the limb without geomagnetic signatures at Earth. Fig. 17 shows the scatter plot of the space velocities versus longitude for all FHCMEs. Diamond symbols represent geoeffective and cross symbols non-geoeffective FHCMEs. The solid lines are linear fits for non-geoeffective events originating from the eastern and western hemispheres. For non-geoeffective eastern and western events the linear and Spearman correlation coefficients are very large and equal 0.88(0.87) and 0.79(0.74), respectively.
Upon inspection of the figure, it is clear that geoeffective events are faster than the non-geoeffective events originating at the same longitude. It is also clear from the strong correlation that events originating farther from the disk center are faster than those originating close to the disk center. Linear fits to the non-geoeffective events can be considered as lower limits for the space velocities above which a given CME originating at a given longitude can be observed as a halo event. In the vicinity of these fits we see both geoeffective and non-geoeffective FHCMEs. Slightly above these fits we see only geoeffective FHCMEs. It is important to note that the inclination of the linear fit to the eastern events is steeper than that for the western events. Eastern events must be faster than western events originating at the same angular distance from the disk center to appear as halo events or to be geoeffective.
Generally our results are consistent with those of previous studies. We would like to emphasize that the geoeffectiveness of HCMEs depends not only on source location, but also on space velocity. Having both parameters improves our ability to forecast whether a given HCME will be geoeffective or not. Non-geoeffective events are either slow or, if fast, originate far from the disk center, so they do not affect the magnetosphere. Those ejected directly toward Earth are slow and are disrupted before they reach Earth; the fast ones are not ejected directly toward Earth (events with large longitudes) and only graze the magnetosphere with their flanks. Unfortunately, it is very difficult to give sharp boundary limits dividing CMEs into geoeffective and non-geoeffective events, since these limits depend not only on the CME properties but also on the conditions in the interplanetary medium; approximate limits can be obtained from Fig. 17, as illustrated below. Of course, we appreciate that additional parameters, such as the strength and orientation of the resulting interplanetary CME, are also expected to play a role in deciding the geoeffectiveness.
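The practical forecasting rule that emerges from Figs. 15-17 can be stated compactly. The sketch below encodes only the two thresholds quoted in the text (\\(V=1200km/s\\) and \\(|longitude|=30^{o}\\)); the labels returned by this hypothetical classifier are ours, not a standard product.

```python
# Minimal encoding of the false-alarm rule from Figs. 15-16: fast FHCMEs
# (V > 1200 km/s) from near disk center (|longitude| < 30 deg) were always
# geoeffective in this sample; slow central and fast limb events need not be.
def expected_geoeffective(v_kms: float, longitude_deg: float) -> str:
    """Classify a frontside halo CME by the velocity/longitude rule."""
    central = abs(longitude_deg) < 30.0
    fast = v_kms > 1200.0
    if fast and central:
        return "geoeffective (no false alarms seen for this class)"
    if fast and not central:
        return "possibly non-geoeffective (flank encounter)"
    return "possibly non-geoeffective (slow, may be disrupted in transit)"

print(expected_geoeffective(1500.0, 10.0))
print(expected_geoeffective(800.0, 5.0))
```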
## 4 Summary
In this study we considered the geoeffectiveness of all full HCMEs observed by the SOHO/LASCO coronagraphs from the launch in 1995 until the end of 2002. For 101/144 (70%) of full HCMEs we were able to find the source location, width and space velocity using the cone model (Michalek et al., 2003). We must be aware that the cone model is only a rough simplification of real events. We know that not all CMEs are perfectly symmetric (Moran and Davila, 2004; Jackson et al., 2004). Most CMEs can be approximated by the cone model, but for some of them this assumption is probably unrealistic. Fortunately, the technique presented by Michalek et al. does not demand perfect symmetry of the CMEs. This approach requires measurements of the sky-plane speeds and the moments of the first appearance of the halo CMEs above the limb at only two opposite points. We are able to determine, with good accuracy, the space velocity and width of a given CME, at least in the symmetry plane crossing the CME at these points. When a given CME can be approximated by the cone model, these derived parameters are valid for the entire CME. HCMEs originate very close to the disk center (mostly within a latitude of \\(\\pm 40^{o}\\)), are very wide (the average angular width \\(=120^{o}\\)) and are very fast (the average space speed \\(=1291km/s\\)). We find a significant (40%) increase in the average space velocities of HCMEs during the maximum of solar activity. These results could suggest that the HCMEs represent a special class of CMEs which are very wide and fast. It is important to note that this "class" of CMEs is defined by an artificial effect caused by coronagraphic observations. Events originating close to the disk center (from the SOHO/LASCO point of view) must be wide and fast to appear as HCMEs in the LASCO observations. This is not due to their localization on the solar disk but due to the occulting disk, which not only blocks the bright photospheric light but also eliminates some narrow and slow events. We have to emphasize that this effect depends mostly on the dimension of the occulting disk and, to a lesser degree, on the sensitivity of the instrument. A more sensitive instrument can record some poorer events (halos as well as non-halos, so the statistics would be similar) but cannot register those events which never appear from behind the occulting disk. A potentially more sensitive instrument could register less energetic events (narrower and slower), and the average velocities and widths (for halos and for the whole population of CMEs) could be slightly lower, but the main relation between the halos and the whole population of events would be the same. Fortunately, poor events do not cause a big concern because they are not geoeffective. We do not expect, in the near future, any special programs devoted to looking for less energetic CMEs. The next scientific mission (STEREO) will be mostly dedicated to recognizing the 3D structure of CMEs. Such fast and wide CMEs are known to be associated with electron and proton acceleration by driving fast mode MHD shocks (e.g., Cane et al., 1987; Gopalswamy et al., 2002a). Using observations from the Wind spacecraft, interplanetary magnetic clouds (MCs) and geomagnetic disturbances associated with HCMEs were identified. The strength of geomagnetic storms, described by the \\(D_{ST}\\) and \\(Ap\\) indices, is highly correlated with the source location and space velocity of a given event. Only very fast HCMEs (space velocity \\(\\geq 1100km/s\\)) originating in the western hemisphere close to the solar center are likely to cause major geomagnetic storms (\\(D_{ST}<-150nT\\)).
Slow HCMEs (space velocity \\(\\leq 1100km/s\\)), even originating close to the solar center, may not cause severe geomagnetic disturbances. We have to note that there was one event (04 April 2000) which originated far from the disk center and produced a severe geomagnetic storm (\\(D_{ST}=-288nT\\)). Probably this storm was not due to an ICME; it was caused by the sheath region ahead of the CME, as was reported by Gopalswamy (2002b). We illustrated, using contour maps, how the derived HCME parameters can be useful for space weather forecasts. We also note that the geoeffectiveness of events does not depend on their widths.
During our study period we recognized 56/144 (39%) FHCMEs without any geomagnetic signature at Earth. This is a significant population of FHCMEs. To distinguish them from the geoeffective events we considered the source locations and space velocities of HCMEs. When both parameters are available, it becomes easier to assess the geoeffectiveness of HCMEs. We may say that fast FHCMEs (\\(V>1200km/s\\)) originating close to the disk center (\\(|longitude|<30^{o}\\)) must be geoeffective. For such events there were no false alarms. But even very fast events originating far from the disk center can be non-geoeffective.
**Acknowledgments. This work was done when GM visited the Center for Solar Physics and Space Weather, The Catholic University of America in Washington. Work done by Grzegorz Michalek was partly supported by _Komitet Badań Naukowych_ through the grant PB 0357/P04/2003/25. Part of this research was also supported by the NASA/LWS and NSF/SHINE programs.**
## References
* (1988) Burlaga, L.F., et al., 1988, J. Geophys. Res., 93,7217
* (2003a) Burlaga, L.F., et al., 2003a, ApJ, 585, 115893
* (2003b) Burlaga, L.F., et al., 2003b, J. Geophys. Res., 108, SSH2-1
* (1987) Cane, H.V., et al., 1987, J. Geophys. Res., 92, 9869
* (2000) Cane, H.V., Richardson, I.G., St.Cyr, O.C.,2000, Geophys. Res. Lett., 27, 3591
* (2000a) Gopalswamy, N., et al. 2000a, Geophys. Res. Lett., 27, 145
* (2000b) Gopalswamy, N., et al. 2000b, Geophys. Res. Lett., 27, 1427
* (2001) Gopalswamy, N., et al., 2001, J. Geophys. Res., 106, 29207
* (2002a) Gopalswamy, N., et al., 2002a, ApJL, 572, L103
* (2002b) Gopalswamy, N., 2002b, in Solar-Terrestrial Magnetic Activity and Space Environment, Ed. H.N. Wang and R.L. Lin, COSPAR Colloquia Ser., Vol. 14, p. 157
* (2003a) Gopalswamy, N., et al., 2003a, ApJ, 598,L63
* (2003b) Gopalswamy, N., et al., 2003b, Proc. ISCS 2003 Symposium, Tatranska Lomnica, Slovakia, p. 403
* (2004) Gopalswamy, N., 2004, ASSL series, ed. G. Poletto and S. Suess, KLUWER, in press
* (1993) Gosling J.T., 1993, J. Geophys. Res., 98, 18937
* (1990) Gosling J.T., et al., 1990, Geophys. Res. Lett., 17, 901
* (1982) Howard R.A., et al., 1982, ApJ, 263, L101
* (1989) Isaaks, E.H., Srivastava, R.M., 1989, An Introduction to Applied Geostatistics, Oxford University Press, New York
* (2004) Jackson, B., et al., 2004, AGU, Fall Meeting 2004, abstract SH21A-0393
* (1992) Kahler S.W., 1992,Annu. Rev. Astron. Astrophys.,30,113
* (1990) Lepping, R.P., et al., 1990, J.Geophys.Res., 95, 11957
* (2004) Manoharan, P.K., et al., 2004 J. Geophys. Res., 109, A06109
* (2003) Michalek, G., et al., 2003, ApJ, 584, 472
* (2004) Michalek, G., et al., 2004, A&A, 423,729
* (2004) Moran, T., Davila, J.M., 2004, Science, 305, 66
* (2000) Srivastava, N., Venkatakrishnan, P., Geophys. Res. Lett., 29, 101029
* (1999) Sheeley, N.R., Jr., Walters, J.H., Wang, Y.-M., Howard, R.A. 1999, J. Geophys. Res., 104, 24739
* (2000) St. Cyr, O.C., et al., 2000, J. Geophys. Res., 105, 18169
* (1998) Tsurutani, B.T., Gonzalez, W.D., 1998, Geophys. Monogr. 98, Washington, DC, AGU, p. 77
* (2002) Wang, Y.M., et al., 2002, J. Geophys. Res., 107, 1340
* (1997) Webb, D.F., et al. 1997, J. Geophys. Res., 102, 24161
* (2000) Webb, D.F., et al. 2000, J. Geophys. Res., 105, 7491
* (2004) Xie H., et al. 2004, J. Geophys. Res., 109, A03109
* (2004) Yashiro, S., et al. 2004, J. Geophys. Res., 109, A07106
* (2003) Zhang, J., et al., 2003, ApJ, 582, 520
Figure 1: The histograms showing the distribution of space velocities (\\(V\\)) of HCMEs during the ascending (1996-1999) and maximum (2000-2002) phases of solar activity and for the whole considered period (1996-2002). From the histograms, it is evident that velocities of HCMEs increase significantly following the solar activity cycle.
Figure 2: The plane of sky speed versus the corrected (space) speed of HCMEs. The solid line shows the linear fit to data points. The inclination of the linear fit demonstrates that the projection effect increases slightly with the speed of CMEs.
Figure 3: The histograms showing the distribution of width (\\(\\alpha\\)) of HCMEs during the ascending (1996-1999) and maximum phases of solar activity (2000-2002) and for the whole considered period (1996-2002). The average width of HCMEs does not change significantly with solar activity
Figure 4: The histogram showing the distribution of source location (\\(\\gamma\\)) of HCMEs during the ascending (1996-1999) and maximum phases of solar activity (2000-2002) and for the whole considered period (1996-2002). The histograms show that the FHCMEs originate close to the Sun center, with the maximum of the distribution around \\(\\gamma=62^{o}\\). The distribution of source location does not depend on the period of solar activity.
Figure 5: The scatter plots of the sky-plane speed versus \\(Ap\\) and \\(D_{ST}\\) indices. Diamond symbols represent events originating from the western hemisphere and cross symbols represent events originating from the eastern hemisphere. The solid lines are the linear fits to data points associated with eastern events, and the dashed lines are linear fits to data points associated with western events. The dot-dashed vertical lines indicate velocity limits above which HCMEs can cause severe (\\(D_{ST}\\leq-150nT\\)) geomagnetic storms. Upon inspection of this figure, it is clear that the major geomagnetic storms can be generated by slow (speeds \\(\\approx 500km/s\\)) and fast HCMEs originating in the western hemisphere.
Figure 6: The scatter plots of the space velocity (\\(V\\)) versus \\(Ap\\) and \\(D_{ST}\\) indices. Diamond symbols represent events originating from the western hemisphere and cross symbols represent events originating from the eastern hemisphere. The solid lines are the linear fits to data points associated with eastern events and the dashed lines are linear fits to data points associated with western events. The dot-dashed vertical lines indicate velocity limits above which HCMEs can cause significant geomagnetic storms (\\(D_{ST}\\leq-150nT\\)). Upon inspection of this figure, it is clear that only very fast events (\\(V\\geq 1100km/s\\)) originating in the western hemisphere can cause severe geomagnetic storms.
Figure 8: The scatter plots of the source location (\\(\\gamma\\)) versus \\(Ap\\) and \\(D_{ST}\\) indices. Diamond symbols represent events originating from the western hemisphere and cross symbols represent events originating from the eastern hemisphere. The solid lines are the linear fits to the data points associated with eastern events, and the dashed lines are linear fits to data points associated with western events. It is clear that the western events originating close to the disk center (\\(\\gamma\\geq 65^{o}\\)) are more likely to cause the biggest geomagnetic storms.
Figure 7: The histograms showing the distribution of the space velocities (\\(V\\)) of FHCMEs which cause geomagnetic disturbances with the \\(D_{ST}\\) index lower than \\(-25nT\\), \\(-60nT\\) and \\(-100nT\\). These histograms demonstrate that the geoeffectiveness of HCMEs depends on their space velocities and that severe geomagnetic storms with \\(D_{ST}<-100nT\\) can be caused by fast CMEs (with \\(V>700km/s\\)) only.
Figure 10: The histograms showing the distribution of the longitude of HCMEs which cause geomagnetic disturbances with the \\(D_{ST}\\) index lower than \\(-25nT\\), \\(-60nT\\) and \\(-100nT\\). Upon inspection of the histograms, it is clear that the geoeffectiveness of CMEs depends on the longitude of the source location and that severe geomagnetic disturbances (\\(D_{ST}<-100nT\\)) are mostly caused by the western events originating close to the disk center.
Figure 9: The scatter plots of absolute values of longitudes of H-alpha flares associated with HCMEs versus \\(Ap\\) and \\(D_{ST}\\) indices. Diamond symbols represent events originating from the western hemisphere and cross symbols represent events originating from the eastern hemisphere. The solid lines are the linear fits to the data points associated with eastern events, and the dashed lines are linear fits to data points associated with western events. Upon inspection of the figures, it is clear that the western events originating close to the disk center are more likely to cause the biggest geomagnetic storms.
Figure 11: The scatter plots of \\(\\alpha\\) versus \\(Ap\\) and \\(D_{ST}\\) indices. Diamond symbols represent events originating in the western hemisphere and cross symbols represent events originating in the eastern hemisphere. The solid lines are the linear fits to data points associated with eastern events, and the dashed lines are linear fits to data points associated with western events. The geoeffectiveness of CMEs depends very little on their widths.
Figure 12: The contour map presenting \\(D_{ST}\\) index versus the space velocity (\\(V\\)) and the source location (\\(\\gamma\\)). From the inspection of the picture we see that the strongest geomagnetic storms can occur for fast events originating close to the disk center.
Figure 14: The scatter plots of \\(B\\) and \\(B_{Z}\\) versus \\(Ap\\) and \\(D_{ST}\\) indices. The correlation between these parameters and the geomagnetic indices is significant (correlation coefficients are \\(>0.50\\)); the linear and (Spearman) coefficients are approximately equal to \\(0.70(0.60)\\) for both \\(B\\) and \\(B_{Z}\\).
Figure 13: The contour map presenting \\(Ap\\) index versus the space velocity (\\(V\\)) and the source location (\\(\\gamma\\)). From the inspection of the picture we see that the strongest geomagnetic storms can occur for fast events originating close to the disk center.
Figure 16: The histograms showing: the space velocities for non-geoeffective FHCMEs originating close to the disk center (\\(|longitude|<30^{o}\\)), the space velocities for non-geoeffective FHCMEs originating close to the limb (\\(|longitude|>30^{o}\\)), the longitudes for slow non-geoeffective FHCMEs (\\(V<1200km/s\\)) and the longitudes for fast non-geoeffective FHCMEs (\\(V>1200km/s\\)). Upon inspection of the histograms (the first and last panels in the figure) it is clear that all fast HCMEs (\\(V>1200km/s\\)) originating close to the disk center (\\(|longitude|<30^{o}\\)) must be geoeffective.
Figure 15: The histograms showing the distributions of the longitude and space speed of non-geoeffective FHCMEs. The histograms show that these events originate from the whole solar disk and have velocities from \\(100km/s\\) up to \\(2500km/s\\).
Figure 17: The scatter plot of the space velocities versus longitude for all FHCMEs. Diamond symbols represent geoeffective and cross symbols non-geoeffective FHCMEs. The solid lines are linear fits for non-geoeffective events originating from the eastern and western hemispheres. Upon inspection of the figure, it is clear that geoeffective events are faster than the non-geoeffective events originating at the same longitude.
Table 1: The measured and derived parameters of the FHCMEs: date, time, sky-plane speed (km/s), position angle PA (deg), \\(r\\) (\\(R_{\\odot}\\)), source location \\(\\gamma\\) (deg), width \\(\\alpha\\) (deg), space velocity \\(V\\) (km/s), source location, \\(D_{ST}\\) (nT), \\(Ap\\), \\(B\\) (nT) and \\(B_{s}\\) (nT).
Halo coronal mass ejections (HCMEs) originating from regions close to the center of the Sun are likely to be geoeffective. Assuming that the shape of HCMEs is a cone and that they propagate with constant angular widths and velocities, at least in their early phase, we have developed a technique (Michalek et al., 2003) which allows us to obtain the space speed, width and source location. We apply this technique to obtain the parameters of all full HCMEs observed by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) experiment until the end of 2002. Using these data we examine which parameters determine the geoeffectiveness of HCMEs. We show that in the considered period of time only fast halo CMEs (with space velocities higher than \\(\\sim 1000km/s\\) and originating from the western hemisphere close to the solar center) could cause severe geomagnetic storms. We illustrate how the HCME parameters can be used for space weather forecasts. It is also demonstrated that the strength of a geomagnetic storm does not depend on the determined width of HCMEs. This means that HCMEs do not have to be very large to cause major geomagnetic storms.
Frank C. De Lucia, Douglas T. Petkie, and Henry O. Everitt
Manuscript received XXX, 2006. This work was supported by the Army Research Office and the Defense Advanced Projects Research Agency. F. C. De Lucia is with the Dept. of Physics, Ohio State University, Columbus, OH 43210; phone 614-688-4774; fax 614-292-7557; [email protected]. D. T. Petkie is with the Department of Physics, Wright State University, Dayton, OH 45435. H. O. Everitt is with the Army Aviation and Missile Research, Development, and Engineering Center, Redstone Arsenal, AL 35898.
## I Introduction
There has been a long-standing interest in the remote spectroscopic detection and quantification of gases at or near atmospheric pressure. While most of this activity has been in the much more technologically developed infrared, there has been considerable interest in the spectral region variously referred to as the microwave, millimeter wave, submillimeter wave, or terahertz [1-4], a region which we will refer to in this paper as the submillimeter/terahertz (SMM/THz).
Interestingly, the most highly developed and successfully fielded applications have also probed the most remote regions: studies of the interstellar medium [5] and the upper atmosphere [6]. However, there are important applications that involve sensing at, or near, the ambient pressure of the earth's surface, and it is these applications that are addressed by this paper. The principal physical characteristic that separates these terrestrial applications from the highly successful astronomical and upper atmospheric applications is the large linewidths associated with tropospheric pressure broadening. At atmospheric pressure, these linewidths are \\(\\sim 5000\\) MHz, in comparison to their Doppler limited linewidths in this spectral region which are \\(\\sim 1\\) MHz [7]. These large linewidths have a significant negative impact on specificity and, although less generally recognized, on sensitivity as well.
The use of this spectral region and the molecular rotational fingerprint for point detection of gas is highly favorable because the pressure of the sample can be reduced so that the linewidths are reduced to the Doppler limit, \\(\\sim 1\\) MHz, well below the average spacing of spectral lines. These narrow, well-resolved lines provide not only redundant fingerprints of the molecular species, but also important means of separating the molecular signatures from other random and systematic system variations. We have considered this application in some detail and will not consider it further here [8-10].
In this paper, we will consider the issues that make conventional remote sensing at tropospheric pressures _orders of magnitude more challenging_ than the established astrophysical, upper atmosphere, and point gas sensing counterparts. We will then quantitatively discuss a new approach that mitigates these challenges.
## II The Challenges
In this section, we will introduce and briefly discuss the challenges that must be overcome for successful application of the SMM/THz spectral region to the terrestrial remote sensing of gases.
### _The Number of Available Channels_
The available spectral space for spectroscopic remote sensing is range dependent because of atmospheric transmission, but if one counts the number of \\(\\sim 5\\) GHz channels in the atmospheric windows, the number is fewer than \\(100\\). As a result, unless the potential analytes in the atmospheric mixture include only a few light species that have very sparse spectra, there are insufficient information channels for robust, molecule-specific detection.
### Species with Resolvable Lines or Bandheads
While small molecules such as water and ammonia with very strong and sparse spectra are often used in THz demonstrations, most molecules of interest are heavier and have considerably more crowded spectra. In fact, at atmospheric pressure, molecules that are heavier than \\(\\sim 50\\) amu will have even prominent spectral structural features such as band heads washed out by the pressure broadened linewidths.
### Separation of Absorption Signal from Fluctuations
The narrow linewidths associated with Doppler limited spectra provide a convenient signature for the rejection of fluctuations in time and/or frequency imposed by the sensor system. In SMM/THz spectrometers, interference effects notoriously cause frequency-dependent power variations of tens of percent even in well-designed systems. If there is no way to separate this effect from the molecular absorption, then the minimum detectable absorption will also be tens of percent.
Perhaps less broadly appreciated, temporal fluctuations of atmospheric attenuation, caused primarily by temporal and spatial variations of water concentration and temperature along the path, must also be separated from molecular absorption. While the time scale is highly scenario dependent, it is expected to be on the time scale of 1 s, which is comparable to a typical sensor integration time. Since ambient atmospheric SMM/THz attenuation is large, even in the transmission windows, this time-varying clutter can overwhelm weak spectra [11, 12].
### Comparison with the Point Sensor Approach
To understand the impact of these effects, it is useful to compare this atmospheric situation with that of well established SMM spectrometers or point gas sensors based on low-pressure samples. Because electronic sources in this spectral region are inherently very quiet and there is little ambient black body radiation [13], very small fractional absorptions (typically 1 part in 10\\({}^{7}\\) after 1 s of signal averaging) are routinely observed in Doppler-limited spectrometers [8, 9]. This is six orders of magnitude better than the limits discussed above for a SMM/THz absorption system operating in the real atmosphere. It is important at this point to remember that this difference is not due to any fundamental limits (which are scenario independent), but rather due to these _system and scenario_ limitations.
## III A New Methodology
To meet these challenges we discuss here a new methodology to provide: (1) More independent channels and/or kinds of information, and (2) A way to modulate the signal associated with the trace molecular species so that it can be separated from atmospheric clutter effects and noise.
Specifically, we propose an IR pump - SMM/THz probe double resonance technique in which a pulsed CO\\({}_{2}\\) TEA laser modulates molecular SMM/THz emission and absorption. Typical lasers produce pulses of duration 100 nsec or, if mode locked, pulses of duration \\(\\sim\\)100 psec. This modulation is then detected by a bore sighted THz transceiver.
The sensor could be configured to operate either in a general survey mode or optimized for a particular scenario. For the former, the laser would step through its available frequencies, i.e. the \\(\\sim 50\\) available CO\\({}_{2}\\) laser lines, while the probe monitored the SMM/THz response of the target molecular cloud. An attractive choice for the latter would be the \\(\\sim\\)10 independent (pressure broadened limited) channels in the 50 GHz wide atmospheric window centered at 240 GHz, for which broadband frequency multiplier solid-state technology is commercially available. This probe could be stepped through the resolution channels. For the simultaneous observation of multiple probe channels, more elaborate multichannel receiver technology similar to that used by the radio astronomy community is also a choice. For scenario specific implementations, this generality of pump and probe frequencies can be significantly reduced, but for the minimization of false alarms, particular attention would need to be paid to the signatures of potential clutter species.
In many ways, this is an atmospheric pressure version of an Optically Pumped Far Infrared (OPFIR) laser. Moreover, the TEA laser source is broad and tunable, and the target gas lines are broad as well because of the broadening due to the atmospheric pressure. Consequently, pump coincidence is much more easily achieved than for low pressure OPFIR lasers, and the proposed technique should be quite general for any molecule with IR absorption in the 9-11 micron region.
At a fundamental level, the technique depends on the rapid molecular collisional relaxation time, which at atmospheric pressure and temperature is \\(\\sim 100\\) psec and only weakly depends on molecular mass and dipole moment. Thus, the short 100 psec laser pulses and rapid collisional relaxation modulate the molecular THz emission or absorption on a time scale much faster than the 1 s temporal atmospheric fluctuations, making it straightforward to separate the molecular signal from signals due to clutter. Furthermore, the time-modulated spectra exhibit a molecule-unique pattern of enhanced and reduced absorption that can be resolved from the atmospheric baseline, as will be discussed below.
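The essence of this time-scale separation can be illustrated numerically. The following sketch, with assumed sampling and clutter parameters, gates a synthetic 100 psec molecular response against a slowly varying baseline; the gated difference recovers the modulation while the \\(\\sim\\)1 s clutter cancels.

```python
# Sketch of the time-scale separation: a ~100 ps collisional response,
# repeated at the micropulse rate, is recovered by gating on the known
# pulse times while ~1 s atmospheric clutter is effectively frozen.
# All numbers here are illustrative assumptions.
import numpy as np

fs = 1e11                              # assumed 10 ps sampling
t = np.arange(0, 200e-9, 1 / fs)       # one 200 ns record
tau = 100e-12                          # relaxation time at 1 atm
t0s = np.arange(10e-9, 200e-9, 10e-9)  # 10 ns micropulse spacing

mol = sum(np.exp(-np.clip(t - t0, 0, None) / tau) * (t >= t0) for t0 in t0s)
clutter = 0.5 * (1 + np.sin(2 * np.pi * 1.0 * t))  # ~1 Hz water clutter
trace = clutter + 1e-2 * mol                       # ~1% molecular modulation

# Synchronous gating: (post-pulse window) minus (pre-pulse baseline);
# the slow clutter cancels, the fast molecular response survives.
w = 30                                             # 300 ps gate
idx = np.rint(t0s * fs).astype(int)
est = np.mean([trace[i:i + w].mean() - trace[i - w:i].mean() for i in idx])
print(f"recovered modulation ~ {est:.1e} (clutter-free)")
```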
### A specific spectroscopic example \\({}^{13}\\)CH\\({}_{3}\\)F
Consider as an example the application of this technique to trace amounts of \\({}^{13}\\)CH\\({}_{3}\\)F in the atmosphere. Because \\({}^{13}\\)CH\\({}_{3}\\)F is a well known OPFIR laser medium, it is well studied and the parameters for a quantitative analysis are available [14-16]. However, we will show below that the proposed methodology is not particularly dependent upon its favorable OPFIR laser properties. Fig. 1 shows both the energy levels involved and the impact of the pump laser on the spectral signature. The 9P(32) line of the TEA CO\\({}_{2}\\) laser excites the R-branch (\\(\\Delta J=+1\\)) transition from the heavily populated \\(J=4\\) level of the ground (v = 0) vibrational state of \\({}^{13}\\)CH\\({}_{3}\\)F to the nearly empty \\(J=5\\) level of the first excited (C-F stretch) vibrational state (v = 1). Specific rotational transitions in both the ground and excited vibrational state are population modulated by the TEA laser pump, which can be observed as modulated SMM/THz molecular emission and absorption signals.
Specifically, the top right trace of Fig. 1 shows that excitation by a TEA CO\\({}_{2}\\) laser can result in enhanced absorption or emission on the \\(J\\) = 4 - 5 transitions near 0.25 THz for molecules in both the \\(v\\) = 0 and 1 vibrational states. For each \\(J\\) = 4 - 5 transition, the large pump and atmospheric (\\(\\sim\\) 5 GHz) linewidths cause _all five_ of the \\(K\\) states (the spacing from \\(K\\) = 0 to \\(K\\) = 4 is 68 MHz) to be pumped simultaneously, adding to the strength of the SMM/THz signal. Furthermore, because the \\(J\\) = 4 - 5 frequencies in one vibrational state differ from those in the other vibrational state by \\(\\sim\\) 3 GHz, less than the atmospherically pressure-broadened linewidth, these two composite THz signals overlap and add as well. Most importantly, the analyte signatures can easily be separated from atmospheric clutter because the millisecond or longer atmospheric fluctuations are effectively frozen on the nanosecond analytic modulation time scale.
Molecule specificity arises because the molecular fingerprint requires both a coincidence between the laser pump wavelength and an IR molecular absorption _and_ a modulated THz signal at specific frequencies: in this case the emission at 250 GHz and the enhanced absorptions due to the \\(J\\) = 5 - 6, \\(v\\) = 1 transition near 300 GHz and the \\(J\\) = 3 - 4, \\(v\\) = 0 transition near 200 GHz. This tripartite signature will last for the relaxation time of the atmosphere (100 psec), which may also contain a temporal signature. In the case of methyl fluoride, each feature will be easily resolved because the spectral width of \\(\\sim\\)5 GHz is smaller than the \\(2B\\) = 50 GHz spacing of the features.
Now consider a much heavier species, with smaller rotational constants. While the rotational selection rules and spectroscopy of molecules, especially asymmetric rotors, are both complex and molecule specific, some general observations can be made. Because each asymmetric rotor has limiting prolate and oblate symmetric top bases, the general character (the density, strength, and general location in frequency) of its spectrum can be established by consideration of their symmetric top limits. Because, in the transition from the symmetric top basis to the asymmetric top basis, the dipole moment can in general lie along any of the principal axes of inertia, the transition frequencies of strong lines are separated by \\(\\sim\\)2\\(R\\) (where \\(R\\) is _any_ one of the rotational constants \\(A\\), \\(B\\), or \\(C\\)).
If, as in the case of CH\\({}_{3}\\)F, the appropriate \\(R\\) is \\(B\\) - with 2\\(B\\) = 50 GHz (considerably larger than the atmospheric pressure broadening width of \\(\\sim\\) 5 GHz), the full tripartite modulation shown in Fig. 1 is realized. However, as the offset between the emission and absorption features of the signature approaches the pressure broadened linewidth, there is an overlap and the net modulation efficiency is reduced as shown in Fig. 2. Figure 3 shows this quantitatively. Because molecular size affects both vapor pressure and the size of rotational constants, there is a correlation between molecules that have vapor pressure and those for which the modulation efficiency is reasonably favorable. To within wide spectroscopic and vapor pressure variability, this gas phase limit might occur when the emission/absorption offset is \\(\\sim\\) 2.5 GHz \\(\\sim\\) one half of the pressure broadened linewidth, for which the modulation efficiency is only reduced by a factor of \\(\\sim\\)5.
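A two-line toy model conveys the origin of this loss. In the sketch below the emission and enhanced-absorption features are represented as opposite-sign Lorentzians of assumed 5 GHz FWHM separated by \\(2B\\); this is not the full tripartite calculation behind Fig. 3, so the numbers only indicate the trend.

```python
# Toy overlap model: opposite-sign Lorentzians separated by ~2B cancel
# as the separation shrinks below the pressure-broadened linewidth.
import numpy as np

def net_modulation(sep_ghz, fwhm_ghz=5.0):
    hwhm = fwhm_ghz / 2.0
    nu = np.linspace(-60.0, 60.0, 24001)
    lor = lambda c: 1.0 / (1.0 + ((nu - c) / hwhm) ** 2)
    return np.abs(lor(-sep_ghz / 2) - lor(sep_ghz / 2)).max()

for sep in (50.0, 10.0, 5.0, 2.5, 1.0):   # 2B = 50 GHz is the CH3F case
    print(f"2B = {sep:4.1f} GHz -> net modulation {net_modulation(sep):.2f}")
```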
Additionally, we will show in Section IV.A that as the rotational constants are reduced, molecular absorptions averaged over atmospheric pressure broadening linewidths will actually increase, thereby at least partially compensating for possible loss in modulation efficiency.
### _Requirements for excitation laser_
To assess the feasibility of this technique, let us consider quantitatively some of the required CO\\({}_{2}\\) TEA laser characteristics. In terms of energy these range from relatively small laboratory systems of energy \\(\\sim\\)0.1 J/macropulse to more complex, but 'compact' systems which produce \\(\\sim\\)100 J/macropulse [17]. The pulse structure of these systems also has a wide range, including the native macropulse duration of \\(\\sim\\) 100 ns, the generation of multigigawatt pulse trains with micropulse widths of the order of 1 ns [18], passively mode-locked micropulses of duration \\(\\sim\\)150 psec [19], the production of terawatt micropulses of duration 160 psec [17], and the amplification and generation of micropulses whose duration was less than 1 psec [20].
Efficient pulse train production methods approximately conserve energy [18, 21]. Thus, for the purposes of this discussion we will assume that an appropriate modelock, either in a single oscillator or in a master oscillator - slave configuration, will be used to convert a 100 ns macropulse into a train of 10 micropulses, each of 100 psec duration and separated by 10 ns. For this pulse sequence, the peak power of each micropulse will range from \\(10^{8}\\) W for the small 0.1 J laboratory system to \\(10^{11}\\) W for the 100 J system.
The pump intensity must be high enough that the Rabi frequency is comparable to the atmospheric relaxation rate so that significant population transfer can take place from the rotational level in the ground vibrational state to the pumped rotational level in the upper vibrational state. In MKS units, the Rabi frequency is [22]
\\[\\omega_{\\rm R}=\\frac{\\mu E}{\\hbar}=8.75\\times 10^{4}\\sqrt{I\\,[{\\rm W/m^{2}}]}\\ {\\rm rad/s}\\quad({\\rm for}\\ \\mu=0.1\\ {\\rm D}). \\tag{1}\\]
where \\(E\\) is the electric field and \\(I\\) the laser intensity in units of Watts/m\\({}^{2}\\). Assume an IR molecular transition whose dipole moment \\(\\mu\\) = 0.1 D, typical for \\({}^{13}\\)CH\\({}_{3}\\)F and other molecules. Then if \\(10^{9}\\) W from a 1 Joule system is spread over 10 cm\\({}^{2}\\), \\(\\omega_{\\rm R}\\)\\(\\sim\\) 87 GHz, and a \\(\\pi\\) excitation pulse \\(\\tau_{\\rm x}=\\pi/\\omega_{\\rm R}\\) would last \\(\\sim\\) 35 psec. If the power from the 100 J system were spread over a 1 m\\({}^{2}\\) cross section, the corresponding Rabi frequency and \\(\\pi\\)-pulse length would be 28 GHz and 110 psec, respectively.
To obtain the collisional relaxation rate, we can assume that the dominant atmospheric collision partners for the trace gas in question are nitrogen and oxygen. For these gases, the rotational relaxation rates will be comparable to the pressure broadening rate of \\(\\sim\\) 3 MHz/Torr HWHM. This corresponds to a linewidth of \\(\\Delta\\nu=5\\) GHz and a mean time between collisions (\\(\\tau_{\\rm c}=1/\\pi\\Delta\\nu\\)) of 60 psec. Thus, the mean collision time is comparable to the \\(\\pi\\) pulse lengths estimated above.
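These estimates are easily checked. The sketch below evaluates Eq. (1) for an assumed 0.1 D dipole moment and the two laser systems discussed above; small differences from the quoted values reflect rounding of the physical constants.

```python
# Numerical check of Eq. (1) and the collision time quoted above,
# assuming an IR transition dipole moment of 0.1 D.
import numpy as np

hbar, debye = 1.0546e-34, 3.336e-30        # J s, C m
eps0, c = 8.854e-12, 2.998e8

def rabi_rad_per_s(I_W_m2, mu_D=0.1):
    E = np.sqrt(2.0 * I_W_m2 / (c * eps0))  # peak optical field [V/m]
    return mu_D * debye * E / hbar

for label, P_W, area_m2 in [("1 J system", 1e9, 1e-3),
                            ("100 J system", 1e11, 1.0)]:
    wR = rabi_rad_per_s(P_W / area_m2)
    print(f"{label}: omega_R ~ {wR/1e9:.0f} Grad/s, "
          f"pi-pulse ~ {np.pi/wR*1e12:.0f} ps")

dnu = 5e9                                   # ~5 GHz atmospheric linewidth
print(f"mean collision time 1/(pi*dnu) ~ {1/(np.pi*dnu)*1e12:.0f} ps")
```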
### _Sensitivity_
In the microwave (and SMM/THz as well) spectral region under equilibrium conditions there is a significant population not only in the lower state of the transition, but in the upper state as well. Accordingly, the usual calculation of molecular absorption strength includes not only the absorption of molecules that are promoted from the lower state of the transition to the upper state, but also the largely canceling effect of the emission from molecules that make a transition from the upper state to the lower state.
Quantitatively, in the usual equilibrium absorption coefficient, this emission from the upper state cancels all but \\((1-\\exp(-h\\nu/kT))\\approx h\\nu/kT\\) of the absorption from the lower state [7]. However, in our non-equilibrium double resonance case, the \\(\\pi\\)-pulse pump only places population in one state and the emissions/absorptions are stronger by the inverse of this factor - about 20 at 300 GHz.
In a pure sample, the absorption coefficient for one of the \\(K\\) components of the \\(J\\) = 4 - 5 transition of the CH\\({}_{3}\\)F rotational transitions near 300 GHz is \\(\\sim 10^{-2}\\) cm\\({}^{-1}\\) [7]. Because the pressure broadening of CH\\({}_{3}\\)F in oxygen or nitrogen is only about 1/5 of its self broadening (15 MHz/Torr), its peak absorption coefficient in the atmosphere is about 5 times larger. Thus, a sample dilution of \\(10^{6}\\) (1 ppm) over a 100 m (\\(10^{4}\\) cm) path would yield a _modulated_ signal absorption
\\[\\alpha_{\\rm pumped}\\,l=\\frac{\\dfrac{\\Delta\\nu_{\\rm CH_{3}F}}{\\Delta\\nu_{\\rm air}}\\times\\alpha_{\\rm CH_{3}F}\\times{\\rm path}\\times\\dfrac{kT}{h\\nu}}{\\rm dilution}=\\frac{(5)(10^{-2}\\,{\\rm cm^{-1}})(10^{4}\\,{\\rm cm})(20)}{10^{6}}=10^{-2}\\, \\tag{2}\\]
a very large signal relative to the noise sensitivity of a THz probe system. However, it is not large in comparison to the atmospheric clutter variations or to the 'absolute calibration' of the THz probe system. Thus, it is enormously more advantageous to lock onto this modulation signature than to try to deconvolute this absorption from the very large absorptions \\(\\alpha l\\sim 1\\) (primarily due to water vapor) which are fluctuating in the atmosphere on a time scale of 1 s.
_This is one of the three main points of this paper_.
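For reference, the arithmetic of Eq. (2), with the pure-gas peak coefficient and dilution as reconstructed above, is:

```python
# Step-by-step evaluation of Eq. (2) for the 1 ppm, 100 m CH3F example.
linewidth_ratio = 5.0    # self- vs air-broadening of CH3F
alpha_pure = 1e-2        # cm^-1, pure-gas peak of the pumped line
path_cm = 1e4            # 100 m cloud
kT_over_hnu = 20.0       # non-equilibrium enhancement at 300 GHz
dilution = 1e6           # 1 ppm

alpha_l = linewidth_ratio * alpha_pure * path_cm * kT_over_hnu / dilution
print(f"modulated absorption alpha*l ~ {alpha_l:.0e}")   # -> 1e-02
```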
### _Uniqueness of Signature_
One of the problems with infrared (IR) based remote detection systems is that the molecular absorption contours, which arise from the unresolved rotational structure, aren't very molecule specific. The same would be true for an ordinary THz sensing system because both would have the same pressure broadened linewidth of \\(\\sim\\) 5 GHz. In fact, a widely tunable high-resolution infrared system (e. g. a _large_ FTIR) would have the advantage of looking at several different vibrational bands. However, the methodology that we present here provides a 3-D signature space that is much more molecule specific. The axes of this signature space are:
1. The frequency of the THz signature.
2. The frequency of the IR pump.
3. The time scale of the relaxation connecting 1. and 2.
An example of a 3-D specificity matrix is shown in Fig. 4.
The probe axis is the SMM/THz frequency. Its number of SMM/THz resolution elements will be limited to the number of 5 GHz wide channels that can be accommodated by whatever atmospheric transmission windows are available at the range of the scenario. This would be the number of channels available in a more conventional SMM/THz remote sensing system, increased by a factor that is determined by the gain attributable to the modulation against clutter and the \\(kT/h\\nu\\sim 20\\) enhancement that results from the action of the infrared pump.
The pump axis would have the same number of resolution elements as a similar purely infrared remote sensing system. This number would be determined by the overlap between the pump and the infrared vibrational-rotation transitions. For both the infrared and the double resonance system described here, the tunability of the TEA laser is therefore important.
The third axis, the temporal relaxation, is more difficult to characterize because it represents an experimental regime that has not been explored: the rotation-vibration relaxation of the analytes in air at high pressure. The closest work on this subject has been on low pressure gases in the context of optically pumped infrared lasers [16]. The primary decay observed by the probe will be most closely related to a state's total depopulation rate, which in turn is a sum over many constituent rates. Therefore, we expect that there will be some signature associated with this axis but that it will not represent as many independent points as the frequency axes. Typically, a simple exponential decay will be observed, whose characteristic time can be measured to \\(\\pm\\)10% but may only vary over about a factor of two from state to state. Weaker signatures may be considerably more specific.
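If this axis is exploited, the processing step is a routine decay fit. The sketch below, on synthetic data with an assumed 100 psec relaxation time, extracts the characteristic time that would serve as the temporal signature coordinate.

```python
# Fit a single-exponential relaxation to a synthetic gated probe response
# and keep tau as a signature coordinate (assumed true tau = 100 ps).
import numpy as np
from scipy.optimize import curve_fit

t_ps = np.linspace(0.0, 500.0, 200)
rng = np.random.default_rng(3)
y = np.exp(-t_ps / 100.0) + rng.normal(0.0, 0.05, t_ps.size)

model = lambda t, A, tau: A * np.exp(-t / tau)
(A, tau), _ = curve_fit(model, t_ps, y, p0=(1.0, 50.0))
print(f"fitted tau = {tau:.0f} ps (true value 100 ps)")
```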
_The much larger number of signature points in this 3-D matrix, as opposed to traditional spectroscopy which considers only the number on one of its 1-D axes, is the second of the three main points of this paper._
### _Remote sensing_
In remote sensing applications, the performance of the system described here will be a complex function of system parameters and scenario. Here we will discuss two specific examples, based on the 250 GHz CH\\({}_{3}\\)F signature shown in Fig. 1, to provide baselines, as well as some of the derivatives from these baselines to provide a broader quantitative sense.
In order to explore the edge of this performance, we will consider two limits of a challenging scenario, the detection with good specificity of a 100 m cloud 1 km away, with a gas dilution of 1 ppm. One km is probably near the maximum range imposed by atmospheric SMM/THz propagation at 250 GHz.
We will assume a SMM/THz probe which provides a diffraction limited \\(\\sim\\) 1 m diameter beam at 1 km with a 1 meter antenna. This beam width corresponds to the example of the 100 J laser discussed in section III.B. In that example, when the TEA laser beam was expanded to fill a 1 m cross section, the resultant \\(\\pi\\)-pulse length was longer than the relaxation time by about a factor of two, which would reduce the pump efficiency also by about a factor of two.
Alternatively, one could reduce the pump beam diameter relative to the probe beam to make the \\(\\pi\\)-pulse lengths and the atmospheric relaxation times match, but at a similar cost in filling factor.
We also need to make a SMM/THz scenario assumption. Path geometries might include a background retro-reflector, direct transmission through the volume of interest, or a background with diffusive scattering.
For the purposes of baseline discussions, we will assume a cw 1 mW SMM/THz active illuminator for the retro-reflector/direct transmission scenario and a 60 W pulsed extended interaction oscillator (EIO) for the diffuse scatter scenario. More than 1 mW can currently be produced by solid-state sources, and significant development programs are currently underway that have targets for compact sources at least two orders of magnitude higher. The EIO is similar to those used for radar experiments in the SMM/THz, which in many cases also have to depend upon backscatter for their signals.
1. For the most optimistic direct retro-reflector geometry, if we assume a 10 dB two-way atmospheric propagation loss at 250 GHz, the signal returned to the receiver will include a carrier of \\(\\sim 10^{-4}\\) W overlaid by whatever modulation arises from the molecular interaction. The 1% absorption calculated in Eq. 2 needs to be modified by a factor of two, because of the two way path, and by another factor of five because the K = 0, 1, 2, 3, and 4 components of the J = 4 - 5 transition will be degenerate at atmospheric pressure. Together these provide a 10% modulation and an absorbed power of \\(10^{-5}\\) W. With the factor of two that resulted from the under pumping/filling factor effect from above, this reduces the absorbed power to \\(\\sim 5\\times 10^{-6}\\) W.
A heterodyne receiver has a noise power of \\(P_{N}=kT_{N}(Bb)^{1/2}\\), where \\(T_{N}\\) is the noise temperature of the SMM/THz receiver, \\(b\\) the IF bandwidth, and \\(B\\) the post detection bandwidth. For \\(T_{N}=3000\\) K and \\(b=B=10^{10}\\) Hz (the bandwidth required for the mode-locked case), \\(P_{N}=5\\times 10^{-10}\\) W.
However, since we are seeking to detect a small change in power in the returned signal (the carrier), we must also consider the noise \\(P_{n}^{\\prime}\\) associated with the mixing of the blackbody noise with the carrier of the returned signal [24]. For this case
\\[P_{n}^{\\prime}\\sim\\sqrt{kT\\Delta\\nu\\,P_{c}}=\\sqrt{(5\\times 10^{-11})(10^{-4})}\\sim 7\\times 10^{-8}\\,{\\rm W}, \\tag{3}\\]
where \\(P_{c}\\) is the returned carrier power.
As above, the absorption provides a signal level of \\(5\\times 10^{-6}\\) W, about 70 times \\(P_{n}^{\\prime}\\). However, in the mode-lock process, each macropulse of the TEA laser produces \\(\\sim\\)10 micropulses, so with a 10 Hz TEA laser, in one second of integration we have a S/N of 700.
2. At 1 km, the diffuse backscatter limit is less favorable, but of considerable importance because of the wider range of scenarios that it can encompass. For diffusive backscattering at one kilometer, only \\(10^{-6}\\) of the SMM/THz probe will be backscattered into the 1 m receiver antenna, and with a 10% reflection efficiency, the probe signal is further reduced to \\(\\sim 10^{-7}\\) relative to the direct transmission and retro-reflector geometries. Even with the EIO source, the received carrier is reduced to \\(6\\times 10^{-7}\\) W, and the absorbed power is now \\(6\\times 10^{-8}\\) W. However, the mixing noise of Eq. 3 is also reduced, in this example to \\(\\sim 5\\times 10^{-9}\\) W. This provides for a single 100 ps pulse a S/N \\(\\sim\\) 12, or for one second of integration a S/N \\(\\sim\\)120.
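The two baseline link budgets can be collected into a single calculation. The sketch below follows the noise formulas of this subsection; the printed values reproduce the quoted S/N figures to within the rounding used in the text.

```python
# Consolidated link budget for the two baseline scenarios: receiver noise
# kT_N*(Bb)^1/2 and carrier-blackbody mixing noise sqrt(kT*dnu*P_c).
import numpy as np

k, T, T_N = 1.38e-23, 300.0, 3000.0
B = 1e10                                   # Hz, with b = B for 100 ps pulses
P_rx = k * T_N * np.sqrt(B * B)            # receiver noise, ~5e-10 W

def snr(P_carrier, mod_depth, pulses=100): # 10 Hz x 10 micropulses in 1 s
    P_mix = np.sqrt(k * T * B * P_carrier)
    return mod_depth * P_carrier / max(P_mix, P_rx) * np.sqrt(pulses)

# retro-reflector: 1 mW source, 10 dB two-way loss -> 1e-4 W carrier;
# 10% modulation halved by the under-pumping/filling-factor penalty
print(f"retro-reflector: S/N ~ {snr(1e-4, 0.05):.0f}")   # text quotes ~700
# diffuse scatter: 60 W EIO, 1e-7 return, 10 dB loss -> 6e-7 W carrier
print(f"diffuse scatter: S/N ~ {snr(6e-7, 0.10):.0f}")   # text quotes ~120
```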
### _The trade space_
A general purpose implementation which can probe a number of SMM/THz frequencies simultaneously, so as to take full advantage of the specificity matrix shown in Fig. 4, would be complex, with the required array of EIOs - or would await the success of one of many THz technology initiatives.
Accordingly, we would like briefly to discuss the trade spaces involved to understand this double resonance approach in the context of current and projected SMM/THz technology.
1. If rather than a S/N of 120 in the diffuse scattering scenario, a S/N of 2 is acceptable, considerably less SMM/THz probe power is needed. Since this reduction in probe carrier power takes us near to the receiver noise limit rather than the mixing noise limit, we can do the simpler calculation based on the receiver noise of 5 x 10\\({}^{-10}\\) W. Then the 10\\({}^{-9}\\) W absorption required at the receiver for the S/N of 2 must be increased by 10\\({}^{7}\\) to account for the scattering loss, 10\\({}^{1}\\) to account for the atmospheric loss, and 10 to account for the 10% absorption. This results in a required SMM/THz probe power of 1 W. For 1 second of integration time, this is reduced to 0.1 W.
2. In many scenarios, more than 1 second is available to make the observation. In receiver noise limited scenarios, 100 seconds would reduce the required SMM/THz power by 10; in mixing noise limited scenarios by a factor of 100.
3. At 100 m, for diffuse back scattering scenarios, the required SMM/THz power is reduced by a factor of 100, because the return signal from diffuse backscattering decreases with the square of the distance.
4. Finally, there may be objects in the background that provide signal returns larger than that of a diffusive reflection. Indeed, it is this effect that results in the well known large dynamic range in active images that combine both 'glint' and diffuse returns.
Collectively, these provide a large and attractive design space. While the last of these items is not quantifiable, the first three taken together reduce the required SMM/THz power by \\(\\sim 10^{5}\\) - \\(10^{6}\\) from the 60 W EIO limit to levels well below 1 mW. Any SMM/THz power available above this could be traded for shorter integration, higher S/N, longer range, and unaccounted for system losses.
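The scaling in items 1-3 is summarized numerically below; the loss factors are those quoted in item 1.

```python
# Trade-space arithmetic: probe power for S/N = 2 in the receiver-noise-
# limited diffuse-scatter case, rescaled for integration time and range.
P_rx = 5e-10                    # W, receiver noise from the text
absorbed_needed = 2 * P_rx      # ~1e-9 W at the receiver for S/N of 2

# undo: 1e7 scattering loss, 10x atmospheric loss, 10% modulation depth
P_probe = absorbed_needed * 1e7 * 10 * 10
print(f"single-pulse probe power ~ {P_probe:.0f} W")         # ~1 W
print(f"1 s integration (100 pulses): {P_probe/10:.2f} W")   # ~0.1 W
print(f"at 100 m range (x100 less loss): {P_probe/10/100*1e3:.0f} mW")
```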
## IV How special is CH\\({}_{3}\\)F? What about larger molecules?
Clearly CH\\({}_{3}\\)F is not a large molecule. What happens as we go toward many of the species of interest? In this section, we will discuss the third important point of this paper:
_Because of the spectral overlap at high pressure, many of the rotational partition function problems associated with large molecules are either reduced or in some cases even turned into advantages._
### Rotational Considerations
For simplicity, we will consider symmetric-top molecules. While most of the species of interest are asymmetric rotors, their spectral density and intensity are similar to those of symmetric tops. The main difference - the complexity of the high-resolution asymmetric rotor spectrum - is not important at the low resolution of atmospheric spectra.
Although one might expect the larger rotational partition function (which reduces the strength of each _individual_ spectral line) to degrade the proposed scheme for larger molecules significantly, it does not, because at a given frequency the increased partition function is largely cancelled by the larger number of \\(K\\) levels that exist within the pressure broadened linewidth. This is because there are \\(2J+1\\) \\(K\\) levels for each \\(J\\) and, on average, the value of \\(J\\) in a spectrum at a chosen frequency is _inversely_ proportional to the rotational constants. Townes and Schawlow [24] calculate this summed coefficient to be
\\[\\alpha_{\\rm sum}=\\frac{2\\pi h^{2}Nf_{v}}{9c(kT)^{2}B}\\sqrt{\\frac{\\pi Ch}{kT}}\\,\\mu^{2}\\left(\\frac{4J+3}{J+2}\\right)\\frac{\\nu_{0}^{2}\\Delta\\nu}{(J+1)^{2}}\\, \\tag{4}\\]
where \\(N\\) is the number density of the molecules, \\(h\\) is Planck's constant, \\(f_{v}\\) is the fraction in the vibrational state of interest (see the next section), \\(B\\) and \\(C\\) are rotational constants inversely proportional to the moments of inertia, \\(\\mu\\) the dipole matrix element, \\(\\Delta\\nu\\) the linewidth, \\(\\nu_{0}\\) the transition frequency, and \\(T\\) the temperature.
This relation shows that at worst the overlapped pressure broadened absorption is a slow function of frequency and actually would appear to increase for larger molecules with smaller rotational constants.
### _Vibrational Considerations_
This is more difficult to evaluate in general and will depend on the details of the particular species.
a. For this scheme to work, there must be a vibrational band within the _tunable_ range (\\(\\sim\\)9-11 \\(\\upmu\\)m) of the TEA laser. For large molecules with many vibrational modes, it is highly probable that there will be vibrational bands in this range. These bands may be weaker or stronger absorbers (more will be weaker than stronger, but to some extent it will be possible to choose among several) and the required pump power will scale accordingly.
b. Especially in large molecules, there may be many low-lying torsional or bending modes. If some number of these lie at or below \\(kT\\), \\(f_{v}\\) will be reduced and absorptions such as calculated in Eq. (4) will be reduced accordingly. However, because of the pressure broadening (which is probably greater than the vibrational changes in the rotational spectra) this won't matter too much. Molecules will be pumped out of whatever low-lying mode they are in and promoted \\(\\sim\\)1000 cm\\({}^{-1}\\) to a corresponding combination band. All of these smaller absorptions will overlap and the result described by Eq. (4), with \\(f_{v}=1\\), will be restored by the sum.
## V Discussion
Above we have discussed quantitatively a proposed methodology for remote sensing of trace gases at ambient atmospheric pressure. Here we discuss some of its scientific and technological unknowns in the context of its overall prognosis.
A. While the underlying double resonance physics of systems at low pressures typical of optically pumped lasers have been extensively studied [14-16, 25], atmospheric pressure is 4 to 5 orders of magnitude higher. Other than accelerated collisional processes and correspondingly fast pump/probe schemes, there are no first order modifications of the low-pressure behavior that would result from this higher pressure.
B. As an example of a potential obfuscating effect, non-linearities or other higher order effects might cause the TEA laser to modulate the IR and SMM/THz transmission of the atmosphere's ambient constituents. For example, although the water absorption bands in the atmosphere are far removed in frequency from the CO\\({}_{2}\\) pump, water is abundant and the bands are strong.
C. Many of the molecules of interest are large (\\(>100\\) amu). Because these species were inappropriate for low pressure OPFIRs, their spectroscopy and collision dynamics have been little studied. However, as noted above, they may be relatively much more advantageous for this atmospheric pressure application than in ordinary OPFIR lasers. While the calculations above appear favorable, this is an unknown spectroscopic frontier that would be worthy of early study in any pursuit of this approach.
D. The proposed double resonance scheme may also be useful with an infrared probe. The fundamental power of the proposed scheme lies primarily in the use of the TEA laser pump to modulate the atmosphere on a time scale related to its relaxation. The advantageous role of spectral overlap discussed in section IV is independent of the probe. At long range, an infrared probe would allow a much smaller probe beam diameter and significantly reduce the TEA laser power requirements. On the other hand, the SMM/THz is quieter, and it is possible to build room temperature receivers whose sensitivity is within an order of magnitude of even the limits set by these low noise levels.
## VI Conclusions
The use of the rotational signature of molecules for remote sensing applications is a problem that has received considerable attention and thought. In this paper, we have considered the impact of the fundamental difference, pressure broadening, between this atmospheric pressure application and highly successful low-pressure applications in astrophysics, atmospheric science, and the laboratory. We have used this analysis to put forth a double resonance methodology with three important attributes relative to other proposed and implemented techniques:
1. The time resolved pump makes it efficient to separate signal from atmospheric and system clutter, thereby gaining a very large factor in sensitivity. This is very important and not often discussed.
2. The 3-D information matrix (pump laser frequency, probe frequency, and time resolved molecular relaxation) can provide orders of magnitude greater specificity than a sensor that uses only one of these three.
3. The congested and relatively weak spectra associated with large molecules can actually be a positive because the usually negative impact of overlapping spectra can be used to increase signal strength.
## References
* [1] N. Gopalsami and A. C. Raptis, \"Millimeter Wave Sensing of Airborne Chemicals,\" IEEE Trans. Microwave Theory Tech., vol. MTT-49, pp. 646-653, 2001.
* [2] N. Gopalsami and A. C. Raptis, \"Millimeter-wave Imaging of Thermal and Chemical Signatures,\" Proc SPIE Passive Millimeter-Wave Imaging Technology III, vol. 3703, pp. 130-138, 1999.
* [3] N. A. Salmon and R. Appleby, \"Carbon monoxide detection using passive and active millimetre wave radiometry,\" Proc. SPIE Passive Millimeter-Wave Imaging Technology IV, vol. 4032, pp. 119-122, 2000.
* [4] S. Szlazak, S. Y. Yam, D. Mastrorovic, H. Hansen, and D. Abbott, "Remote Gas Detection Using Millimeter-Wave Spectroscopy for Counter Bio-Terrorism," Proc. SPIE, vol. 4937, pp. 73-83, 2002.
* [5] E. Herbst, \"Chemistry in the Interstellar Medium,\" Ann. Rev. Phys. Chem., vol. 46, pp. 27-53, 1995.
* [6] J. W. Waters, W. G. Read, L. Froidevaux, R. F. Jarnot, R. E. Cofield, D. A. Flower, G. K. Lau, H. M. Pickett, M. L. Santee, D. L. Wu, M. A. Boyles, J. R. Burke, R. R. Lay, M. S. Loo, N. J. Livesey, T. A. Lungu, G. L. Manney, L. L. Nakamura, V. S. Perun, B. P. Ridenoure, Z. Shippony, P. H. Siegel, R. P. Thurstans, R. S. Harwood, H. C. Pumphrey, and M. J. Filipiak, "The UARS and EOS Microwave Limb Sounder Experiments," J. Atmos. Sci., vol. 56, pp. 194-218, 1999.
* [7] W. Gordy and R. L. Cook, Microwave Molecular Spectra, vol. 18, Third ed. New York: John Wiley & Sons, 1984.
* [8] S. Albert, D. T. Petkie, R. P. A. Bettens, S. P. Belov, and F. C. De Lucia, "FASSST: A New Gas-Phase Analytical Tool," Anal. Chem., vol. 70, pp. 719A-727A, 1998.
* [9] I. R. Medvedev, M. Behnke, and F. C. De Lucia, \"Fast analysis of gases in the submillimeter/terahertz with \"absolute\" specificity,\" Appl. Phys. Lett., vol. 86, pp. 1, 2005.
* [10] F. C. De Lucia and D. T. Petkie, \"THz gas sensing with submillimeter techniques,\" Proc. SPIE, vol. 5790, pp. 44-53, 2005.
* [11] F. C. De Lucia and D. T. Petkie, \"The physics and chemistry of THz sensors and imagers: Long-standing applications, new opportunities, and pitfalls,\" Proc. SPIE, vol. 5989, pp. 59891, 2005.
* [12] F. C. De Lucia, D. T. Petkie, R. K. Shelton, S. L. Westcott, and B. N. Strecker, \"THz + X\": a search for new approaches to significant problems,\" Proc. SPIE, vol. 5790, pp. 219-230, 2005.
* [13] F. C. De Lucia, \"Noise, detectors, and submillimeter-terahertz system performance in nonambient environments,\" J. Opt. Soc. Am., vol. B21, pp. 1275, 2004.
* [14] W. H. Mattson and F. C. De Lucia, "Millimeter Wave Spectroscopic Studies of Collision-induced Energy Transfer Processes in the \({}^{13}\)CH\({}_{3}\)F Laser," IEEE J. of Quan. Electron., vol. 19, pp. 1284-1293, 1983.
* [15] R. I. McCormick, H. O. Everitt, F. C. De Lucia, and D. D. Skatrud, \"Collisional Energy Transfer in Optically Pumped Far-Infrared Lasers,\" IEEE J. of Quantum Electron., vol. 23, pp. 2069-2077, 1987.
* [16] H. O. Everitt and F. C. De Lucia, "Rotational Energy Transfer in Small Polyatomic Molecules," in Advances in Atomic and Molecular Physics, vol. 35, B. Bederson and H. Walther, Eds. San Diego: Academic, 1995, pp. 331-400.
* [17] S. Y. Tochitsky, R. Narang, C. Filip, C. E. Clayton, K. A. Marsh, and C. Joshi, "Generation of 160-ps terawatt-power CO\({}_{2}\) laser pulses," Opt. Lett., vol. 24, pp. 1717-1719, 1999.
* [18] P.-A. Belanger and J. Boivin, \"Gigawatt peak-power pulse generation by injection of a single short pulse in a regenerative amplifier above threshold (RAAT),\" Can J. Phys., vol. 54, pp. 720-727, 1976.
* [19] A. J. Alcock and A. C. Walker, \"Generation and detection of 150-psec mode-locked pulses from a multi-atmosphere CO\\({}_{2}\\) laser,\" Appl. Phys. Lett., vol. 25, pp. 299-301, 1974.
* [20] P. Corkum, "Amplification of Picosecond 10 \(\upmu\)m Pulses in Multiatmosphere CO\({}_{2}\) Lasers," IEEE J. of Quan. Electron., vol. QE-21, pp. 216-232, 1985.
* [21] A. J. Alcock and P. B. Corkum, \"Ultra-Short Pulse Generation with CO\\({}_{2}\\) Lasers,\" Phil. Trans. Royal Soc. (London). vol. A298, pp. 365-376, 1980.
* [22] P. F. Bernath, Spectra of Atoms and Molecules. New York: Oxford University Press, 1995.
* [23] E. L. Jacobs, S. Moyer, C. C. Franck, F. C. De Lucia, C. Casto, D. T. Petkie, S. R. Murill, and C. E. Halford, \"Concealed weapon identification using terahertz imaging sensors,\" Proc. SPIE, vol. 6212, pp. 621-6210, 2006.
* [24] C. H. Townes and A. L. Schawlow, Microwave Spectroscopy. New York: McGraw-Hill, 1955; reprinted New York: Dover Publications, Inc., 1975.
* [25] H. O. Everitt and F. C. De Lucia, "A Time-Resolved Study of Rotational Energy Transfer into A and E Symmetry Species of \({}^{13}\)CH\({}_{3}\)F," J. Chem. Phys., vol. 90, pp. 3520-3527, 1989.
Frank C. De Lucia is a University Professor and Professor of Physics at Ohio State University and previously was Professor of Physics at Duke University. He has served both departments as Chairman. Along with his students and coworkers he has developed many of the basic technologies and systems approaches for the SMM/THz and exploited them for scientific studies. Among his research interests are imaging and phenomenology, remote sensing, the spectroscopy of small, fundamental molecules, SMM/THz techniques, collisional processes and mechanisms, the excitation and study of excited states, molecules of atmospheric and astronomical importance, and analytical chemistry and gas sensing. He is a member of the Editorial Board of the Journal of Molecular Spectroscopy and belongs to the American Physical Society, the Optical Society of America, the Institute of Electrical and Electronics Engineers, and Phi Beta Kappa. He was awarded the 1992 Max Planck Research Prize in Physics and the 2001 William F. Meggers Award of the Optical Society of America.

Douglas T. Petkie received a B.S. in physics from Carnegie Mellon University, Pittsburgh, PA in 1990 and a Ph.D. in physics from Ohio State University, Columbus, OH in 1996. He is currently an assistant professor of physics at Wright State University, Dayton, OH. His research interests include the development of submillimeter and terahertz systems for in-situ and remote sensing applications that utilize spectroscopy, imaging and radar techniques. He is a member of the American Physical Society, American Association of Physics Teachers, Council on Undergraduate Research, and Sigma Pi Sigma.
Henry O. Everitt is an Army senior research scientist at the Aviation and Missile Research, Development, and Engineering Center located at Redstone Arsenal, AL. He is also an Adjunct Professor of Physics at Duke University and the University of Alabama, Huntsville. His early work focused on the development and understanding of optically pumped far infrared lasers through the use of time-resolved THz/IR pump-probe double resonance techniques. More recently, he has concentrated on ultrafast optical studies of relaxation dynamics in wide bandgap semiconductor heterostructures and nanostructures. He is a member of the Editorial Board for the journal Quantum Information Processing and belongs to the American Physical Society, the Optical Society of America, the American Association for the Advancement of Science, Sigma Xi, and Phi Beta Kappa. In 2004 he became a Fellow of the Optical Society of America and of the Army Research Laboratory (Emeritus).
Figure 1: The energy level diagram at the left shows that the pump connects J = 4 in the v = 0 state with J = 5 in the v = 1 state. The figure on the right shows the effect of the pump on the SMM/THz probe.

Figure 2: SMM/THz signature as a function of the separation of the pump-induced absorption and pump-induced emission in units of the pressure-broadened linewidth.

Figure 3: The reduction in net modulation amplitude as a function of the ratio of the signature offset to the pressure-broadened linewidth.

Figure 4: The 3-D specificity matrix associated with the time resolved double resonance scheme for the \({}^{13}\)CH\({}_{3}\)F example. This includes only the signature that is known from low pressure OPFIR studies. At atmospheric pressure, other pump coincidences may be found. If so, this would add additional planes to this figure, one for each additional pump coincidence. Only the first order relaxation signatures are shown. There will be many other weaker signatures associated with the complexities of rotational relaxation.

Abstract: The remote sensing of gases in complex mixtures at atmospheric pressure is a challenging problem and much attention has been paid to it. The most fundamental difference between this application and highly successful astrophysical and upper atmospheric remote sensing is the line width associated with atmospheric pressure broadening, \(\sim 5\) GHz in all spectral regions. In this paper, we discuss quantitatively a new approach that would use a short pulse infrared laser to modulate the submillimeter/terahertz (SMM/THz) spectral absorptions on the time scale of atmospheric relaxation. We show that such a scheme has three important attributes: (1) The time resolved pump makes it possible and efficient to separate signal from atmospheric and system clutter, thereby gaining as much as a factor of \(10^{6}\) in sensitivity, (2) The 3-D information matrix (infrared pump laser frequency, SMM/THz probe frequency, and time resolved SMM/THz relaxation) can provide orders of magnitude greater specificity than a sensor that uses only one of these three dimensions, and (3) The congested and relatively weak spectra associated with large molecules can actually be an asset because the usually deleterious effect of their overlapping spectra can be used to increase signal strength.

Index Terms: Double resonance, remote sensing, terahertz.
# External Fields as a Probe for Fundamental Physics
Holger Gies
Institute for Theoretical Physics, Heidelberg University,
Philosophenweg 16, D-69120 Heidelberg, Germany [email protected]
## 1 Introduction
With the advent of quantum field theory, our understanding of the vacuum has changed considerably from a literal \"nothing\" to such a complex \"something\" that its quantitative description requires to know almost \"everything\" about a given system.
Consider a closed quantum field theoretic system in a box with boundaries, where all matter density is already removed (pneumatic vacuum). Still, the walls of the system which are in contact with surrounding systems may have a temperature, releasing black-body radiation into the box. Charges and currents outside the box can create fields, exerting their influence on the box's inside. The box may furthermore be placed on a gravitationally curved manifold. And finally, the boundaries itself do generally impose constraints on the fluctuating quantum fields inside.
A pure quantum vacuum which is as close to trivial as possible requires to take the limit of vanishing parameters which quantify the influence on the quantum fluctuations, i.e., temperature, fields \\(\\to 0\\), and boundaries \\(\\rightarrow\\infty\\). Even then, the quantum vacuum may be thought of as an infinity of ubiquitous virtual processes - fluctuations of the quantum fields representing creations and annihilations of wave packets (\"particles\") in spacetime - which are compatible with Heisenberg's uncertainty principle.
Even if the ground state realizes the naive anticipation of vanishing field expectation values, we can probe the complex structure of the quantum vacuum by applying externalfields or boundaries etc., and measuring the response of the vacuum to a suitable probe. For instance, let us send a weak light beam into the box; it may interact with the virtual fluctuations and will have finally traveled through the box at just \"the speed of light\". If we switch on an external magnetic field, the charged quantum fluctuations in the box are affected and reordered by the Lorentz force. This has measurable consequences for the speed of the light probe which now interacts with the reordered quantum fluctuations. Thus, quantum field theory invalidates the superposition principle of Maxwell's theory. The quantum world creates nonlinearities and also nonlocalities [1, 2, 3].
Quantum vacuum physics inspires many research branches, ranging from mathematical physics studying field theory with boundaries and functional determinants to applied physics where the fluctuations may eventually be used as a building block to design dispersive forces in micro- and nanomachinery. Many quantum vacuum phenomena such as the Casimir effect are similarly fundamental in quantum field theory as the Lamb shift or \\(g-2\\) experiments, and hence deserve to be investigated and measured with the same effort. Only a high-precision comparison between quantum vacuum theory and experiment can reveal whether we have comprehensively understood and properly computed the vacuum fluctuations.
In this article, I will argue that, with such a comparison, one further step can be taken: a high-precision investigation can then also be used to look for systematic deviations as a hint for new physics phenomena. Similarly to \\(g-2\\), quantum vacuum experiments can systematically be used to explore new parameter ranges of particle-physics models beyond the Standard Model (BSM).
What are the scales of sensitivity which we can expect to probe? Consider a typical Casimir experiment: micro- or mesoscopic setups probe dispersive forces between bodies at a separation \\(a={\\cal O}({\\rm nm}-10\\mu{\\rm m})\\). This separation \\(a\\) also sets the scale for the dominant quantum fluctuation wavelengths which are probed by the apparatus. The corresponding energy scales are of order \\({\\cal O}(10{\\rm meV}-100{\\rm eV})\\). As another example, consider an optical laser propagating in a strong magnetic field of a few Tesla. Again, the involved energy scales allow to probe quantum fluctuations below the \\({\\cal O}(10{\\rm eV})\\) scale. Therefore, quantum vacuum experiments can probe new physics below the eV scale and hence are complementary to accelerators. Typical candidates are particles with masses in the meV range, i.e., _physics at the milli scale_[4].
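The translation between the probed length scale and the corresponding energy scale is the standard dimensional one (numbers rounded):

\[E\sim\frac{\hbar c}{a}\simeq\frac{197\,{\rm eV\,nm}}{a}\,,\qquad a=10\,\mu{\rm m}\ \Rightarrow\ E\simeq 20\,{\rm meV}\,,\qquad a=1\,{\rm nm}\ \Rightarrow\ E\simeq 200\,{\rm eV}\,.\]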
The particular capability of these experiments is obviously not a sensitivity to heavy particles, but a sensitivity to light but potentially very weakly coupled particles. In the following, I will especially address optical experiments. Here, there are at least two lever arms for increasing the sensitivity towards weak coupling. First, consider a laser beam entering an interaction region, say a magnetized quantum vacuum; some photons may leave the region towards a detector. Let us assume that the setup is such that the Standard Model predicts zero photons in the detector; this implies that the observation of a single photon (which is technically possible) is already a signature for new physics. On the other hand, an incoming beam, for instance, from an optical 1000 W laser contains \(\sim 10^{21}\) photons per second. It is this ratio of \(10^{21}:1\) which can be exploited for overcoming a weak-coupling suppression. Second, the interaction region does not have to be microscopic as in accelerator experiments, but can be of laboratory size (meters) or can even increase to kilometers if, e.g., the laser light is stored in a high-finesse cavity.
Why should we care for the milli scale at all? First of all, exploring a new particle-physics landscape is worthwhile in itself; even if there is no discovery, it is better to _know_ about a non-existence than to _assume_ it. Second, we already know about physics at the milli scale: neutrino mass differences and potentially also their absolute mass is of order \\({\\cal O}(1-100{\\rm meV})\\); also, the cosmological constant can be expressed as \\(\\Lambda\\sim(2{\\rm meV})^{4}\\). A more systematic search for further particle physics at the milli scale hence is certainly worthwhile and could perhaps lead to a coherent picture. Third, a large number of Standard-Model extensions not only involves but often requires - for reasons of consistency - a hidden sector, i.e., a set of so far unobserved degrees of freedom very weakly coupled to the Standard Model. A discovery of hidden-sector properties could much more decisively single out the relevant BSM extension than the discovery of new heavy partners of the Standard Model.
Optical quantum vacuum experiments can be very sensitive to new light particles which are weakly coupled to photons. From a bottom-up viewpoint, I will first discuss low-energy effective theories of the Standard Model and of BSM extensions which allow for a classification of possible phenomena and help relating optical observables with fundamental particle properties. Subsequently, current bounds on new-physics parameters are critically examined. In Sec. 3, I briefly describe current and future experimental setups, and discuss recently published data. An emphasis is put on the question of how dedicated quantum vacuum experiments can distinguish between different particle-physics scenarios and extract information about the nature of the involved degrees of freedom. Section 4 gives a short account of underlying microscopic models that would be able to reconcile a large anomalous signal in the laboratory with astrophysical bounds. Conclusions are given in Sec. 5.
## 2 Low-energy effective actions
A plethora of ideas for BSM extensions can couple new particle candidates to our photon. From a bottom-up viewpoint, many of these ideas lead to similar consequences for low-energy laboratory experiments, parameterizable by effective actions that describe the photon coupling to the new effective degrees of freedom. In the following, we list different effective actions that are currently often used for data analysis. This list is not unique nor complete.
### QED and Heisenberg-Euler effective action
The first example is standard QED as a low-energy effective theory of the Standard Model: if there are no light particles coupling to the photon other than those of the Standard Model, present and near-future laboratory experiments will only be sensitive to pure QED degrees of freedom, photon and electron. If the variation of the involved fields as well as the field strength is well below the electron mass scale, the low-energy effective action is given by the lowest-order Heisenberg-Euler effective action [1, 2, 3],
\\[\\Gamma_{\\rm HE}=\\!\\int_{x}\\!\\Biggl{\\{}\\!-\\frac{1}{4}F_{\\mu\
u}F^{\\mu\
u}+ \\frac{8}{45}\\frac{\\alpha^{2}}{m^{4}}\\left(\\frac{1}{4}F_{\\mu\
u}F^{\\mu\
u} \\right)^{2}+\\frac{14}{45}\\frac{\\alpha^{2}}{m^{4}}\\left(\\frac{1}{4}F_{\\mu\
u} \\widetilde{F}^{\\mu\
u}\\right)^{2}\\!\\!+\\mathcal{O}\\left(\\frac{F^{6}}{m^{8}} \\frac{\\rho^{2}F^{2}}{m^{2}}\\right)\\!\\Biggr{\\}}, \\tag{1}\\]
which arises from integrating out the \"heavy\" electron-positron degrees of freedom to one-loop order. In addition to the Maxwell term, the second and third term exemplify the fluctuation-induced nonlinearities. The corresponding quantum equations of motion thus entail a photon self-coupling. As an example, let us consider the propagation of a laser beam with a weak amplitude in a strong magnetic field \\(B\\). From the linearized equations of motion for the laser beam, we obtain a dispersion relation which can be expressed in terms of refractive indices for the magnetized quantum vacuum [2, 5, 6]:
\\[n_{\\parallel}\\simeq 1+\\frac{14}{45}\\frac{\\alpha^{2}}{m^{4}}\\,B^{2}\\sin^{2} \\theta_{B},\\quad n_{\\perp}\\simeq 1+\\frac{8}{45}\\frac{\\alpha^{2}}{m^{4}}\\,B^{2} \\sin^{2}\\theta_{B}, \\tag{2}\\]
where \\(\\theta_{B}\\) is the angle between the \\(B\\) field and the propagation direction. Most importantly, the refractive indices, corresponding to the inverse phase velocity of the beam, depend on the polarization direction \\(\\parallel\\) or \\(\\perp\\) of the laser with respect to the \\(B\\) field. The magnetized quantum vacuum is birefringent. As a corresponding observable, an initially linearly polarized laser beam which has nonzero components for both \\(\\parallel\\) and \\(\\perp\\) modes picks up an _ellipticity_ by traversing a magnetic field: the phase relation between the polarization modes changes, but their amplitudes remain the same. The ellipticity angle \\(\\psi\\) is given by \\(\\psi=\\frac{\\omega}{2}\\ell(n_{\\parallel}-n_{\\perp})\\sin 2\\theta\\), where \\(\\theta\\) is the angle between the polarization direction and the \\(B\\) field, and \\(\\ell\\) is the path length inside the magnetic field.
So far, a direct verification of QED vacuum magnetic birefringence has not been achieved; if measured it would be the first experimental proof that the superposition principle in vacuum is ultimately violated for macroscopic electromagnetic fields.
Another optical observable is important in this context: imagine some effect modifies the amplitudes of the \(\parallel\) or \(\perp\) components in a polarization-dependent manner, but leaves the phase relations invariant. By such an effect, a linearly polarized beam will then effectively change its polarization direction after a passage through a magnetic field by a _rotation_ angle \(\Delta\theta\). Since amplitude modifications involve an imaginary part for the index of refraction, rotation from a microscopic viewpoint is related to particle production or annihilation. In QED below threshold \(\omega<2m\), electron-positron pair production by an incident laser is excluded. Only photon splitting in a magnetic field would be an option [6]. However, for typical laboratory parameters, the mean free path exceeds the size of the universe by many orders of magnitude and hence is irrelevant. We conclude that a sizeable signal for vacuum magnetic rotation \(\Delta\theta\) in an optical experiment would be a signature for new fundamental physics.
### Axion-Like Particle (ALP)
As a first BSM example, we consider a neutral scalar \\(\\phi\\) or pseudo-scalar degree of freedom \\(\\phi^{-}\\) which is coupled to the photon by a dimension-five operator,
\\[\\Gamma_{\\rm ALP}=\\int_{x}\\left\\{-\\frac{g}{4}\\phi^{(-)}F^{\\mu\
u}\\stackrel{{ (\\sim)}}{{F}}_{\\mu\
u}-\\frac{1}{2}(\\partial\\phi^{(-)})^{2}-\\frac{1}{2}m_{\\phi }{}^{2}\\phi^{(-)2}\\right\\}. \\tag{3}\\]
This effective action is parameterized by the particle's mass \\(m_{\\phi}\\) and the dimensionful coupling \\(g\\). For the pseudo-scalar case, this action is familiar from axion models [8], where the two parameters are related, \\(m_{\\phi}\\sim g\\). Here, we have a more general situation with free parameters in mind which we refer to as axion-like particles (ALP). In optical experiments in strong \\(B\\) fields, ALPs can induce both ellipticity and rotation [9], since only one photon polarization mode couples to the axion and the external field: the \\(\\parallel\\) mode in the pseudo-scalar case, the \\(\\perp\\) mode in the scalar case. For instance, coherent photon-axion conversion causes a depletion of the corresponding photon mode, implying rotation. Solving the equation of motion for the coupled photon-ALP system for the pseudo-scalar case yields a prediction for the induced ellipticity and rotation,
\\[\\Delta\\theta^{-}\\!=\\!\\left(\\frac{gB\\omega}{m_{\\phi}^{2}}\\right)^{2}\\sin^{2} \\!\\left(\\frac{Lm_{\\phi}^{2}}{4\\omega}\\right)\\sin 2\\theta,\\ \\psi^{-}\\!=\\!\\!\\frac{1}{2}\\left(\\frac{gB\\omega}{m_{\\phi}^{2}} \\right)^{2}\\left(\\frac{Lm_{\\phi}^{2}}{2\\omega}-\\sin\\!\\left(\\frac{Lm_{\\phi}^{2} }{2\\omega}\\right)\\right)\\sin 2\\theta, \\tag{4}\\]
for single passes of the laser through a magnetic field of length \\(L\\). For the scalar, we have \\(\\Delta\\theta=-\\Delta\\theta^{-}\\), \\(\\psi=-\\psi^{-}\\). This case is a clear example of how fundamental physics could be extracted from a quantum vacuum experiment: measuring ellipticity and rotation signals uniquely determines the two model parameters, ALP mass \\(m_{\\phi}\\) and ALP-photon coupling \\(g\\). Measuring the signs of \\(\\Delta\\theta\\) and \\(\\psi\\) can even resolve the parity of the involved particle.
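The inversion of the measured signals into model parameters becomes particularly transparent in the small-mass (fully coherent) limit \(Lm_{\phi}^{2}/\omega\ll 1\); expanding Eq. (4) to leading order gives

\[\Delta\theta^{-}\simeq\left(\frac{gBL}{4}\right)^{2}\sin 2\theta\,,\qquad\psi^{-}\simeq\frac{g^{2}B^{2}L^{3}m_{\phi}^{2}}{96\,\omega}\,\sin 2\theta\,,\]

so that the ratio \(\psi^{-}/\Delta\theta^{-}=Lm_{\phi}^{2}/(6\omega)\) determines the ALP mass, and \(\Delta\theta^{-}\) alone then fixes the coupling \(g\).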
Various microscopic particle scenarios lead to a low-energy effective action of the type (3). The classic case of the axion represents an example in which only the weak coupling to the photon is relevant and all other potential couplings to matter are negligible. In this case, the laser can be frequency-locked to a cavity such that both quantities are enhanced by a factor of \\(N_{\\rm pass}\\) accounting for the number of passes. For the generated ALP component, the cavity is transparent. This facilitates another interesting experimental option, namely, to shine the ALP component through a wall which blocks all the photons. Behind the wall, a second magnetic field can induce the reverse process and photons can be regenerated out of the ALP beam [10]. The regeneration rate is
\\[\\dot{N}_{\\gamma~{}{\\rm reg}}^{(-)}=\\dot{N}_{0}\\left(\\frac{N_{\\rm pass}+1}{2} \\right)\\frac{1}{16}\\left(gBL\\cos\\theta\\right)^{4}\\left[\\sin\\left(\\frac{Lm_{ \\phi}^{2}}{4\\omega}\\right)\\right/\\!\\frac{Lm_{\\phi}^{2}}{4\\omega}\\right]^{4}, \\tag{5}\\]
where \\(\\dot{N}_{0}\\) is the initial photon rate, and the magnetic fields are assumed to be identical.
In other models, such as those with a chameleon mechanism [11], the ALP cannot penetrate into the cavity mirrors but gets reflected back into the cavity. Whereas this has no influence on the single-pass formulas for \\(\\psi\\) and \\(\\Delta\\theta\\) in Eq. (4), the use of cavities and further experimental extensions can be used to distinguish between various microscopic models, see below.
### Minicharged Particle (MCP)
In addition to the example of a neutral particle, optical experiments can also search for charged particles. If their mass is at the milli scale, these experiments can even look for very weak coupling, i.e., minicharged particles (MCPs) [12], the charge of which is smaller by a factor of \\(\\epsilon\\) in comparison with the electron charge. If the MCP is, for instance, a Dirac spinor \\(\\psi_{\\epsilon}\\), the corresponding action is
\\[\\Gamma_{\\rm MCP}=-\\bar{\\psi}(i\\partial\\!\\!\\!/+\\epsilon eA\\!\\!\\!/)\\psi+m_{ \\epsilon}\\bar{\\psi}\\psi, \\tag{6}\\]
where we again encounter two parameters, \(\epsilon\) and the MCP mass \(m_{\epsilon}\). At a first glance, the system looks very similar to QED. However, since the particle mass \(m_{\epsilon}\) can be at the milli scale or even lighter, the weak-field expansion of the Heisenberg-Euler effective action for slowly varying fields (1) is no longer justified. Both the field strength as well as the laser frequency can exceed the MCP mass scale with various consequences [13]: first, the laser frequency can be above the pair-production threshold \(\omega>2m_{\epsilon}\) such that a rotation signal becomes possible. Second, there is no perturbative ordering anymore as far as the coupling to the \(B\) field is concerned, hence the MCP fluctuations have to be treated to all orders with respect to \(B\). All relevant information is encoded in the polarization tensor corresponding to an MCP loop with two photon legs in the presence of the magnetic field [14].
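### Paraphoton

Photons can also mix kinetically with a hidden-sector U(1) gauge field \(A'_{\mu}\), the paraphoton [12]. The effective action is quoted here in the standard kinetic-mixing form (the normalization conventions are chosen for definiteness; see also [18]),

\[\Gamma_{\gamma\gamma^{\prime}}=\int_{x}\left\{-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{4}F^{\prime}_{\mu\nu}F^{\prime\,\mu\nu}-\frac{\chi}{2}F_{\mu\nu}F^{\prime\,\mu\nu}+\frac{1}{2}\mu^{2}A^{\prime}_{\mu}A^{\prime\,\mu}\right\},\qquad F^{\prime}_{\mu\nu}=\partial_{\mu}A^{\prime}_{\nu}-\partial_{\nu}A^{\prime}_{\mu}, \tag{8}\]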
with a mixing parameter \\(\\chi\\) and a paraphoton mass term \\(\\mu\\). Without the mass term, the kinetic terms could be diagonalized by a non-unitary shift \\(A^{\\prime}_{\\mu}\\rightarrow\\hat{A}^{\\prime}_{\\mu}-\\chi A_{\\mu}\\) which would decouple the fields at the expense of an unobservable charge renormalization. The mass term does not remain diagonal by this shift, such that observable \\(\\gamma\\gamma^{\\prime}\\) oscillations arise from mass mixing in this basis. The pure paraphoton theory is special in the sense that \\(\\gamma\\gamma^{\\prime}\\) conversion is possible without an external field and is not sensitive to polarizations. For instance, the conversion rate after a distance \\(L\\) is given by \\({\\cal P}_{\\gamma\\rightarrow\\gamma^{\\prime}}=4\\chi^{2}\\sin^{2}\\frac{\\mu^{2}L} {4\\omega}\\). Therefore, paraphotons can be searched for in future light-shining-through-walls experiments [18]. Below, we discuss microscopic scenarios in which paraphotons and MCPs naturally occur simultaneously.
### Bounds on low-energy effective parameters
Many different observations seem to constrain the parameters in the effective theories listed above. The strongest constraints typically come from astrophysical observations usually in combination with energy-loss arguments. Consider, for instance, the ALP low-energy effective action (3). Assuming that it holds for various scales of momentum transfer, we may apply it to solar physics. Thermal fluctuations of electromagnetic fields in the solar plasma, giving rise to non-vanishing \(F^{\mu\nu}\stackrel{(\sim)}{F}_{\mu\nu}\), act as a source for \(\phi^{(-)}\) ALPs. In absence of other sizeable interactions, ALPs escape the solar interior immediately and contribute to stellar cooling. A similar argument for the helium-burning life-time of HB stars leads to a limit \(g\lesssim 10^{-10}\)GeV\({}^{-1}\) for ALP masses in the eV range and below [19]. Monitoring actively a potential axion flux from the sun as done by the CAST experiment even leads to a slightly better constraint for ALP masses \(<0.02\)eV [20].
Astrophysical energy-loss arguments constrain also MCPs [21]: for instance, significant constraints on \\(\\epsilon\\) come from helium ignition and helium-burning lifetime of HB stars, resulting in \\(\\epsilon\\leq 2\\times 10^{-14}\\) for \\(m_{\\epsilon}\\) below a few keV.
Without going into detail, let us stress that all these bounds on the effective-action parameters depend on the implicit assumption that the effective actions hold equally well at solar scales as well as in the laboratory. But whereas solar processes typically involve momentum transfers on the keV scale, laboratory quantum vacuum experiments operate with much lower momentum transfers, a typical scale being \\(\\mu\\)eV. In other words, the above bounds can only be applied to laboratory experiments, if one accepts an extrapolation of the underlying model over nine orders of magnitude. In fact, it has been shown quantitatively how the above-mentioned bounds have to be relaxed, once a possible dependence of these effective-action parameters, e.g., on momentum transfer, temperature, density or other ambient-medium parameters, is taken into account [22]. This observation indeed provides for another strong imperative to perform well-controlled laboratory experiments.
Previous laboratory experiments have also produced more direct constraints on the effective action parameters. For instance, the best laboratory bounds on MCPs previously came from limits on the branching fraction of ortho-positronium decay or the Lamb shift [21, 23], resulting in \\(\\epsilon\\lesssim 10^{-4}\\). Similarly, pure laboratory bounds on ALP parameters used to be much weaker than those from astrophysical arguments.
## 3 From optical experiments to fundamental particle properties
A variety of quantum vacuum experiments is devoted to a study of optical properties of modified quantum vacua. The BFRT experiment [24] has pioneered this field by providing upper bounds on vacuum-magnetically induced ellipticity, rotation as well as photon regeneration. Improved bounds for ellipticity and rotation have recently been published by the PVLAS collaboration [25].1 Further polarization experiments such as Q&A [27] and BMV [28] have also already taken and published data.
Footnote 1: The new data is no longer compatible with the PVLAS rotation signal reported earlier [26]. Nevertheless, this artifact deserves the merit of having triggered the physics-wise well justified rapid evolution of the field which we are currently witnessing.
The PVLAS experiment uses an optical laser (\\(\\lambda=1064\\)nm and \\(532\\)nm) which is locked to a high-finesse Fabry-Perot cavity (\\(N={\\cal O}(10^{5})\\)) and traverses an \\(L=1\\)m long magnetic field of up to \\(B=5.5\\)Tesla. Owing to the high finesse, the optical path length \\(\\ell\\) inside the magnet effectively increases up to several \\(10\\)km.
The improved PVLAS bounds for ellipticity and rotation can directly be translated into bounds on the refractive-index and absorption-coefficient differences, \(\Delta n=n_{\parallel}-n_{\perp}\) and \(\Delta\kappa=\kappa_{\parallel}-\kappa_{\perp}\),

\[|\Delta n(B=2.3\,\mbox{T})|\leq 1.1\times 10^{-19}/\mbox{pass},\quad|\Delta\kappa(B=5.5\,\mbox{T})|\leq 5.4\times 10^{-15}\,\mbox{cm}^{-1}. \tag{9}\]
As an illustration, an absorption coefficient of this order of magnitude would correspond to a photon mean free path in the magnetic field of the order of a hundred times the distance from earth to sun, demonstrating the quality of these laboratory experiments.
These bounds imply new constraints, e.g., for the ALP parameters, \(g\lesssim 4\times 10^{-7}\)GeV\({}^{-1}\) for \(m_{\phi}<1\)meV. More importantly, for MCPs, we find \(\epsilon\lesssim 3\times 10^{-7}\) for \(m_{\epsilon}<30\)meV. This bound is indeed of a similar size as a cosmological MCP bound which has recently been derived from a conservative estimate of the distortion of the energy spectrum of the cosmic microwave background [29]. Hence, laboratory experiments begin to enter the parameter regime which has previously been accessible only to cosmological and astrophysical considerations.
Imagine an anomalously large signal, say, for ellipticity \\(\\psi\\) and rotation \\(\\Delta\\theta\\) is observed by such a polarization experiment, thereby providing evidence for vacuum-magnetic birefringence and dichroism. How could we extract information about the nature of the underlying particle-physics degree of freedom? The two data points for \\(\\psi\\) and \\(\\Delta\\theta\\) can be translated into parameter pairs \\(g\\) and \\(m_{\\phi}\\) for ALPs, \\(\\epsilon\\) and \\(m_{\\epsilon}\\) for MCPs, etc., leaving open many possibilities. As already mentioned above, a characteristic feature is the sign of \\(\\psi\\) and \\(\\Delta\\theta\\); i.e., identifying the polarization modes \\(\\parallel\\) or \\(\\perp\\) as fast or slow modes reveals information about the microscopic properties [15]: e.g., a pseudo-scalar ALP goes along with \\(\\Delta\\kappa,\\Delta n>0\\), whereas a scalar ALP requires \\(\\Delta\\kappa,\\Delta n<0\\). A mixed combination, say \\(\\Delta\\kappa>0,\\Delta n<0\\), would completely rule out an ALP, leaving a spinor MCP as an option, etc.
Another test would be provided by varying the experimental parameters such as length or strength of the magnetic field, or the laser frequency [15]. For instance, an ALP-induced rotation exhibits a simple \\(B^{2}\\) dependence, the nonperturbative nature of MCP-induced rotation results in a \\(B^{2/3}\\) law, cf. Eqs. (4), (7).
The underlying degree of freedom can more directly be identified by special-purpose experiments that probe a specific property of particle candidate. The light-shining-through-walls experiment is an example for such a setup. A magnetically-induced photon regeneration signal in such an experiment would clearly point to a weakly interacting ALP degree of freedom; the outgoing photon polarization would distinguish between scalar (\\(\\perp\\) mode) or pseudo-scalar (\\(\\parallel\\) mode) ALPs. For this reason, a number of light-shining-through-walls experiments is currently being built or already taking data: ALPS at DESY [30], LIPSS at JLab [31], OSQAR at CERN [32], and GammeV at Fermilab [33]. PVLAS will shortly be upgraded accordingly, and BMV has already published first results [28], yielding a new bound \\(g\\lesssim 1.3\\times 10^{-6}\\)GeV\\({}^{-1}\\) for \\(m_{\\phi}\\lesssim 2\\)meV.
MCPs do not contribute to a photon regeneration signal, since pair-produced MCPs inside the magnet are unlikely to recombine behind the wall and produce a photon. A special-purpose quantum vacuum experiment for MCP production and detection has been suggested in [34]: a strong electric field, e.g., inside an RF cavity can produce an MCP dark current by means of the nonperturbative Schwinger mechanism [1, 17]. A first signature could be provided by an anomalous fall-off of the cavity quality factor (the achievable high-quality factor of TESLA cavities already implies the bound \(\epsilon\lesssim 10^{-6}\)[34]). Owing to the weak interaction, the MCP current can pass through a wall where a dark current detector can actively look for a signal.
In the case of a strongly interacting ALP, photon regeneration behind a wall would not happen either, since the wall would block both photons and generated ALPs. A special example is given by chameleon models which have been developed in the context of cosmological scalar fields and the fifth-force problem [11]. Somewhat simplified, chameleons can be viewed as ALPs with a varying mass that increases with the ambient matter density. As a result, low-energy chameleons which are initially produced in vacuo by photon conversion in a magnetic field cannot penetrate the end caps of a vacuum chamber and are reflected back into the chamber. After an initial laser pulse, the chameleons can be re-converted into photons again inside the magnetized vacuum; this would result in an afterglow phenomenon which is characteristic for a chameleonic ALP [35]. First estimates indicate that the chameleon parameter range accessible to available laboratory technology is comparable to scales familiar from astrophysical stellar energy loss arguments, i.e., up to \\(g\\sim 10^{10}\\)GeV for \\(m_{\\phi}\\lesssim 1\\)meV. Afterglow measurements are already planned at ALPS [30] and GammeV [33].
In the near future, also quantum vacuum experiments could be realized that involve strong fields generated by a high-intensity laser; for a concrete proposal aiming at vacuum birefringence, see [36] and also [37]. As major differences, laser-driven setups can generate field strengths that exceed conventional laboratory fields by several orders of magnitude. The price to be paid is that the spatial extent of these high fields is limited to a few microns. We expect that laser-driven experiments can significantly contribute to MCP searches in the intermediate-mass range whereas ALP and paraphoton searches, which are based on a coherence phenomenon, typically require a spatially sizeable field.
Both spatially extended as well as strong fields are indeed available in the vicinity of certain compact astrophysical objects. Also cosmic magnetic fields though weak may be useful due to their extreme spatial extent. For suggestions how to exploit these fields as a probe for fundamental physics, see, e.g., [38, 39, 40].
## 4 Microscopic models
So far, we argued that quantum vacuum experiments do not only serve as a probe for fundamental physics and BSM extensions, but also are required to provide for model-independent information about potential weakly coupled light degrees of freedom. Nevertheless, in the case of a positive anomalous experimental signal a puzzle of how to reconcile this signal with astrophysical bounds would persist on the basis of the low-energy effective actions discussed above. A resolution of this puzzle has to come from the underlying microscopic theory that interconnects solar scales with laboratory scales.
A number of ideas has come up to separate solar physics from laboratory physics; for a selection of examples, see [41, 42, 43, 44, 11, 45]. A general feature of many ideas is to suppress the coupling between photons and the new particle candidates at solar scales by a parameter of the solar environment such as temperature, energy or momentum transfer, or ambient matter density. A somewhat delicate alternative is provided by new particle candidates that are strongly interacting in the solar interior, resulting in a small mean free path (similar or smaller than that of the photons!), such that they do not contribute to the solar energy flux [43].
A paradigmatic example for a parametrical coupling suppression is given by the Masso-Redondo model [41] which, in addition to resolving the above puzzle, finds a natural embedding in string-theory models [46]. As a prerequisite, let us consider the paraphoton model of Eq. (8) and include a hidden-sector parafermion \(h\) which couples only to the paraphoton \(A'\) with charge \(e_{\rm h}\) and interaction \(e_{\rm h}\bar{h}A\!\!\!/^{\prime}h\). After the shift \(A^{\prime}_{\mu}\to\hat{A}^{\prime}_{\mu}-\chi A_{\mu}\) which diagonalizes the kinetic terms, the parafermion acquires a coupling to our photon: \(-\chi e_{\rm h}\bar{h}A\!\!\!/h\). Since \(\chi\) is expected to be small, we may identify \(-\chi e_{\rm h}=\epsilon e\). As a result, the hidden-sector fermion appears as minicharged with respect to our photon. The bottom line is that a hidden sector with further U(1) fields and correspondingly charged particles automatically appears as MCPs for our photon if these further U(1)'s mix weakly with our U(1). However, if the paraphoton is massive the coupling of on-shell photons to parafermions is suppressed by this mass \(\mu\), since the on-shell condition cannot be met by the massive paraphoton.
The Masso-Redondo model now involves two paraphotons, one massless and one massive, with opposite charge assignments for the parafermions. The latter charge assignment indeed cancels the parafermion-to-photon coupling at high virtuality (as, e.g., for the photon plasma modes in the solar interior), implying that solar physics remains unaffected. At low virtualities such as in the laboratory, the massive paraphoton decouples which removes the cancellations between the two U(1)'s. A photon-paraphoton system is left over in which the parafermions indeed appear as MCPs with respect to electromagnetism. In this manner, the astrophysical bounds remain satisfied, but laboratory experiments could discover unexpectedly large anomalous signatures.
In fact, hidden sectors also involving further U(1)'s and correspondingly charged matter as required for the Masso-Redondo mechanism cannot only be embedded naturally in more fundamental models, but are often unavoidable in model building for reasons of consistency.
## 5 Conclusions
Quantum vacuum experiments such as those involving strong external fields can indeed probe fundamental physics. In particular, optical experiments can reach a high precision and thereby constitute an ideal tool for searching for the hidden sector of BSM extensions containing weakly-interacting and potentially light degrees of freedom at the milli scale. A great deal of current experimental activity will soon provide for a substantial amount of new data which will complement particle-physics information obtained from accelerators.
From a theoretical viewpoint, many open problems require a better understanding of fluctuations of light degrees of freedom, the small mass of which often inhibits conventional perturbative ordering schemes. Modern quantum-field theory techniques for external-field problems such as the worldline approach [47, 16, 17] will have to be used and developed further hand in hand with experimental progress in probing the quantum vacuum.
I would like to thank M. Ahlers, W. Dittrich, D.F. Mota, J. Jaeckel, J. Redondo, A. Ringwald, D.J. Shaw for collaboration on the topics presented here. It is a pleasure to thank M. Bordag and his team for organizing the QFEXT07 workshop and creating such a stimulating atmosphere. This work was supported by the DFG under contract No. Gi 328/1-4 (Emmy-Noether program).
## References
* [1] W. Heisenberg and H. Euler, Z. Phys. **98** (1936) 714; J. Schwinger, Phys. Rev. **82** (1951) 664.
* [2] W. Dittrich and H. Gies, Springer Tracts Mod. Phys. **166**, 1 (2000).
* [3] W. Dittrich and M. Reuter, Lect. Notes Phys. **220**, 1 (1985); G. V. Dunne, arXiv:hep-th/0406216.
* [4] A. Lindner and A. Ringwald, Phys. World **20N8**, 32 (2007).
* [5] R. Baier and P. Breitenlohner, Acta Phys. Austriaca **25**, 212 (1967); Nuovo Cim. B **47**, 117 (1967).
* [6] S.L. Adler, Annals Phys. **67** (1971) 599.
* [7] H. Gies and R. Shaisultanov, Phys. Lett. B **480**, 129 (2000) [arXiv:hep-ph/0009342].
* [8] R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. **38**, 1440 (1977); Phys. Rev. D **16**, 1791 (1977); S. Weinberg, Phys. Rev. Lett. **40**, 223 (1978); F. Wilczek, Phys. Rev. Lett. **40**, 279 (1978).
* [9] L. Maiani, R. Petronzio and E. Zavattini, Phys. Lett. B **175**, 359 (1986); G. Raffelt and L. Stodolsky, Phys. Rev. D **37**, 1237 (1988).
* [10] P. Sikivie, Phys. Rev. Lett. **51** (1983) 1415 [Erratum-ibid. **52** (1984) 695]; A. A. Anselm, Yad. Fiz. **42** (1985) 1480; M. Gasperini, Phys. Rev. Lett. **59** (1987) 396; K. Van Bibber, N. R. Dagdeviren, S. E. Koonin, A. Kerman, and H. N. Nelson, Phys. Rev. Lett. **59**, 759 (1987).
* [11] J. Khoury and A. Weltman, Phys. Rev. D **69**, 044026 (2004); Phys. Rev. Lett. **93**, 171104 (2004); D. F. Mota and D. J. Shaw, Phys. Rev. D **75**, 063501 (2007); Phys. Rev. Lett. **97**, 151102 (2006); P. Brax _et al._, Phys. Rev. D **76**, 085010 (2007).
* [12] L. B. Okun, Sov. Phys. JETP **56**, 502 (1982); B. Holdom, Phys. Lett. B **166** (1986) 196.
* [13] H. Gies, J. Jaeckel and A. Ringwald, Phys. Rev. Lett. **97**, 140402 (2006) [arXiv:hep-ph/0607118].
* [14] J.S. Toll, PhD thesis, RX-1535; V. Baier and V. Katkov, Zh. Eksp. Teor. Fiz. **53**, 1478 (1967); W.-y. Tsai and T. Erber, Phys. Rev. **D12**, 1132 (1975); Phys. Rev. **D10**, 492 (1974); J.K. Daugherty and A.K. Harding, Astrophys. J. **273**, 761 (1983); H. Gies, Phys. Rev. D **61**, 085021 (2000); C. Schubert, Nucl. Phys. B **585**, 407 (2000).
* [15] M. Ahlers, H. Gies, J. Jaeckel and A. Ringwald, Phys. Rev. D **75**, 035011 (2007).
* [16] H. Gies and K. Langfeld, Nucl. Phys. B **613**, 353 (2001); Int. J. Mod. Phys. A **17**, 966 (2002).
* [17] H. Gies and K. Klingmuller, Phys. Rev. D **72**, 065001 (2005); G. V. Dunne and C. Schubert, Phys. Rev. D **72**, 105004 (2005); G. V. Dunne, Q. h. Wang, H. Gies and C. Schubert, Phys. Rev. D **73**, 065028 (2006); D. D. Dietrich and G. V. Dunne, J. Phys. A **40**, F825 (2007).
* [18] M. Ahlers, H. Gies, J. Jaeckel, J. Redondo and A. Ringwald, arXiv:0706.2836 [hep-ph].
* [19] G. G. Raffelt, arXiv:hep-ph/0611350.
* [20] S. Andriamonje _et al._ [CAST Collaboration], JCAP **0704**, 010 (2007) [arXiv:hep-ex/0702006].
* [21] S. Davidson, S. Hannestad and G. Raffelt, JHEP **0005**, 003 (2000) [arXiv:hep-ph/0001179].
* [22] J. Jaeckel _et al._, Phys. Rev. D **75**, 013004 (2007); arXiv:hep-ph/0605313.
* [23] M. Gluck, S. Rakshit and E. Reya, Phys. Rev. D **76**, 091701 (2007) [arXiv:hep-ph/0703140].
* [24] R. Cameron _et al._ [BFRT Collaboration], Phys. Rev. D **47** (1993) 3707.
* [25] E. Zavattini _et al._ [PVLAS Collaboration], arXiv:0706.3419 [hep-ex].
* [26] E. Zavattini _et al._ [PVLAS Collaboration], Phys. Rev. Lett. **96**, 110406 (2006).
* [27] S. J. Chen, H. H. Mei and W. T. Ni [Q& A Collaboration], hep-ex/0611050.
* [28] C. Robilliard _et al._, arXiv:0707.1296 [hep-ex].
* [29] A. Melchiorri, A. Polosa and A. Strumia, Phys. Lett. B **650**, 416 (2007) [arXiv:hep-ph/0703144].
* [30] K. Ehret _et al._, arXiv:hep-ex/0702023.
* [31] A. V. Afanasev, O. K. Baker and K. W. McFarlane, arXiv:hep-ph/0605250.
* [32] P. Pugnat _et al._, CERN-SPSC-2006-035; see http://graybook.cern.ch/programmes/experiments/OSQAR.html.
* [33] see [http://gammev.fnal.gov/](http://gammev.fnal.gov/).
* [34] H. Gies, J. Jaeckel and A. Ringwald, Europhys. Lett. **76**, 794 (2006) [arXiv:hep-ph/0608238].
* [35] H. Gies, D. F. Mota and D. J. Shaw, arXiv:0710.1556 [hep-ph]; M. Ahlers, A. Lindner, A. Ringwald, L. Schrempp and C. Weniger, arXiv:0710.1555 [hep-ph].
* [36] T. Heinzl _et al._, Opt. Commun. **267**, 318 (2006) [arXiv:hep-ph/0601076].
* [37] A. Di Piazza, K. Z. Hatsagortsyan and C. H. Keitel, Phys. Rev. Lett. **97**, 083603 (2006); M. Marklund and P. K. Shukla, Rev. Mod. Phys. **78**, 591 (2006).
* [38] A. Dupays, C. Rizzo, M. Roncadelli and G. F. Bignami, Phys. Rev. Lett. **95**, 211302 (2005).
* [39] A. Mirizzi, G. G. Raffelt and P. D. Serpico, arXiv:0704.3044 [astro-ph].
* [40] A. De Angelis, O. Mansutti and M. Roncadelli, arXiv:0707.2695 [astro-ph].
* [41] E. Masso and J. Redondo, Phys. Rev. Lett. **97**, 151802 (2006) [arXiv:hep-ph/0606163].
* [42] R. N. Mohapatra and S. Nasri, Phys. Rev. Lett. **98** (2007) 050402 [arXiv:hep-ph/0610068].
* [43] P. Jain and S. Stokes, arXiv:hep-ph/0611006.
* [44] R. Foot and A. Kobakhidze, Phys. Lett. B **650** (2007) 46 [arXiv:hep-ph/0702125].
* [45] I. Antoniadis, A. Boyarsky and O. Ruchayskiy, arXiv:0708.3001 [hep-ph].
* [46] S. A. Abel, J. Jaeckel, V. V. Khoze and A. Ringwald, arXiv:hep-ph/0608248.
* [47] C. Schubert, Phys. Rept. **355**, 73 (2001).

Abstract: Quantum vacuum experiments are becoming a flexible tool for investigating fundamental physics. They are particularly powerful for searching for new light but weakly interacting degrees of freedom and are thus complementary to accelerator-driven experiments. I review recent developments in this field, focusing on optical experiments in strong electromagnetic fields. In order to characterize potential optical signatures, I discuss various low-energy effective actions which parameterize the interaction of particle-physics candidates with optical photons and external electromagnetic fields. Experiments with an electromagnetized quantum vacuum and optical probes do not only have the potential to collect evidence for new physics, but special-purpose setups can also distinguish between different particle-physics scenarios and extract information about underlying microscopic properties.
## 1 Introduction
Wave propagation localized to a planar metal-dielectric interface has a long history (Zenneck, 1907; Agranovich and Mills, 1982; Boardman, 1982; Homola et al., 1999; Matveeva et al., 2005). In contrast, wave propagation localized to the planar interface of an isotropic dielectric material and a uniaxial dielectric material was found possible by D'yakonov only in 1988 (D'yakonov, 1988; Averkiev and Dyakonov, 1990). Since then, researchers have explored surface-wave propagation (SWP) on increasingly complex systems of bimaterial interfaces such as biaxial-isotropic, uniaxial-uniaxial, and biaxial-biaxial (Walker et al., 1998; Darinskii, 2001; Wong et al., 2005; Polo et al., 2006, 2007a,b; Nelatury et al., 2007). In all such investigations, essentially, one looks for the selected angular regimes of the propagation direction wherein certain dispersion conditions are met. For complex systems, the angular regimes are very narrow and depend strongly on the crystallographic symmetries of the two materials.
Our motivation for this paper is to show how one might exercise control on the angular regimes of SWP. Whereas temperature and pressure may be altered to control SWP, the application of an external field is expected to provide dynamic control. In particular, the electro-optic effect refers to changes in optical properties by the application of a low-frequency or dc electric field. For instance, an optically isotropic crystal (possessing either the \(\bar{4}3m\) or \(23\) point group symmetry) upon exposure to a dc electric field turns birefringent (Cook, 1996; Lakhtakia, 2006a).
This paper introduces the influence of the Pockels effect on SWP. Although the theoretical treatment is general, numerical results are presented only for a specific EO material: potassium niobate (Zgonik et al., 1993). The remainder of the paper is organized as follows: Section 2 provides a description of a canonical boundary-value problem, the optical permittivity matrix of an EO crystal, and the derivation of the dispersion relations for SWP. In Section 3 numerical results are furnished, and our conclusions are distilled in Section 4. A note on notation: vectors are underlined, matrixes are decorated with an overbar, and the Cartesian unit vectors are denoted as \\(\\underline{u}_{x}\\), \\(\\underline{u}_{y}\\), and \\(\\underline{u}_{z}\\). All field quantities are assumed to have an \\(\\exp(-i\\omega t)\\) time-dependence.
## 2 Theory
Let the plane \(z=0\) be the bimaterial interface in which SWP occurs. The half-space \(z<0\) is filled with a homogeneous, isotropic, dielectric material with an optical refractive index denoted by \(n_{s}\). The half-space \(z>0\) is filled with a homogeneous, linear, EO material, whose optical relative permittivity matrix is stated as (Lakhtakia and Reyes, 2006a,b)
\\[\\bar{\\epsilon}_{\\,rel}=\\bar{S}_{z}\\left(\\psi\\right)\\cdot\\bar{R}_{y}(\\chi) \\cdot\\bar{\\epsilon}_{PE}\\cdot\\bar{R}_{y}(\\chi)\\cdot\\bar{S}_{z}\\left(-\\psi \\right)\\,. \\tag{1}\\]
Incorporating the Pockels effect due to an arbitrarily oriented but uniform dc electric field \\(\\underline{E}^{dc}\\), the matrix \\(\\bar{\\epsilon}_{PE}\\) is given by
\\[\\bar{\\epsilon}_{PE}\\approx\\left(\\begin{array}{ccc}\\epsilon_{1}^{(0)}(1- \\epsilon_{1}^{(0)}\\sum_{K=1}^{3}r_{1K}E_{K}^{dc})&-\\epsilon_{1}^{(0)}\\epsilon _{2}^{(0)}\\sum_{K=1}^{3}r_{6K}E_{K}^{dc}&-\\epsilon_{1}^{(0)}\\epsilon_{3}^{(0)} \\sum_{K=1}^{3}r_{5K}E_{K}^{dc}\\\\ -\\epsilon_{2}^{(0)}\\epsilon_{1}^{(0)}\\sum_{K=1}^{3}r_{6K}E_{K}^{dc}&\\epsilon_ {2}^{(0)}(1-\\epsilon_{2}^{(0)}\\sum_{K=1}^{3}r_{2K}E_{K}^{dc})&-\\epsilon_{2}^{( 0)}\\epsilon_{3}^{(0)}\\sum_{K=1}^{3}r_{4K}E_{K}^{dc}\\\\ -\\epsilon_{3}^{(0)}\\epsilon_{1}^{(0)}\\sum_{K=1}^{3}r_{5K}E_{K}^{dc}&-\\epsilon_ {3}^{(0)}\\epsilon_{2}^{(0)}\\sum_{K=1}^{3}r_{4K}E_{K}^{dc}&\\epsilon_{3}^{(0)}(1- \\epsilon_{3}^{(0)}\\sum_{K=1}^{3}r_{3K}E_{K}^{dc})\\end{array}\\right), \\tag{2}\\]
correct to the first order in \\(|\\underline{E}^{dc}|\\), where
\\[\\left(\\begin{array}{c}E_{1}^{dc}\\\\ E_{2}^{dc}\\\\ E_{3}^{dc}\\end{array}\\right)=\\bar{R}_{y}(\\chi)\\cdot\\bar{S}_{z}\\left(-\\psi \\right)\\cdot\\left(\\begin{array}{c}E_{x}^{dc}\\\\ E_{y}^{dc}\\\\ E_{z}^{dc}\\end{array}\\right)\\,, \\tag{3}\\]
\\(\\epsilon_{1,2,3}^{(0)}\\) are the principal relative permittivity scalars in the optical regime, whereas \\(r_{JK}\\) (with \\(1\\leq J\\leq 6\\) and \\(1\\leq K\\leq 3\\)) are the EO coefficients. The EO material can be isotropic, uniaxial, or biaxial, depending on the relative values of \\(\\epsilon_{1}^{(0)}\\), \\(\\epsilon_{2}^{(0)}\\), and \\(\\epsilon_{3}^{(0)}\\). Furthermore, the EO material may belong to one of 20 crystallographic classes of point group symmetry, in accordance with the relative values of the EO coefficients \\(r_{JK}\\).
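As a computational aid, Equation (2) translates directly into a short numerical sketch, given below in Python. The function and variable names are ours (no published code is implied), and the input conventions are assumptions made for illustration: the EO coefficients are supplied as a \\(6\\times 3\\) array in Voigt notation, and the dc field is expressed in the crystal \\((1,2,3)\\) frame.

```python
import numpy as np

def epsilon_pe(eps0, r, E_dc_123):
    """Pockels-perturbed relative permittivity matrix of Equation (2),
    correct to first order in |E_dc|.

    eps0     : (3,) principal relative permittivities eps_J^(0)
    r        : (6, 3) electro-optic coefficients r_JK (m/V, Voigt row index J)
    E_dc_123 : (3,) dc electric field in the crystal (1,2,3) frame (V/m)
    """
    e1, e2, e3 = eps0
    s = r @ E_dc_123   # s[J-1] = sum_K r_JK E_K^dc, J = 1..6
    return np.array([
        [e1 * (1 - e1 * s[0]), -e1 * e2 * s[5],       -e1 * e3 * s[4]],
        [-e2 * e1 * s[5],       e2 * (1 - e2 * s[1]), -e2 * e3 * s[3]],
        [-e3 * e1 * s[4],      -e3 * e2 * s[3],        e3 * (1 - e3 * s[2])],
    ])
```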
The rotation matrix
\\[\\bar{S}_{z}(\\psi)=\\left(\\begin{array}{ccc}\\cos\\psi&-\\sin\\psi&0\\\\ \\sin\\psi&\\cos\\psi&0\\\\ 0&0&1\\end{array}\\right) \\tag{4}\\]
in Equation (1) denotes a rotation about the \\(z\\) axis by an angle \\(\\psi\\in[0,2\\pi)\\). The matrix
\\[\\bar{R}_{y}(\\chi)=\\left(\\begin{array}{ccc}-\\sin\\chi&0&\\cos\\chi\\\\ 0&-1&0\\\\ \\cos\\chi&0&\\sin\\chi\\end{array}\\right) \\tag{5}\\]
involves the angle \\(\\chi\\in[0,\\pi/2]\\) with respect to the \\(x\\) axis in the \\(xz\\) plane, and combines a rotation as well as an inversion. The angles \\(\\psi\\) and \\(\\chi\\) delineate the orientation of the EO material in the laboratory coordinate system, the full transformation from laboratory coordinates \\((x,y,z)\\) to those used conventionally for EO materials \\((1,2,3)\\) being illustrated in Figure 1.
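The orientation matrices of Equations (4) and (5), and the compositions of Equations (1) and (3), are equally simple to code. The sketch below reuses the hypothetical epsilon_pe function from the previous sketch and assumes the angles are supplied in radians.

```python
import numpy as np

def S_z(psi):
    """Rotation about the z axis, Equation (4); psi in radians."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def R_y(chi):
    """Rotation-plus-inversion matrix of Equation (5); chi in radians."""
    c, s = np.cos(chi), np.sin(chi)
    return np.array([[-s, 0.0, c], [0.0, -1.0, 0.0], [c, 0.0, s]])

def epsilon_rel(psi, chi, eps0, r, E_dc_lab):
    """Laboratory-frame relative permittivity matrix, Equations (1)-(3)."""
    E_123 = R_y(chi) @ S_z(-psi) @ E_dc_lab        # Equation (3)
    eps_pe = epsilon_pe(eps0, r, E_123)            # Equation (2)
    return S_z(psi) @ R_y(chi) @ eps_pe @ R_y(chi) @ S_z(-psi)   # Equation (1)
```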
Now \\(\\bar{\\epsilon}_{PE}\\) is a symmetric matrix, regardless of the magnitude and direction of \\(\\underline{E}^{dc}\\). With the assumption that all of its elements are real-valued, \\(\\bar{\\epsilon}_{PE}\\) can be written as \\(\\alpha_{1}\\left(\\underline{u}_{x}\\underline{u}_{x}+\\underline{u}_{y}\\underline{u}_{y}+\\underline{u}_{z}\\underline{u}_{z}\\right)+\\alpha_{2}\\left(\\underline{u}_{m}\\underline{u}_{n}+\\underline{u}_{n}\\underline{u}_{m}\\right)\\), where \\(\\alpha_{1}\\) and \\(\\alpha_{2}\\) are scalars and the unit vectors \\(\\underline{u}_{m}\\) and \\(\\underline{u}_{n}\\) are parallel to the crystallographic axes or the optical ray axes of the EO material in the \\((1,2,3)\\) coordinate system (Chen 1983). Likewise, \\(\\bar{\\epsilon}_{PE}^{-1}\\) can be written as \\(\\beta_{1}\\left(\\underline{u}_{x}\\underline{u}_{x}+\\underline{u}_{y}\\underline{u}_{y}+\\underline{u}_{z}\\underline{u}_{z}\\right)+\\beta_{2}\\left(\\underline{u}_{p}\\underline{u}_{q}+\\underline{u}_{q}\\underline{u}_{p}\\right)\\), where \\(\\beta_{1}\\) and \\(\\beta_{2}\\) are scalars and the unit vectors \\(\\underline{u}_{p}\\) and \\(\\underline{u}_{q}\\) are parallel to the optic axes of the EO material in the \\((1,2,3)\\) coordinate system. Thus, the application of the uniform dc electric field not only changes the eigenvalues of \\(\\bar{\\epsilon}_{PE}\\), but also rotates both optic axes and both optical ray axes, in general.
Figure 1: Relationship of laboratory coordinates \\((x,y,z)\\) to the conventional electro-optic coordinates \\((1,2,3)\\). SWP is taken to occur parallel to the \\(x\\) axis with phase velocity \\(\\underline{v}\\).
### Field representations
Without loss of generality we assume that the SWP direction is parallel to the \\(x\\) axis. The fields in the half-space \\(z<0\\) must satisfy the equations
\\[\\left.\\begin{array}{l}\\underline{k}_{s}\\times\\underline{\\mathcal{E}}_{s}= \\omega\\mu_{o}\\,\\underline{\\mathcal{H}}_{s}\\\\ \\underline{k}_{s}\\times\\underline{\\mathcal{H}}_{s}=-\\omega\\epsilon_{o}\\,n_{s }^{2}\\,\\underline{\\mathcal{E}}_{s}\\end{array}\\right\\}\\,, \\tag{6}\\]
where \\(\\epsilon_{o}\\) and \\(\\mu_{o}\\) are the permittivity and permeability of free space, and \\(\\underline{\\mathcal{E}}_{s}\\) and \\(\\underline{\\mathcal{H}}_{s}\\) are the complex-valued amplitudes of the electric and magnetic field phasors in the isotropic dielectric material. The wave vector
\\[\\underline{k}_{s}=k_{o}\\left(\\varkappa\\,\\underline{u}_{x}-iq_{s}\\,\\underline {u}_{z}\\right), \\tag{7}\\]
where \\(\\varkappa\\) and
\\[q_{s}=+\\sqrt{\\varkappa^{2}-n_{s}^{2}} \\tag{8}\\]
are the normalized propagation constant and the decay constant, respectively, whereas \\(k_{o}=\\omega\\sqrt{\\epsilon_{o}\\mu_{o}}\\) is the free-space wavenumber. We must have \\(\\mathrm{Re}\\left[q_{s}\\right]>0\\) for SWP; furthermore, \\(\\varkappa\\) must be real-valued and positive for un-attenuated propagation along the \\(x\\) axis. Accordingly, the acceptable solutions of Equations (6) are
\\[\\underline{\\mathcal{E}}_{s}=A_{s1}\\,\\underline{u}_{y}+A_{s2}\\left(iq_{s}\\, \\underline{u}_{x}+\\varkappa\\,\\underline{u}_{z}\\right) \\tag{9}\\]
and
\\[\\underline{\\mathcal{H}}_{s}=\\sqrt{\\frac{\\epsilon_{o}}{\\mu_{o}}}\\left[A_{s1} \\left(iq_{s}\\,\\underline{u}_{x}+\\varkappa\\,\\underline{u}_{z}\\right)-A_{s2}\\,n _{s}^{2}\\underline{u}_{y}\\right]\\,, \\tag{10}\\]
where \\(A_{s1}\\) and \\(A_{s2}\\) are unknown scalars. If \\(q_{s}\\) is purely real-valued, the electric and magnetic field phasors of the surface wave decay exponentially with respect to \\(z\\) as \\(z\\to-\\infty\\); otherwise, the decay is damped sinusoidal.
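For the numerical work described in Section 3, it helps to tabulate the tangential components of these fields at the interface. The sketch below (with our own names; Y0 is the free-space admittance \\(\\sqrt{\\epsilon_{o}/\\mu_{o}}\\)) lists them per unit amplitude \\(A_{s1}\\) and \\(A_{s2}\\), following Equations (8)-(10).

```python
import numpy as np

Y0 = np.sqrt(8.854e-12 / (4e-7 * np.pi))   # sqrt(eps_o/mu_o)

def isotropic_side(kappa, n_s):
    """Tangential field components at z = 0- from Equations (8)-(10).

    Returns q_s and a 4x2 array whose rows are (E_x, E_y, H_x, H_y)
    and whose two columns multiply the unknown amplitudes A_s1, A_s2.
    """
    q_s = np.sqrt(kappa**2 - n_s**2 + 0j)
    if q_s.real < 0:
        q_s = -q_s                          # enforce Re[q_s] > 0
    return q_s, np.array([
        [0.0,           1j * q_s],          # u_x . E_s
        [1.0,           0.0],               # u_y . E_s
        [1j * q_s * Y0, 0.0],               # u_x . H_s
        [0.0,           -Y0 * n_s**2],      # u_y . H_s
    ])
```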
The fields in the half-space \\(z>0\\) must be solutions of the equations
\\[\\left.\\begin{array}{l}\\underline{k}_{e}\\times\\underline{\\mathcal{E}}_{e}=\\omega\\mu_{o}\\,\\underline{\\mathcal{H}}_{e}\\\\ \\underline{k}_{e}\\times\\underline{\\mathcal{H}}_{e}=-\\omega\\epsilon_{o}\\,\\bar{\\epsilon}\\,_{rel}\\cdot\\underline{\\mathcal{E}}_{e}\\end{array}\\right\\}\\,. \\tag{11}\\]
The wave vector
\\[\\underline{k}_{e}=k_{o}\\left(\\varkappa\\,\\underline{u}_{x}+iq_{e}\\,\\underline{u}_{z}\\right) \\tag{12}\\]
must have \\(\\mathrm{Re}\\left[q_{e}\\right]>0\\) for localization of energy to the bimaterial interface. A purely real-valued \\(q_{e}\\) indicates an exponential decay of the field phasors with respect to \\(z\\) as \\(z\\to+\\infty\\), whereas \\(\\mathrm{Im}\\left[q_{e}\\right]\\neq 0\\) indicates a damped-sinusoidal decay.
Substitution of \\(\\underline{k}_{e}\\) into Equations (11) yields a set of six homogeneous equations that are linear in the six Cartesian components of \\(\\underline{\\mathcal{E}}_{e}\\) and \\(\\underline{\\mathcal{H}}_{e}\\). Setting the determinant of the coefficients equal to zero gives the dispersion equation
\\[C_{0}+C_{1}q_{e}+C_{2}q_{e}^{2}+C_{3}q_{e}^{3}+C_{4}q_{e}^{4}=0\\,, \\tag{13}\\]
where the coefficients
\\[C_{0} = \\epsilon_{xx}\\epsilon_{yz}^{2}+\\epsilon_{xy}^{2}\\epsilon_{zz}+ \\epsilon_{xz}^{2}\\epsilon_{yy}-\\epsilon_{xx}\\epsilon_{yy}\\epsilon_{zz}-2\\epsilon _{xy}\\epsilon_{xz}\\epsilon_{yz} \\tag{14}\\] \\[+(\\epsilon_{xx}\\epsilon_{yy}+\\epsilon_{xx}\\epsilon_{zz}- \\epsilon_{xy}^{2}-\\epsilon_{xz}^{2})\\varkappa^{2}-\\epsilon_{xx}\\varkappa^{4}\\,,\\] \\[C_{1} = 2i(\\epsilon_{xz}\\epsilon_{yy}-\\epsilon_{xy}\\epsilon_{yz}) \\varkappa-2i\\epsilon_{xz}\\varkappa^{3}\\,,\\] (15) \\[C_{2} = \\epsilon_{xz}^{2}+\\epsilon_{yz}^{2}-(\\epsilon_{xx}+\\epsilon_{yy} )\\epsilon_{zz}+(\\epsilon_{xx}+\\epsilon_{zz})\\varkappa^{2}\\,,\\] (16) \\[C_{3} = 2i\\varkappa\\epsilon_{xz}\\,,\\] (17) \\[C_{4} = -\\epsilon_{zz}\\,, \\tag{18}\\]
involve \\(\\epsilon_{xy}=\\underline{u}_{x}\\cdot\\bar{\\epsilon}_{rel}\\cdot\\underline{u}_{y}\\), etc.
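Numerically, Equation (13) is most easily handled as a quartic polynomial whose roots are then filtered by the localization requirement \\(\\mathrm{Re}\\left[q_{e}\\right]>0\\). A minimal sketch, with our own function name:

```python
import numpy as np

def eo_decay_constants(eps, kappa):
    """Roots of the dispersion equation (13) with Re[q_e] > 0.

    eps   : 3x3 laboratory-frame relative permittivity matrix
    kappa : normalized propagation constant
    """
    exx, exy, exz = eps[0, 0], eps[0, 1], eps[0, 2]
    eyy, eyz, ezz = eps[1, 1], eps[1, 2], eps[2, 2]
    k2 = kappa**2
    C0 = (exx * eyz**2 + exy**2 * ezz + exz**2 * eyy
          - exx * eyy * ezz - 2 * exy * exz * eyz
          + (exx * eyy + exx * ezz - exy**2 - exz**2) * k2
          - exx * k2**2)                                              # Eq. (14)
    C1 = 2j * (exz * eyy - exy * eyz) * kappa - 2j * exz * kappa * k2  # Eq. (15)
    C2 = exz**2 + eyz**2 - (exx + eyy) * ezz + (exx + ezz) * k2        # Eq. (16)
    C3 = 2j * kappa * exz                                              # Eq. (17)
    C4 = -ezz                                                          # Eq. (18)
    roots = np.roots([C4, C3, C2, C1, C0])      # highest degree first
    q = [r for r in roots if r.real > 0]
    assert len(q) == 2, "expected exactly two decay constants with Re > 0"
    return q
```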
The solution of Equation (13) leads to four values of \\(q_{e}\\). We select the two values of \\(q_{e}\\) that conform to the restriction \\(\\mbox{Re}\\left[q_{e}\\right]>0\\), and label them as \\(q_{e1}\\) and \\(q_{e2}\\). The corresponding wave vectors in the EO material are denoted by \\(\\underline{k}_{e1}\\) and \\(\\underline{k}_{e2}\\). Accordingly, with unknown coefficients \\(A_{e1}\\) and \\(A_{e2}\\), we write the Cartesian components of the field phasors in the half-space \\(z>0\\) as
\\[\\underline{u}_{x}\\cdot\\underline{\\cal E}_{e} = \\frac{\\epsilon_{xz}\\epsilon_{yy}-\\epsilon_{xy}\\epsilon_{yz}+\\epsilon_{xz}q_{e1}^{2}+i(\\epsilon_{yy}q_{e1}+q_{e1}^{3})\\varkappa-i(\\epsilon_{xz}+q_{e1})\\varkappa^{2}}{-\\epsilon_{xy}\\epsilon_{xz}+\\epsilon_{xx}\\epsilon_{yz}+\\epsilon_{yz}q_{e1}^{2}-i\\epsilon_{xy}q_{e1}\\varkappa}A_{e1} \\tag{19}\\] \\[+\\frac{\\epsilon_{xz}\\epsilon_{yy}-\\epsilon_{xy}\\epsilon_{yz}+\\epsilon_{xz}q_{e2}^{2}+i(\\epsilon_{yy}q_{e2}+q_{e2}^{3})\\varkappa-i(\\epsilon_{xz}+q_{e2})\\varkappa^{2}}{-\\epsilon_{xy}\\epsilon_{xz}+\\epsilon_{xx}\\epsilon_{yz}+\\epsilon_{yz}q_{e2}^{2}-i\\epsilon_{xy}q_{e2}\\varkappa}A_{e2}\\,,\\] \\[\\underline{u}_{y}\\cdot\\underline{\\cal E}_{e} = A_{e1}+A_{e2}\\,, \\tag{20}\\] \\[\\underline{u}_{z}\\cdot\\underline{\\cal E}_{e} = \\frac{\\left(\\epsilon_{xx}\\epsilon_{yy}-\\epsilon_{xy}^{2}+(\\epsilon_{xx}+\\epsilon_{yy})q_{e1}^{2}+q_{e1}^{4}\\right)-(\\epsilon_{xx}+q_{e1}^{2})\\varkappa^{2}}{-\\epsilon_{xx}\\epsilon_{yz}+\\epsilon_{xy}\\epsilon_{xz}-\\epsilon_{yz}q_{e1}^{2}+i\\epsilon_{xy}q_{e1}\\varkappa}A_{e1} \\tag{21}\\] \\[+\\frac{\\left(\\epsilon_{xx}\\epsilon_{yy}-\\epsilon_{xy}^{2}+(\\epsilon_{xx}+\\epsilon_{yy})q_{e2}^{2}+q_{e2}^{4}\\right)-(\\epsilon_{xx}+q_{e2}^{2})\\varkappa^{2}}{-\\epsilon_{xx}\\epsilon_{yz}+\\epsilon_{xy}\\epsilon_{xz}-\\epsilon_{yz}q_{e2}^{2}+i\\epsilon_{xy}q_{e2}\\varkappa}A_{e2}\\,,\\]
and
\\[\\underline{u}_{x}\\cdot\\underline{\\cal H}_{e} = -i\\sqrt{\\frac{\\epsilon_{o}}{\\mu_{o}}}\\left(q_{e1}A_{e1}+q_{e2}A_{e2}\\right)\\,, \\tag{22}\\] \\[\\underline{u}_{y}\\cdot\\underline{\\cal H}_{e} = i\\sqrt{\\frac{\\epsilon_{o}}{\\mu_{o}}}\\left\\{\\frac{i(-\\epsilon_{xy}\\epsilon_{yz}+\\epsilon_{xx}\\epsilon_{yy})q_{e1}+\\epsilon_{xz}q_{e1}^{3}+(-\\epsilon_{xx}+\\epsilon_{xx}\\epsilon_{yy}-\\epsilon_{xy}^{2})\\varkappa}{\\epsilon_{xx}\\epsilon_{yz}-\\epsilon_{xy}\\epsilon_{xz}+\\epsilon_{yz}q_{e1}^{2}-i\\epsilon_{xy}q_{e1}\\varkappa}\\right. \\tag{23}\\] \\[+\\left.\\frac{(-\\epsilon_{xx}+\\epsilon_{xx}\\epsilon_{yy}-\\epsilon_{xy}^{2}-i\\epsilon_{xz}q_{e1}+\\epsilon_{xx}q_{e1}^{2})\\varkappa^{3}-\\epsilon_{xx}\\varkappa^{4}}{\\epsilon_{xx}\\epsilon_{yz}-\\epsilon_{xy}\\epsilon_{xz}+\\epsilon_{yz}q_{e1}^{2}-i\\epsilon_{xy}q_{e1}\\varkappa}\\right\\}A_{e1}\\] \\[+i\\sqrt{\\frac{\\epsilon_{o}}{\\mu_{o}}}\\left\\{\\frac{i(-\\epsilon_{xy}\\epsilon_{yz}+\\epsilon_{xx}\\epsilon_{yy})q_{e2}+\\epsilon_{xz}q_{e2}^{3}+(-\\epsilon_{xx}+\\epsilon_{xx}\\epsilon_{yy}-\\epsilon_{xy}^{2})\\varkappa}{\\epsilon_{xx}\\epsilon_{yz}-\\epsilon_{xy}\\epsilon_{xz}+\\epsilon_{yz}q_{e2}^{2}-i\\epsilon_{xy}q_{e2}\\varkappa}\\right.\\] \\[+\\left.\\frac{(-\\epsilon_{xx}+\\epsilon_{xx}\\epsilon_{yy}-\\epsilon_{xy}^{2}-i\\epsilon_{xz}q_{e2}+\\epsilon_{xx}q_{e2}^{2})\\varkappa^{3}-\\epsilon_{xx}\\varkappa^{4}}{\\epsilon_{xx}\\epsilon_{yz}-\\epsilon_{xy}\\epsilon_{xz}+\\epsilon_{yz}q_{e2}^{2}-i\\epsilon_{xy}q_{e2}\\varkappa}\\right\\}A_{e2}\\,,\\] \\[\\underline{u}_{z}\\cdot\\underline{\\cal H}_{e} = \\sqrt{\\frac{\\epsilon_{o}}{\\mu_{o}}}\\varkappa(A_{e1}+A_{e2})\\,. \\tag{27}\\]
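Rather than transcribing the lengthy closed forms above, one may recover the fields of each partial wave numerically: the electric field is the null vector of the \\(3\\times 3\\) matrix obtained by substituting \\(\\underline{k}_{e}\\) into Equations (11), and the magnetic field then follows from the first of those equations. The sketch below takes this route; it should be numerically equivalent to Equations (19)-(27), with each mode normalized so that \\(\\underline{u}_{y}\\cdot\\underline{\\mathcal{E}}_{e}=1\\), i.e., so that its amplitude is \\(A_{ej}\\) as in Equation (20).

```python
import numpy as np

Y0 = np.sqrt(8.854e-12 / (4e-7 * np.pi))   # sqrt(eps_o/mu_o)

def eo_mode_fields(eps, kappa, q_e):
    """Fields of one partial wave in the EO half-space.

    E solves (k k^T - (k.k) I + eps) E = 0 with normalized wave vector
    k = (kappa, 0, i q_e), as follows from Equations (11);
    H = sqrt(eps_o/mu_o) k x E.
    """
    k = np.array([kappa, 0.0, 1j * q_e])
    W = np.outer(k, k) - (k @ k) * np.eye(3) + eps
    _, _, Vh = np.linalg.svd(W)
    E = Vh[-1].conj()     # null vector: right singular vector of smallest sigma
    E = E / E[1]          # normalize; assumes u_y . E != 0 for the mode
    H = Y0 * np.cross(k, E)
    return E, H
```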
### Boundary conditions
The boundary conditions at the interface \\(z=0\\) lead to the following four equations:
\\[\\left.\\begin{array}{rcl}\\underline{u}_{x}\\cdot\\underline{\\mathcal{E}}_{s}&=&\\underline{u}_{x}\\cdot\\underline{\\mathcal{E}}_{e}\\\\ \\underline{u}_{y}\\cdot\\underline{\\mathcal{E}}_{s}&=&\\underline{u}_{y}\\cdot\\underline{\\mathcal{E}}_{e}\\\\ \\underline{u}_{x}\\cdot\\underline{\\mathcal{H}}_{s}&=&\\underline{u}_{x}\\cdot\\underline{\\mathcal{H}}_{e}\\\\ \\underline{u}_{y}\\cdot\\underline{\\mathcal{H}}_{s}&=&\\underline{u}_{y}\\cdot\\underline{\\mathcal{H}}_{e}\\end{array}\\right\\}\\,. \\tag{28}\\]
These equations may be cast in matrix form as
\\[\\bar{M}\\cdot\\left(\\begin{array}{c}A_{s1}\\\\ A_{s2}\\\\ A_{e1}\\\\ A_{e2}\\end{array}\\right)=\\left(\\begin{array}{c}0\\\\ 0\\\\ 0\\\\ 0\\end{array}\\right)\\,, \\tag{29}\\]
where \\(\\bar{M}\\) is a 4\\(\\times\\)4 matrix. For a non-trivial solution, the determinant of \\(\\bar{M}\\) must equal zero; thus, the SWP dispersion equation is
\\[\\det\\bar{M}=0\\,. \\tag{30}\\]
Because of the complexity of Equation (30), an algebraic result could not be obtained and recourse was taken to a numerical method of solution. Parenthetically, when \\(|\\underline{E}^{dc}|=0\\), the Pockels effect is not invoked, and the presented formulation simplifies to that of Polo et al. (2007a).
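For concreteness, the numerical method we allude to can be sketched as follows: assemble \\(\\bar{M}\\) column by column from the tangential field components of the two isotropic-side and two EO-side partial waves, and scan \\(\\varkappa\\) for zeros of \\(|\\det\\bar{M}|\\). The sketch reuses the hypothetical helper functions introduced earlier; the scan limits in the comment are merely illustrative.

```python
import numpy as np

def det_M(kappa, n_s, eps):
    """Determinant of the 4x4 boundary-condition matrix of Equation (29)."""
    _, iso = isotropic_side(kappa, n_s)          # columns for A_s1, A_s2
    cols = [iso[:, 0], iso[:, 1]]
    for q_e in eo_decay_constants(eps, kappa):
        E, H = eo_mode_fields(eps, kappa, q_e)
        # minus sign: EO-side terms moved to the left-hand side of Eqs. (28)
        cols.append(-np.array([E[0], E[1], H[0], H[1]]))
    return np.linalg.det(np.array(cols).T)

# usage sketch: eps = epsilon_rel(psi, chi, eps0, r, E_dc_lab); then scan
# kappa over, e.g., np.linspace(2.28, 2.35, 701) for minima of abs(det_M).
```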
## 3 Numerical Results and Discussion
The Pockels effect occurs only in dielectric materials that lack inversion symmetry. Some examples of materials of this type are potassium niobate, lithium niobate and gallium arsenide. We chose potassium niobate for calculations since it has very large EO coefficients. As potassium niobate belongs to the orthorhombic \\(mm2\\) class, the only non-zero EO coefficients are \\(r_{13}\\), \\(r_{23}\\), \\(r_{33}\\), \\(r_{42}\\), and \\(r_{51}\\); hence, we get
\\[\\bar{\\epsilon}_{PE}\\approx\\left(\\begin{array}{ccc}\\epsilon_{1}^{(0)}(1- \\epsilon_{1}^{(0)}\\,r_{13}E_{3}^{dc})&0&-\\epsilon_{1}^{(0)}\\epsilon_{3}^{(0)} \\,r_{51}E_{1}^{dc}\\\\ 0&\\epsilon_{2}^{(0)}(1-\\epsilon_{2}^{(0)}\\,r_{23}E_{3}^{dc})&-\\epsilon_{2}^{(0 )}\\epsilon_{3}^{(0)}\\,r_{42}E_{2}^{dc}\\\\ -\\epsilon_{3}^{(0)}\\epsilon_{1}^{(0)}\\,r_{51}E_{1}^{dc}&-\\epsilon_{3}^{(0)} \\epsilon_{2}^{(0)}\\,r_{42}E_{2}^{dc}&\\epsilon_{3}^{(0)}(1-\\epsilon_{3}^{(0)} \\,r_{33}E_{3}^{dc})\\end{array}\\right) \\tag{31}\\]
from Equation (2). Constitutive data for potassium niobate are as follows (Zgonik et al., 1993): \\(\\epsilon_{1}^{(0)}=4.72\\), \\(\\epsilon_{2}^{(0)}=5.20\\), \\(\\epsilon_{3}^{(0)}=5.43\\), \\(r_{13}=34\\times 10^{-12}\\) m V\\({}^{-1}\\), \\(r_{23}=6\\times 10^{-12}\\) m V\\({}^{-1}\\), \\(r_{33}=63.4\\times 10^{-12}\\) m V\\({}^{-1}\\), \\(r_{42}=450\\times 10^{-12}\\) m V\\({}^{-1}\\), and \\(r_{51}=120\\times 10^{-12}\\) m V\\({}^{-1}\\).
The existence of a surface wave was determined at various values of \\(\\chi\\), \\(\\psi\\), and \\(n_{s}\\) by satisfactory solution of Equation (30). Purely for illustrative purposes, the most detailed calculations were performed at only one value of \\(\\chi\\): \\(60^{\\circ}\\); at this value of \\(\\chi\\) the range of propagation directions is relatively wide. Let us note that the range of values of \\(n_{s}\\) for SWP at the planar interface of a non-EO biaxial dielectric material and a non-EO isotropic material is limited (Polo et al., 2007a), and the range of values of the orientation angle \\(\\psi\\) for a specific \\(n_{s}\\) is quite small as well. In the remainder of this section, the mid-point of the \\(\\psi\\)-range is denoted by \\(\\psi_{m}\\) and the width of that range by \\(\\Delta\\psi\\).
### Results for \\(\\chi=60^{\\circ}\\)
In order to provide a baseline for the influence of the Pockels effect on SWP, Figure 2 shows \\(\\psi_{m}\\) and \\(\\Delta\\psi\\) as functions of \\(n_{s}\\) for \\(\\chi=60^{\\circ}\\) when \\(|\\underline{E}^{dc}|=0\\). The plots in the figure cover the entire \\(n_{s}\\)-range over which SWP was found possible. The \\(n_{s}\\)-range extends from approximately 2.292 to 2.33, and is thus only 0.038 in width. The \\(\\psi\\)-range in the figure is limited to \\([0^{\\circ},180^{\\circ}]\\); if SWP is possible for a certain value of \\(\\psi\\), it is also possible for \\(-\\psi\\), when \\(|\\underline{E}^{dc}|=0\\).
Over the range \\(0^{\\circ}\\leq\\psi\\leq 180^{\\circ}\\) two bands of \\(\\psi\\) values for SWP can be deduced from Figure 2 for each value of \\(n_{s}\\), except at the largest value of \\(n_{s}\\) where the two bands coalesce. At the lower limit of the \\(n_{s}\\)-range, \\(\\psi_{m}\\) approaches either \\(0^{\\circ}\\) or \\(180^{\\circ}\\); while at the upper limit of that range, \\(\\psi_{m}\\) approaches \\(90^{\\circ}\\) for both bands. When one considers the entire \\(\\psi\\)-range (i.e., \\(-180^{\\circ}\\leq\\psi\\leq 180^{\\circ}\\)), there are four bands of \\(\\psi\\)-values that merge into two bands at both limits of the \\(n_{s}\\)-range.
In Figure 2b, \\(\\Delta\\psi\\) is shown as a function of \\(n_{s}\\). Only one curve is shown since both \\(\\psi\\)-bands have the same width at each value of \\(n_{s}\\). The curve has a single peak and \\(\\Delta\\psi\\) goes to zero at the two endpoints of the \\(n_{s}\\)-range. The maximum value of \\(\\Delta\\psi\\) is about \\(0.03^{\\circ}\\) and occurs at \\(n_{s}\\approx 2.296\\), i.e., near the lower endpoint of the \\(n_{s}\\)-range. Thus, the \\(\\psi\\)-bands for SWP are really narrow.
In order to explore the influence of the Pockels effect on SWP, \\(|\\underline{E}^{dc}|\\) was next set equal to \\(10^{7}\\) V m\\({}^{-1}\\). This dc field was oriented along each of the laboratory coordinate axes (\\(x,y\\), and \\(z\\)) separately. We now describe the results for each direction of orientation of the dc electric field in order of increasing complexity.
Let us begin with \\(\\underline{E}^{dc}=\\underline{u}_{z}\\,10^{7}\\) V m\\({}^{-1}\\). Both \\(\\psi_{m}\\) and \\(\\Delta\\psi\\) are plotted against \\(n_{s}\\) in Figure 3. Just as for the plots for \\(|\\underline{E}^{dc}|=0\\) already discussed, there are four \\(\\psi\\)-bands with mirror symmetry about the \\(n_{s}\\)-axis. Figure 3a shows that the \\(n_{s}\\)-range for SWP has grown slightly and shifted to lower values of \\(n_{s}\\) compared to Figure 2a. With approximate lower and upper endpoints of 2.286 and 2.327 respectively, the width of the \\(n_{s}\\)-range is 0.041. Both \\(\\psi\\)-bands meet at \\(\\psi=90^{\\circ}\\), which is just the same as in the absence of the dc electric field. In Figure 3b, a single curve describes the widths of both \\(\\psi\\)-bands as \\(n_{s}\\) varies, which is similar to the curve in Figure 2b for \\(|\\underline{E}^{dc}|=0\\). The height of the \\(\\Delta\\psi\\)-peak, however, is approximately \\(0.062^{\\circ}\\), a little more than double that of the peak for \\(|\\underline{E}^{dc}|=0\\), and occurs at \\(n_{s}\\approx 2.289\\).
Figure 4 shows the \\(\\psi_{m}\\)-\\(n_{s}\\) and \\(\\Delta\\psi\\)-\\(n_{s}\\) curves when \\(\\underline{E}^{dc}=\\underline{u}_{x}\\,10^{7}\\) V m\\({}^{-1}\\). Just as in the previous two cases, there are four \\(\\psi\\)-bands with mirror symmetry about the \\(n_{s}\\)-axis. So, only two \\(\\psi\\)-bands are shown in the figure. Evidently from Figure 4a, the \\(n_{s}\\)-range for SWP is larger than when the dc electric field is either absent or aligned parallel to the \\(z\\) axis. The width of the \\(n_{s}\\)-range for SWP has grown to 0.049, with the lower and upper endpoints of the \\(n_{s}\\)-range being 2.289 and 2.338, respectively. Although both \\(\\psi\\)-bands in Figure 4a have wider \\(n_{s}\\)-ranges than in the previous two figures, the upper band (labeled Band 2) has increased more than the lower band (Band 1). In addition, the two \\(\\psi\\)-bands meet at a lower value of \\(\\psi_{m}\\) (\\(\\approx 60^{\\circ}\\)), and, thus, lack the symmetry about \\(\\psi_{m}=90^{\\circ}\\) seen in Figures 2a and 3a for the cases of \\(\\underline{E}^{dc}=0\\) and \\(\\underline{E}^{dc}\\parallel\\underline{u}_{z}\\), respectively.
Both \\(\\psi\\)-bands in Figure 4b still show a single \\(\\Delta\\psi\\)-peak each, towards the lower endpoint of the \\(n_{s}\\)-range. However, the \\(\\Delta\\psi\\)-\\(n_{s}\\) curves for the two bands are not identical. Whereas the \\(\\Delta\\psi\\)-peak for Band 2 is about the same as for \\(\\underline{E}^{dc}\\parallel\\underline{u}_{z}\\) in Figure 3b, the \\(\\Delta\\psi\\)-peak for Band 1 is about a third of that value. In addition, the positions of the \\(\\Delta\\psi\\)-peaks are shifted slightly from the value for \\(|\\underline{E}^{dc}|=0\\), with the peak for Band 2 shifted downward to \\(n_{s}=2.300\\) and that for Band 1 upward to \\(n_{s}=2.292\\).
Figure 5 displays the influence of the dc electric field when applied along the \\(y\\) axis, i.e., \\(\\underline{E}^{dc}=\\underline{u}_{y}10^{7}\\) V m\\({}^{-1}\\). In Figure 5a, the full range of \\(\\psi\\), \\([-180^{\\circ},180^{\\circ}]\\), is displayed as the \\(\\psi\\)-bands for SWP are no longer symmetric about \\(\\psi=0^{\\circ}\\); instead, these bands (labeled I to IV) are now located symmetrically about \\(\\psi=90^{\\circ}\\). The widths \\(\\Delta\\psi\\) of the \\(\\psi\\)-bands are shown in Figure 5b. As is the
case for the dc electric field applied parallel to the \\(x\\) axis, the widths of all four bands are not equal. Band I with a \\(\\psi\\)-range of approximately \\([-27^{\\circ},17^{\\circ}]\\) and Band IV with an approximate \\(\\psi\\)-range of \\([163^{\\circ},207^{\\circ}]\\) have the same width as a function of \\(n_{s}\\). The maximum width of these two \\(\\psi\\)-bands is about \\(0.052^{\\circ}\\) and occurs near \\(n_{s}=2.304\\). Similarly, Band II with a \\(\\psi\\)-range of \\([20^{\\circ},84^{\\circ}]\\) and Band III with a \\(\\psi\\)-range of \\([96^{\\circ},160^{\\circ}]\\) share a common \\(\\Delta\\psi\\)-\\(n_{s}\\) curve. The widths of Bands II and III are more than a factor of 10 smaller than those of Bands I and IV, the \\(\\Delta\\psi\\)-peak for Bands II and III being \\(0.0028^{\\circ}\\) at \\(n_{s}\\approx 2.299\\).
The sudden change in \\(\\Delta\\psi\\) at the upper endpoint of the \\(n_{s}\\) range for Bands I and IV in Figure 5b should be noted. A similar sudden change was also found for \\(\\chi=75^{\\circ}\\). The region of coalescence of two bands on the \\(n_{s}\\)-axis is hard to delineate accurately. Quite possibly, the sudden change is a numerical artifact; it will be the subject of future investigations.
The effect of variation of the magnitude of the dc electric field is shown in Figure 6 with a plot of \\(\\psi_{m}\\) vs. the dc field's signed magnitude, for \\(\\underline{E}^{dc}\\) parallel to the \\(x\\), \\(y\\), and \\(z\\) axes. For the dc field oriented along both the \\(y\\) and the \\(z\\) axes, \\(\\psi_{m}\\) increases as the signed magnitude becomes more positive: the relationship is nearly linear for \\(\\underline{E}^{dc}\\parallel\\underline{u}_{z}\\), and somewhat S-shaped for \\(\\underline{E}^{dc}\\parallel\\underline{u}_{y}\\). On the other hand, when \\(\\underline{E}^{dc}\\parallel\\underline{u}_{x}\\), \\(\\psi_{m}\\) decreases as the signed magnitude becomes more positive. The foregoing results clearly show that by varying the magnitude and/or direction of the dc electric field, the direction of SWP, relative to the crystal axes of the EO material, can be controlled.
### Results for other \\(\\chi\\)
Limited results were also obtained for \\(\\chi=0^{\\circ}\\), \\(30^{\\circ}\\), and \\(75^{\\circ}\\), with the search for SWP restricted to \\(\\psi\\in[0^{\\circ},90^{\\circ}]\\). Figure 7 contains \\(\\psi_{m}\\)-\\(n_{s}\\) curves for \\(\\chi\\in\\{0^{\\circ},30^{\\circ},60^{\\circ},75^{\\circ}\\}\\), both for \\(\\underline{E}^{dc}=0\\) and for \\(\\underline{E}^{dc}\\) of signed magnitude \\(+10^{7}\\) V m\\({}^{-1}\\) parallel to the \\(x\\), \\(y\\), and \\(z\\) axes. Although both Band 1 and Band 2 (when \\(\\underline{E}^{dc}\\parallel\\underline{u}_{x}\\)) and Band I and Band II (when \\(\\underline{E}^{dc}\\parallel\\underline{u}_{y}\\)) should be visible in the plots, for simplicity, only Band 1 and Band II are shown.
The curves for the chosen values of \\(\\chi\\) are isomorphic. At each value of \\(\\chi\\), a marked downward shift of the \\(\\psi_{m}\\)-\\(n_{s}\\) curve, compared to the case for \\(|\\underline{E}^{dc}|=0\\), occurs when \\(\\underline{E}^{dc}\\parallel\\underline{u}_{x}\\), and an upward shift for \\(\\underline{E}^{dc}\\parallel\\underline{u}_{y}\\). There is also a shift for \\(\\underline{E}^{dc}\\parallel\\underline{u}_{z}\\), but it is much smaller and is almost unnoticeable at the lower values of \\(\\chi\\); however, the shift is significant for \\(\\chi=75^{\\circ}\\), on the same order as when the dc electric field is applied along the other two axes.
Figure 8 shows \\(\\Delta\\psi\\)-\\(n_{s}\\) curves for \\(\\chi\\in\\{0^{\\circ},30^{\\circ},60^{\\circ},75^{\\circ}\\}\\), for the dc electric field configured as
for Figure 7. When \\(\\chi=0^{\\circ}\\), the curves for all three orientations of the dc field and the zero dc field are similar, having nearly the same peak values of \\(\\Delta\\psi\\) and peaking at close to the same values of \\(n_{s}\\). As \\(\\chi\\) increases, the curves become differentiated both in the \\(\\Delta\\psi\\) peak height and its position on the \\(n_{s}\\) axis. Among the four values of \\(\\chi\\) explored, the largest peak values of \\(\\Delta\\psi\\) occur for \\(\\chi=60^{\\circ}\\). This is particularly true when \\(\\underline{E}^{dc}\\parallel\\underline{u}_{z}\\); then some values of \\(\\Delta\\psi\\) are more than an order of magnitude larger than found at other values of \\(\\chi\\).
## 4 Concluding remarks
The influence of the Pockels effect on SWP at the interface between potassium niobate and an isotropic dielectric material has been demonstrated by our numerical studies. With the application of a dc electric field, we have noted the shift in the \\(n_{s}\\)-range and the \\(\\psi\\)-bands that permit SWP. The greatest influence of the Pockels effect is on the median propagation angle \\(\\psi_{m}\\) of the very narrow \\(\\psi\\)-bands in which SWP is possible. Shifts of over \\(30^{\\circ}\\) at a fixed value of \\(n_{s}\\) have been deduced. Thus, the Pockels effect can be pressed into service for an electrically controlled on-off switch for surface waves.
As of now, the existence of surface waves at the planar interface between a biaxial dielectric material and an isotropic dielectric material has not been demonstrated experimentally. The narrow \\(\\psi\\)- and \\(n_{s}\\)-regimes for SWP may discourage experimentalists from searching for surface waves in these scenarios. Electrical control of the SWP direction may provide a convenient way to search for surface waves. The experiment could be carried out with a fixed geometry defining the propagation direction and the orientation of the linear EO material, and the signed magnitude of \\(\\underline{E}^{dc}\\) could then easily be swept electronically until a surface wave is detected. Various experimental configurations for exciting surface waves are available (Agranovich and Mills, 1982), as also are optically transparent electrodes to apply dc electric fields (Minami 2005; Medvedeva 2007). Finally, we hope that our work shall spur the development of artificial linear EO materials with much higher EO coefficients than now available.
Figure 7: Median propagation angle \\(\\psi_{m}\\) versus \\(n_{s}\\) for: a) \\(\\chi=0^{\\circ}\\), b) \\(\\chi=30^{\\circ}\\), c) \\(\\chi=60^{\\circ}\\), d) \\(\\chi=75^{\\circ}\\). Only Band 1 is shown for \\(\\underline{E}^{dc}\\parallel\\underline{u}_{x}\\); only Band II is shown for \\(\\underline{E}^{dc}\\parallel\\underline{u}_{y}\\). See the text for other parameters.
Figure 8: Same as Figure 7, except that \\(\\Delta\\psi\\) is plotted against \\(n_{s}\\).
## References
Agranovich, V. M., & D. L. Mills. (Eds.) 1982. _Surface Polaritons: Electromagnetic Waves at Surfaces and Interfaces_. Amsterdam: North-Holland.
Averkiev, N. S., & M. I. Dyakonov. 1990. Electromagnetic waves localized at the interface of transparent anisotropic media. _Opt. Spectrosc. (USSR)_ 68:653-655.
Boardman, A. D. (Ed.) 1982. _Electromagnetic Surface Modes_. Chichester: Wiley.
Boyd, R. W. 1992. _Nonlinear Optics_. San Diego: Academic.
Chen, H. C. 1983. _Theory of Electromagnetic Waves: A Coordinate-Free Approach_, 219-226. New York: McGraw-Hill.
Cook, Jr., W. C. 1996. Electrooptic coefficients. In: Nelson, D.F. (Ed.) 1996. _Landolt-Börnstein, Vol. 3/30A_, 164. Berlin: Springer.
Darinskii, A. N. 2001. Dispersionless polaritons on a twist boundary in optically uniaxial crystals. _Crystallogr. Repts._ 46:842-844.
D'yakonov, M. I. 1988. New type of electromagnetic wave propagating at an interface. _Sov. Phys. JETP_ 67:714-716.
Farias, G. A., E. F. Nobre, & R. Moretzsohn. 2002. Polaritons in hollow cylinders in the presence of a dc magnetic field. _J. Opt. Soc. Am. A_ 19:2449-2455.
Homola, J., S. S. Yee, & G. Gauglitz. 1999. Surface plasmon resonance sensors: review. _Sens. Actuat. B: Chem._ 54:3-15.
Lakhtakia, A. 2006a. Electrically switchable exhibition of circular Bragg phenomenon by an isotropic slab. _Microw. Opt. Technol. Lett._ 48:2148-2153; corrections: 2007. 49:250-251.
Lakhtakia, A. 2006b. Narrowband and ultranarrowband filters with electro-optic structurally chiral materials. _Asian J. Phys._ 15:275-282.
Lakhtakia, A., & T. G. Mackay. 2007. Electrical control of the linear optical properties of particulate composite materials. _Proc. R. Soc. Lond. A_ 463:583-592.
Lakhtakia, A., & J. A. Reyes. 2006a. Theory of electrically controlled exhibition of circular Bragg phenomenon by an obliquely excited structurally chiral material - Part 1: Axial dc electric field. _Optik_ doi:10.1016/j.ijleo.2006.12.001.
Lakhtakia, A., & J. A. Reyes. 2006b. Theory of electrically controlled exhibition of circular Bragg phenomenon by an obliquely excited structurally chiral material - Part 2: Arbitrary dc electric field. _Optik_ doi:10.1016/j.ijleo.2006.12.002.
Li, J., M.-H. Lu, L. Feng, X.-P. Liu, & Y.-F. Chen. 2007. Tunable negative refraction based on the Pockels effect in two-dimensional photonic crystals composed of electro-optic crystals. _J. Appl. Phys._ 101:013516.
Mackay, T. G., & A. Lakhtakia. 2007. Scattering loss in electro-optic particulate composite materials. _J. Appl. Phys._ 101:083523.
Matveeva, E., Z. Gryczynski, J. Malicka, J. Lukomska, S. Makowiec, K. Berndt, J. Lakowicz, & I. Gryczynski. 2005. Directional surface plasmon-coupled emission: Application for an immunoassay in whole blood. _Anal. Biochem._ 344:161-167.
Medvedeva, J. E. 2007. Unconventional approaches to combine optical transparency with electrical conductivity. _Appl. Phys. A_ doi:10.1007/s00339-007-4035-4.
Minami, T. 2005. Transparent conducting oxide semiconductors for transparent electrodes. _Semicond. Sci. Technol._ 20:S35-S44.
Mineralogy Database, [http://www.webmineral.com/](http://www.webmineral.com/) (20 April 2006).
Nelatury, S. R., J. A. Polo Jr., & A. Lakhtakia. 2007. Surface waves with simple exponential transverse decay at a biaxial bicrystalline interface. _J. Opt. Soc. Am. A_ 24:856-865; corrections: 24:2102.
Polo Jr., J. A., S. Nelatury, & A. Lakhtakia. 2006. Surface electromagnetic wave at a tilted uniaxial bicrystalline interface. _Electromagnetics_ 26:629-642.
Polo Jr., J. A., S. R. Nelatury, & A. Lakhtakia. 2007a. Propagation of surface waves at the planar interface of a columnar thin film and an isotropic substrate. _J. Nanophoton._ 1, 013501.
Polo Jr., J. A., S. R. Nelatury, & A. Lakhtakia. 2007b. Surface waves at a biaxial bicrystalline interface. _J. Opt. Soc. Am. A_ 24:xxxx-xxxx (at press).
Torner, L., J. P. Torres, F. Lederer, D. Mihalache, D. M. Baboiu, & M. Ciumac. 1993. Nonlinear hybrid waves guided by birefringent interfaces. _Electron. Lett._ 29:1186-1188.
Torreri, P., M. Ceccarini, P. Maciece, & T. Petrucci. 2005. Biomolecular interactions by surface plasmon resonance technology. _Ann. Ist. Super. Sanita_ 41:437-441.
Walker, D. B., E. N. Glytsis, & T. K. Gaylord. 1998. Surface mode at isotropic-uniaxial and isotropic-biaxial interfaces. _J. Opt. Soc. Am. A_ 15:248-260.
Wong, C., H. Ho, K. Chan, S. Wu, & C. Lin. 2005. Application of spectral surface plasmon resonance to gas pressure sensing. _Opt. Eng._ 44:124403.
Zenneck, J. 1907. Über die Fortpflanzung ebener elektromagnetischer Wellen längs einer ebenen Leiterfläche und ihre Beziehung zur drahtlosen Telegraphie. _Ann. Phys. Lpz._ 23:846-866.
Zgonik, M., R. Schlesser, I. Biaggio, E. Voit, J. Tscherry, & P. Günter. 1993. Material constants of KNbO\\({}_{3}\\) relevant for electro- and acousto-optics. _J. Appl. Phys._ 74:1287-1297.
_Key words:_ Electro-optics, Pockels effect, surface wave
**ELECTRICAL CONTROL OF SURFACE-WAVE PROPAGATION AT THE PLANAR INTERFACE OF A LINEAR ELECTRO-OPTIC MATERIAL AND AN ISOTROPIC DIELECTRIC MATERIAL**
SUDARSHAN R. NELATURY
Department of Electrical, Computer and Software Engineering,
Pennsylvania State University, The Behrend College,
5101 Jordan Road, Erie, PA 16563-1701, USA.
JOHN A. POLO JR.
Department of Physics and Technology,
Edinboro University of Pennsylvania,
235 Scotland Rd., Edinboro, PA 16444, USA.
AKHLESH LAKHTAKIA
CATMAS -- Computational & Theoretical Materials Sciences Group,
Department of Engineering Science and Mechanics,
Pennsylvania State University, University Park, PA 16802, USA. | Summarize the following text. |
arxiv-format/0711_1781v1.md | # Detection of Endolithes Using Infrared Spectroscopy
S. Dumas, Y. Dutil and G. Joncas
## 1 Introduction
Space exploration is a difficult task and the search for life is no different. The size of the equipment a probe can carry severely limits the scope of the search. Not every sample can be analyzed, and selecting the few that can be is not trivial. This project investigates the possibilities of remote detection using infrared spectroscopy in order to select those few samples to pick and analyse further.
## 2 Infrared Spectroscopy
The reason for using IR spectroscopy is mainly that almost all biological marker molecules (i.e. biomarkers) show some spectral features in the near to far IR (Hand et al (2005)). Furthermore, the spectroscopy does not destroy the sample (i.e. there is no contact with the sample and no contamination). Previous detection techniques tended to destroy, or damage, the endolithes or the environment in which they lived (e.g. by using electron microscopy, or chemical and biological analysis).
Biomarkers are an important source of information in the search for evidence of life in geological samples. Even if the organisms are dead or dormant, it is still possible to detect their presence, their byproducts, or even their chemical alteration of the environment.
In searching for extraterrestrial life, it is important to hold a minimum of preconceptions about it in order to find it.
## 3 Endolithes
Endolithes are organisms that live inside rocks or in the pores between mineral grains. There are thousands of known species of endolithes, including members from Bacteria, Archaea, and Fungi. They represent nearly half the Earth's biomass and also present an ideal model of life for Mars (Ascaso (2002), Hand et al (2005)).
This study used two groups of samples: one from the Guelph region (west of Toronto, Ontario) and the other from near Eureka on Ellesmere Island in Nunavut (Omelon (2006)). The two regions have different geology and climate.
## 4 Methodology
Samples were scanned using a technique called diffuse scattering. The IR beam was directed toward the sample and a series of mirrors redirected the scattered beam to the IR sensor. The spectra were obtained using an IR Nicolet spectrometer.
The apparatus used to collect the scattered IR beam could not accommodate large rock samples, so it was necessary to break them into smaller pieces. The same device was very sensitive to the angles of incidence and reflectance, and it was important to position both mirrors near the vertical above the sample. The adjustment of each mirror, in order to optimize the reception of light, was very time consuming. Those adjustments were performed using an aluminum plate as the sample to maximize the flux received at the sensor. When the rock sample was placed in the light path, the total flux dropped considerably, but the signal-to-noise ratio remained high enough.
We took 132 middle-IR and 49 near-IR spectra of 15 samples from the two groups; the best results were obtained in the middle IR. Most of the spectra were taken at a resolution of 4 or 8 cm\\({}^{-1}\\). After analysis, it appears that a spectral resolution of 8 cm\\({}^{-1}\\) is enough for our purpose. The middle-IR spectra (from 4000 to 650 cm\\({}^{-1}\\)) were then processed using Principal Component Analysis in order to classify them.
## 5 Principal Component Analysis
Principal Component Analysis (Marchi (2007)) is a factorial-analysis technique from multivariate statistics. It is often used to find a new coordinate system in which the original data are better aligned along some axes (the principal components).
The technique can be summarized by the equation \\(A=UWV^{T}\\), where \\(A\\), \\(U\\), \\(W\\) and \\(V\\) are matrices.
\\(A\\) is an \\(m\\times n\\) matrix containing one spectrum per row; the values of the absorption bands are grouped in the columns, which are the variables on which the PCA works. \\(U\\), also an \\(m\\times n\\) matrix, contains the spectra in the new coordinate system. The matrix \\(V\\), where \\(VV^{T}\\) is an identity matrix, contains the eigenvectors. \\(W\\) is a diagonal matrix containing the square roots of the eigenvalues and provides a clue about how many principal components (PC) can be used to describe the data. It is important to have a matrix \\(A\\) with more columns than rows (i.e. \\(n>m\\)), else \\(W\\) becomes a singular matrix and the whole PCA fails.
To facilitate data manipulation and visualization for this paper, we have taken only the first three principal components. The result of the PCA process is illustrated in figure 1. The process can be extended to as many components as needed. Based on the eigenvalues from our data, the first five components are relevant, the others being buried in the noise.
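A compact numerical realization of this procedure, via the singular value decomposition, is sketched below. Mean-centering of the spectra is our own assumption (the text does not state whether the data were centered), and the \\(n>m\\) requirement noted above is flagged in the docstring.

```python
import numpy as np

def pca_spectra(A, n_components=3):
    """PCA of the spectra matrix A (one spectrum per row) via A = U W V^T.

    Returns the spectra projected on the first principal components,
    the eigenvector matrix V, and the singular values w (square roots
    of the eigenvalues).  The text requires n > m for the m x n matrix A.
    """
    A0 = A - A.mean(axis=0)                 # centering: our assumption
    U, w, Vt = np.linalg.svd(A0, full_matrices=False)
    V = Vt.T
    scores = A0 @ V[:, :n_components]       # equals U W on those components
    return scores, V, w
```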
We have identified several clusters. Cluster \\(E\\) groups the spectra containing features showing the presence of organic compounds. Cluster \\(E\\) is very close to another cluster of points calculated from organic spectra used as a reference (i.e. cluster O). Further, most of the spectra of cluster \\(E\\) have been taken directly from green regions visible on the samples.
## 6 Results and Conclusions
Our results show that it is possible to detect biological signatures using a spectrometer operating in the middle infrared range. However, it is not possible to single out a particular region, or regions, of a spectrum to be used to identify biomarkers. The interdependence of the absorption bands related to living organisms is too complex to simply isolate a few.
Figure 1: Spectra plotted in the PCA space

The proposed technique calls for a more subtle approach, comparing witness spectra with an unknown spectrum by plotting them in the PCA space. If the test spectrum, once projected into the PCA space, is close to the reference group, then the probability that it contains biomarkers is high. Adding non-organic spectra to the PCA space as references may improve the identification scheme.
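One simple realization of this comparison is sketched below; the Euclidean distance to the centroid of the reference cluster and the user-chosen threshold are our own assumptions, as the text does not specify the proximity criterion. The mean argument is the training mean subtracted in the hypothetical pca_spectra sketch above.

```python
import numpy as np

def near_reference_cluster(spectrum, mean, V, ref_scores, threshold):
    """Project an unknown spectrum into the PCA space and test its
    proximity to a reference cluster (e.g. the organic cluster O)."""
    coords = (spectrum - mean) @ V[:, :ref_scores.shape[1]]
    distance = np.linalg.norm(coords - ref_scores.mean(axis=0))
    return distance < threshold, distance
```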
This technique could be used to pinpoint potentially life-harboring rocks for more detailed analysis. It could be possible to extend the method to better identify the unknown spectrum using a more precise reference database.
We wish to thank the following people for their help in providing the samples used for this study :
* Christopher Omelon, Microbial Geochemistry Laboratory, University of Toronto Geology
* Uta Matthes, Department of Integrative Biology, University of Guelph
## References
* Ascaso (2002) Ascaso, C., 2002, New approach to the study of Antarctic lithobiontic microorganisms and their inorganic traces, and their application in the detection of life in Martian rocks, Int. Microbiol
* Blackhurst (2005) Blackhurst, R.L., 2005, Cryptoendolith colonization of diverse substrates, Lunar and Planetary Science XXXVI
* Burford et al. (2003) Burford, E.P., Kierans, M., Gadd, G.M., 2003 Geomycology: fungi in mineral substrata, Mycologist, Vol. 17, 98-107
* Einbeck et al. (2007) Einbeck, J., Evers, L., Bailer-Jones, C., 2007, Representing complex data using localized principal component with application to astronomical data, astro-ph/0709.1538
* Hand et al. (2005) Hand, K.P., Carlson, R.W., Sun,H., Anderson, M, Wadsworth, W., Levy, R., 2005, Utilizing active mid-infrared microspectrometry for in-situ analysis of cryptoendolithic microbial communities of Battleship Promontory, Dry Valleys, Antartica, Astrobiology and Planetary Missions, vol 5906, p.302
* Johnston and Vestal (1992) Johnston, C.G., Vestal, J.R., 1992, Biogeochemistry of Oxalate in the Antarctic Cryptoendolithic Lichen-Dominated Community, Microbial Ecology, Vol 25., 305-319
* Marchi (2007) Marchi, S., 2007, Extrasolar planet taxonomy : a new statistical approach, astro-ph/0705.0910v1
* Omelon (2006) Omelon, C., 2006, Environmental controls on microbial colonization of high Arctic cryptoendolithic habitats, Polar Biology, V.30
* Sigler et al. (2003) Sigler, W.V., Bachofen, R., Zeyer, J., 2003 Molecular characterization of endolithic cyanobacteria inhabiting exposed dolomite in central Switzerland Environmental Microbiology 5 (7), p.618
* Torporski and Steele (2002) Torporski, J., Steele, A., 2002, The relevance of bacterial biomarkers in astrobiological research, Proceeding of the Second European Workshop on Exo/Astrobiology, Graz, Austria, p.239. | On Earth, the Dry Valleys of Antarctica provide the closest martian-like environment for the study of extremophiles. Colonies of bacteries are protected from the freezing temperatures, the drought and UV light. They represent almost half of the biomass of those regions. Due to there resilience, endolithes are one possible model of martian biota.
We propose to use infrared spectroscopy to remotely detect those colonies even if there is no obvious sign of their presence. This remote sensing approach reduces the risk of contamination or damage to the samples.
Dept. de physique, de génie physique et d'optique et Observatoire du mont Mégantic, Université Laval, Québec, Canada, G1K 7P4
arxiv-format/0711_2584v1.md | # Remote sensing of chromospheric magnetic fields
via the Hanle and Zeeman effects
J. Trujillo Bueno\\({}^{(1)}\\)\\({}^{(2)}\\) and R. Manso Sainz\\({}^{(1)}\\)\\({}^{(3)}\\)
\\({}^{(1)}\\) Instituto de Astrofísica de Canarias, E-38205, La Laguna (Tenerife), Spain ([email protected]) \\({}^{(2)}\\) Consejo Superior de Investigaciones Científicas, Spain \\({}^{(3)}\\) Dipartimento di Astronomia e Scienza dello Spazio, Università degli Studi di Firenze, I-50125, Florence, Italy ([email protected])
## 1 Introduction
The physical processes that underlie solar magnetic activity are of fundamental importance to astrophysics, as well as for controlling the heliosphere, including near-Earth space weather. However, with the possible exception of the solar photosphere --the thin surface layer where almost all of the radiative energy flux is emitted--, our empirical knowledge concerning the magnetism of the outer solar atmosphere (chromosphere, transition region, corona) is still very primitive. This is very regrettable because many of the physical challenges of solar and stellar physics arise precisely from magnetic processes taking place in such outer layers.
In particular, the "quiet" solar chromosphere is a crucial region whose magnetism we need to understand for unlocking new discoveries. It is in this highly inhomogeneous and dynamic region of low density plasma overlying the thin solar photosphere where the magnetic field becomes the globally dominating factor. If we aim at understanding the complex and time-dependent structure of the outer solar atmosphere we must first decipher the intensity and topology of the magnetic fields of the solar chromosphere.
According to the \"standard picture\" of chromospheric magnetism described in the recently-published Encyclopedia of Astronomy and Astrophysics there is \"a layer of magnetic field which is directed parallel to the solar surface and located in the low chromosphere, overlying a field-free region of the solar photosphere\". This so-called _magnetic canopy_ \"has a field strength of the order of 100 gauss and covers a large fraction of the solar surface\" [1].
This picture of chromospheric magnetism has been in the minds of most solar physicists since the beginning of the 1980s, when R.G. Giovanelli and H.P. Jones interpreted solar magnetograms in chromospheric lines (like the IR triplet of ionized calcium or the Mg I \\(b_{2}\\) line) taken in network unipolar regions near the solar limb, as well as in sunspots and related active regions. Such chromospheric magnetograms seem to show a polarity inversion and are considerably more diffuse in appearance than photospheric magnetograms, which is interpreted as the result of the expansion of the magnetic field lines with height in the solar atmosphere [2, 3, 4, 5, 6].
The magnetic canopy model was later reinforced in the 1990s via magnetohydrostatic extrapolations of photospheric magnetic flux tube models [7]. However, it was found that only magnetic field extrapolations that allow for substantial differences between the temperatures of the atmospheres within and outside the assumed magnetic flux tubes are capable of producing a low-lying canopy field. If the internal and external atmospheres are assumed to be similar the canopy extrapolated field forms in the upper chromosphere and the corona. It was argued that the assumption of much lower temperatures in the external atmosphere fits nicely with the observational finding of strong CO absorption lines near the extreme solar limb [8].
Magnetograms and extrapolations thus led to the idea that the "quiet" solar chromosphere is pervaded by magnetic canopies with predominantly horizontal fields overlying "field-free" regions whose temperatures remain relatively cool up to the canopy bases. As a matter of fact, some researchers investigated the impact of "the magnetic canopy" on the frequencies of solar \\(p\\)- and \\(f\\)-modes (see _e.g._, [9]), while others found it of interest to consider its influence on the linear polarization of some resonance lines [10]. It is however very important to emphasize that, as stressed with great force by a working group on chromospheric fields (see [11]), chromospheric magnetograms have never "detected" magnetic canopies in the truly quiet Sun where the network is fragmentary and photospheric magnetograms show the well-known "salt and pepper" patterns of mixed polarity. In fact, the Ca ii IR triplet and other chromospheric lines are relatively broad, which implies that the magnetic fields of the "quiet" chromospheric regions are difficult to diagnose via consideration of the longitudinal Zeeman effect alone, on which magnetograms are based. Obviously, the above-mentioned chromospheric magnetograms (of network and active regions) and magnetohydrostatic extrapolations (of photospheric magnetic flux tube models) are not suitable for drawing conclusions on the magnetism of the most quiet regions of the solar chromosphere.
Over the last few years, observational investigations of scattering polarization on the Sun have pointed out the existence of "enigmatic" linear polarization signals in several spectral lines (observed in the "quiet" solar chromosphere close to the limb as well as in solar filaments), which cannot be understood in terms of the classical theory of scattering polarization [12, 13, 14, 15, 16]. In particular, the "enigmatic" features of the linearly-polarized solar-limb spectrum have motivated some novel theoretical investigations of scattering polarization in spectral lines [17, 18, 19, 20, 21, 22, 23, 16]. Such investigations have been carried out within the framework of polarization transfer theories that allowed us to formulate scattering polarization problems taking into account a physical ingredient that had been previously neglected: ground-level atomic polarization (_i.e._, the existence of population imbalances and/or coherences among the Zeeman sublevels of the lower level of the spectral line under consideration).
Of particular interest in this respect is the letter published in Nature by Landi Degl'Innocenti with the title "Evidence against turbulent and canopy-like magnetic fields in the solar chromosphere" [18]. He concludes that the explanation in terms of ground-level atomic polarization of the "enigmatic" linear polarization peaks of the sodium D-lines observed by Stenflo and Keller in quiet regions close to the solar limb [12], implies that the magnetic field of the "quiet" solar chromosphere has to be either isotropically distributed but extremely low (with \\(B\\lesssim 10\\) milligauss) or, alternatively, practically vertically orientated.
The only way to obtain reliable empirical information on the intensity and topology of the weak magnetic fields of the \"quiet\" solar chromosphere is via the measurement and rigorous physical interpretation of weak polarization signals in chromospheric spectral lines. The aim of this keynote article is to show in some detail how the most recent advances in the observation and physical interpretation of weak polarization signals in terms of the Hanle and Zeeman effects is giving us decisive new clues about the topology and intensity of the magnetic fields of the \"quiet\" solar chromosphere.
## 2 The Zeeman and Hanle effects
In order to understand why the observed polarization signals reported in Section 3 are weak, first we need to advance something concerning their physical origin. The _circular_ polarization signals are mainly due to the longitudinal Zeeman effect. As is well known, Zeeman-induced circular polarization signals are sensitive to the net magnetic flux density over the spatio-temporal resolution element of the observations. Although it is true that a complex magnetic field topology within the line formation region may conspire to make the observed circular polarization signals weak, we have some good reasons to believe that the Stokes \\(V\\) signals are weak mainly because the magnetic fields of the \"non-magnetic\" solar chromosphere are intrinsically weak (_i.e._, below 100 gauss).
The physical origin of the observed _linear_ polarization signals is completely different and has nothing to do with the transverse Zeeman effect. The observed Stokes \\(Q\\) and \\(U\\) signals are due to _atomic polarization_, _i.e._, to the existence of population imbalances and quantum interferences (or coherences) among the sublevels pertaining to the upper and/or lower atomic levels involved in the line transition under consideration. Thisatomic polarization is the result of a transfer process of \"order\" from the radiation field to the atomic system (see [21]). The most obvious manifestation of \"order\" in the solar radiation field is its degree of anisotropy arising from its centre-to-limb variation. In fact, the main source of atomic polarization is the anisotropic illumination of the atoms of the solar atmospheric plasma, which produces a _selective_ radiative pumping. This pumping is \"selective\", in the sense that it produces population imbalances among the Zeeman sublevels of each atomic level. This implies sources and sinks of linear (and even circular) polarization at each point within the medium. These locally generated polarization signals are then modified via transfer processes in the stellar plasma. The emergent polarization signals are weak because the degree of anisotropy of the solar radiation field is weak (which leads to population imbalances and coherences that are small compared with the overall population of the atomic level under consideration), but also because we have collisions and magnetic fields which tend to modify the atomic polarization.
The Hanle effect is the modification of the atomic polarization (and of the ensuing _linear_ polarization profiles \\(Q(\\lambda)\\) and \\(U(\\lambda)\\)) due to the action of a weak magnetic field (see the review [21]). As the Zeeman sublevels of degenerate atomic levels are split by the magnetic field, the degeneracy is lifted and, as long as the sublevels still overlap, the coherences (and, in general, also the population imbalances among the sublevels) are modified. Therefore, the Hanle effect is sensitive to magnetic fields such that the corresponding Zeeman splitting is comparable to the inverse lifetime (or natural width) of the lower or the upper atomic levels of the line transition under consideration. On the contrary, the Zeeman effect is most sensitive in _circular_ polarization (quantified by the Stokes \\(V\\) parameter), with a magnitude that scales with the ratio between the Zeeman splitting and the width of the spectral line (which is very much larger than the natural width of the atomic levels).
The basic approximate formula to estimate the _maximum_ magnetic field intensity \\(B\\) (measured in gauss) to which the Hanle effect can be sensitive is
\\[10^{6}\\ B\\ g_{J}\\ \\approx\\ 1/t_{\\rm life}\\, \\tag{1}\\]
where \\(g_{J}\\) and \\(t_{\\rm life}\\) are, respectively, the Landé factor and the lifetime in seconds of the atomic level under consideration (which can be either the upper or the lower level of the chosen spectral line transition). This formula shows that the measurement and physical interpretation of weak polarization signals in suitably chosen spectral lines may allow us to diagnose magnetic fields having intensities between \\(10^{-3}\\) and 100 gauss approximately, _i.e._, in a parameter domain that is very hard to study via the Zeeman effect alone.
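As a quick numerical aid, Equation (1) can be evaluated with a few lines of code; the helper below is our own, and the level parameters in the examples are merely illustrative order-of-magnitude values.

```python
def hanle_critical_field(g_J, t_life):
    """Approximate upper critical field (gauss) from Equation (1):
    1e6 * B * g_J ~ 1 / t_life, with t_life in seconds."""
    return 1.0 / (1.0e6 * g_J * t_life)

# A typical excited level (g_J = 1, t_life = 1e-8 s) gives ~100 gauss,
# while a metastable level with t_life ~ 1e-3 s gives ~1 milligauss.
print(hanle_critical_field(1.0, 1.0e-8))
print(hanle_critical_field(1.0, 1.0e-3))
```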
While the Hanle effect modifies the atomic polarization, elastic collisions always produce atomic-level _depolarization_. The depolarization is complete only if \\(D\\,t_{\\rm life}\\,{\\rightarrow}\\,\\infty\\), where \\(D\\) (given in s\\({}^{-1}\\)) is the depolarizing rate of the given atomic level and \\(t_{\\rm life}\\) its lifetime. Therefore, at first sight, one would be tempted to conclude that _ground_ and _metastable_ levels are more vulnerable to elastic collisions than atomic levels of shorter lifetimes. This is however only true if \\(D\\) is assumed to be of the same order-of-magnitude for both, the long-lived and short-lived atomic levels under consideration. Unfortunately, our current knowledge on depolarizing rates due to elastic collisions is very poor, but we may hope to use the Sun itself as an atomic physics laboratory for improving the situation.
## 3 Observations of weak polarization signals in chromospheric lines
Stenflo and Keller [12] have adopted the term "the second solar spectrum" to refer to the linearly polarized solar limb spectrum which can be observed with spectropolarimeters that allow the detection of very low amplitude polarization signals (with \\(Q/I\\) of the order of \\(10^{-3}\\) or smaller). Such observations with the polarimeter ZIMPOL (see also the atlas [25]) have been confirmed (and extended to the full Stokes vector) by Dittmann _et al._[26], Martinez Pillet _et al._[27] and Trujillo Bueno _et al._[15] using the Canary Islands telescopes. One of these telescopes is THEMIS, which has allowed us to carry out observations of the full Stokes vector in several spectral lines simultaneously [15]. Given the increasing interest of this research field, THEMIS is being used also by many other colleagues (see, _e.g._, Bommier's report in this volume). Thanks to a modified version of their polarimeter, Stenflo _et al._ have also started to investigate the four Stokes parameters of optical spectral lines in solar regions near the limb with varying degrees of magnetic activity [24]. It is also of interest to point out the enormous diagnostic potential offered by the near-UV spectral region where the degree of anisotropy of the solar radiation field is relatively high. Fortunately, at least one solar polarimeter has been developed recently with the scientific interest of this near-UV region seriously in mind: ZIMPOL-UV.
In the remaining part of this section we show some particularly interesting examples of our own spectropolarimetric observations in optical and near-IR chromospheric lines, which have been obtained using different polarimeters attached to the Tenerife solar telescopes (VTT, GCT and THEMIS). As we shall see below, the physical interpretation of these observations in terms of the quantum theory of polarization highlights the key role played by some subtle physical mechanisms in producing the emergent polarization.
Figure 1 shows an example of our VTT+TIP observations using the He i 10830 A multiplet [16]. TIP is the Tenerife Infrared Polarimeter, which is based on ferroelectric-liquid-crystals [28]. The figure shows the case of a solar filament that was located _exactly_ at the very center of the solar disk during the observing day. The open circles indicate the spectropolarimetric observation, while the solid line shows the theoretical modelling based on the density matrix polarization transfer theory (see Section 5). Like prominences, solar filaments are magnetized plasma ribbons embedded in the \\(10^{6}\\) K solar corona, and confined by the action of highly inclined magnetic fields (with respect to the stellar radius) and having intensities in the gauss range. (The only difference is that prominences are observed off-the-limb, _i.e._, against the dark background of the sky, while filaments are observed against the bright background of the solar disk. Therefore, we see emission lines in prominences, but absorption lines in filaments.)
The observational results of Fig. 1 are very interesting. First of all, we have sizable Stokes \(Q\) signals in both the "blue" and "red" components of the He i 10830 A multiplet. This demonstrates that the Hanle effect can give rise to significant linear polarization even at the very center of the solar disk, where we meet the case of forward-scattering (see [21]). Moreover, the very fact that the "blue" component is linearly polarized is particularly interesting because it is the result of \(J_{l}=1\!\rightarrow\!J_{u}=0\!\rightarrow\!J_{l}=1\) scattering processes (with \(J_{l}\) and \(J_{u}\) the total angular momentum of the lower and upper levels, respectively). According to scattering polarization transfer theories neglecting the role of lower-level atomic polarization, such a line transition should be intrinsically unpolarizable because the upper level, having \(J_{u}=0\), cannot carry any atomic polarization. We will see below that the physical origin of this "enigmatic" linear polarization signal is the existence of a sizable amount of atomic polarization in the lower level, whose \(J_{l}=1\).
Another interesting feature of our solar filament observation is that the \"blue\" and \"red\" lines of the He i 10830 A multiplet show up with amplitudes of opposite sign, which cannot be modeled via the assumption of independent two-level atomic models for the three line components of the helium multiplet. At least a two-term atom taking into account the fine structure of the upper \\(2^{3}P_{2,1,0}\\) term is needed in order to be able to obtain qualitative agreement with the observed linear polarization amplitudes.
Figure 2 shows the full Stokes vector of the Ca ii 8662 A line observed on the disk at 5" from the solar limb. This observation is the result of a collaboration between Dittmann, Semel and Trujillo Bueno. They have used Semel's stellar polarimeter attached to the Tenerife Gregory Coude Telescope and carried out, during September 2000, spectropolarimetric observations of the Ca ii IR triplet in regions near the limb with varying degrees of magnetic activity. The Ca ii 8662 A line is of particular interest because its upper level, having \(J_{u}=1/2\), cannot harbour any atomic alignment. This led to the reported detection of a significant Stokes \(Q/I\) amplitude in this
Figure 1: He i 10830 Å spectropolarimetric observation of a solar filament located at the solar disk center (open circles) versus theoretical modelling (solid line). The fit to the observations has been achieved assuming a magnetic field vector of 20 gauss inclined by \\(105^{\\circ}\\) with respect to the radial direction and that the observed filament region was located at a height of \\(40^{\\prime\\prime}\\) above the solar photosphere. The positive reference direction for Stokes \\(Q\\) is parallel to the projection of the magnetic field vector on the solar disk. It turns out that this direction made an angle of about \\(10^{\\circ}\\) in the clockwise direction with respect to the axis of the solar filament. The Stokes parameters are normalized to the maximum line-core depression (from the continuum level) of the Stokes \\(I\\) profile of the “red” absorption line. This observation has been obtained with the TIP polarimeter attached to the Vacuum Tower Telescope (VTT). From [16].
spectral line being considered "enigmatic", because of the belief that the polarization effects come only from the population imbalances and coherences in the _excited_ states of the scattering process [14]. The full Stokes vector observation of Fig. 2 shows the existence of sizable linear polarization signals in the Ca ii 8662 A line, both in \(Q/I\) and \(U/I\).
Finally, Fig. 3 shows an example of THEMIS observations of the second solar spectrum (see [15]). It shows the full Stokes vector of the oxygen IR triplet at 777 nm, as observed on-the-disk at 4\" from the North solar limb. It is of great scientific interest to point out that the two lines at 7772 A and 7774 A have _positive_\\(Q/I\\) fractional linear polarization amplitudes, while the 7776 A line shows _negative_ polarization (_i.e._, along the solar radius through the observed point!) all over its full spectral range. A forthcoming publication will show in detail that this is due to the existence of atomic polarization in the _metastable_ lower-level of the oxygen triplet(1). Finally, note that in these oxygen lines we find significant circular polarization signals, which can only be produced by magnetic fields substantially larger than 0.01 gauss, as is the case also with the Stokes \\(V\\) profiles of the Ca ii 8662 A line shown in Fig. 2.
Figure 2: The full Stokes vector of the Ca ii 8662 Å line observed on the solar disk at about 5” from the limb during the equinox period of September 2000. The positive reference direction for Stokes \\(Q\\) is along the line perpendicular to the radial direction through the observed point. The vertical dashed-line to the _rhs_ of each panel indicates the central wavelength of the 8662 Å line, while the _lhs_ dashed-line gives the position of a nearby photospheric iron line. This spectropolarimetric observation with the Tenerife Gregory Coudé Telescope (GCT) is the result of a collaboration between Dittmann, Semel and Trujillo Bueno.
## 4 The physical origin of the enigmatic polarization signals: atomic polarization of metastable levels and dichroism
The "enigmatic" signals of the "second solar spectrum" are detected in spectral lines whose lower level is the ground state or a metastable level. These levels have a lifetime \(t^{l}_{life}{\approx}1/B_{lu}\bar{J}^{0}_{0}\), which is much larger than the upper level lifetime \(t^{u}_{life}{\approx}1/A_{ul}\) (with \(B_{lu}\) and \(A_{ul}\) being the Einstein coefficients for absorption and spontaneous emission, respectively, and \(\bar{J}^{0}_{0}\) the line integrated mean intensity of the radiation field). Therefore, the atomic polarization of such _long-lived_ lower-levels is very sensitive to depolarizing mechanisms. However, it is very important to emphasize that only collisions can depolarize completely
Figure 3: The fractional polarization of the oxygen IR triplet at 777 nm observed with THÉMIS on the solar disk at about 4” from the North solar limb. From [15].
a given atomic level. Except for a few very particular cases (see [39]), the depolarization of the atomic levels due to magnetic fields (the Hanle effect) is never complete. For instance, elastic collisions and a microturbulent and isotropic magnetic field modify the degree of population imbalance of the upper level of a two-level atom (with \\(J_{l}=0\\) and \\(J_{u}=1\\)) as dictated by the following approximate expression (cf. [30]):
\\[\\sigma_{0}^{2}\\,=\\,\\frac{\\rho_{0}^{2}}{\\rho_{0}^{0}}\\approx\\frac{{\\cal H}}{1+ \\delta}\\;{\\cal A}, \\tag{2}\\]
where \\({\\cal A}=\\bar{J_{0}^{2}}/\\bar{J_{0}^{0}}\\) is the anisotropy factor(2) of the pumping radiation field, \\(\\delta\\) is the collisional depolarizing rate in units of the Einstein \\(A_{ul}\\) coefficient, and \\({\\cal H}\\) is the Hanle depolarization factor which varies between 1 (for the zero magnetic field case) and 1/5 (for a Zeeman splitting very much larger than the natural width of the upper level). We point out that \\(\\sqrt{3}\\rho_{0}^{0}\\) is the _overall_ population of the upper level, while its atomic alignment (or degree of population imbalance) is quantified by \\(\\rho_{0}^{2}=[N_{1}\\,-2N_{0}\\,+N_{-1}]/\\sqrt{6}\\) (where \\(N_{i}\\) are the individual populations of the three Zeeman sublevels of the upper level).
Footnote 2: Its possible values are such that \\(-\\frac{1}{2}\\,\\lesssim\\,\\sqrt{2}{\\cal A}\\,\\lesssim\\,1\\). (Note that there is a typing error in Eq. (11) of [21], since the inequalities given there are correct for \\(2{\\cal A}\\), _not_ for \\({\\cal A}\\).)
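A minimal numerical sketch of Eq. (2) may help fix ideas. The anisotropy factor, depolarizing rate and Hanle factors used below are assumed illustrative values (chosen within the ranges quoted above), not results of any radiative transfer calculation; the helper for \(\rho_{0}^{2}\) simply encodes the definition given in the text:

```python
import numpy as np

def sigma20(H, delta, A):
    """Fractional alignment of the upper level, Eq. (2): H/(1+delta) * A."""
    return H / (1.0 + delta) * A

def rho20_from_populations(N_plus1, N_0, N_minus1):
    """Alignment rho^2_0 = (N_{+1} - 2 N_0 + N_{-1}) / sqrt(6) of a J=1 level."""
    return (N_plus1 - 2.0 * N_0 + N_minus1) / np.sqrt(6.0)

# Assumed illustrative values: moderate anisotropy, weak collisions, and the
# two limiting Hanle factors H = 1 (zero field) and H = 1/5 (saturation).
A, delta = 0.1, 0.5
for H in (1.0, 0.2):
    print(f"H = {H:.1f}:  sigma^2_0 = {sigma20(H, delta, A):.4f}")

# Consistency check of the alignment definition: equal populations -> 0.
print("rho^2_0 (equal populations) =", rho20_from_populations(1.0, 1.0, 1.0))
```

The sketch makes explicit that even a saturated microturbulent field only reduces the alignment by a factor of five, while collisions enter through the \(1+\delta\) denominator.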
A key question is the following(3): to what extent can the atomic polarization of long-lived atomic levels survive the partial Hanle-effect destruction produced by highly inclined magnetic fields having intensities in the gauss range? The answer to this question is of the greatest importance for a correct diagnostics of the magnetic fields of the outer solar atmosphere (chromosphere, transition region and corona). This is because, as clarified below, the physical origin of the above-mentioned "enigmatic" polarization signals is the existence of a significant amount of population imbalances and coherences in the _metastable_ lower-levels of their respective spectral lines.
Footnote 3: In this article, the atomic polarization of a given atomic level is quantified by means of the spherical tensor components of its atomic density matrix and the quantization-axis (the \\(z\\)-axis) is taken along the stellar radius (see [21]).
Interestingly, the upper level of some of the "enigmatic" spectral lines cannot carry any atomic alignment (because it has \(J_{u}=0\) or \(J_{u}=1/2\), as is the case with the "blue" line of the He i 10830 A multiplet of Fig. 1 or with the Ca ii 8662 A line of Fig. 2, respectively). For this type of lines there is no contribution of upper-level atomic polarization to the \(Q\) and \(U\) components of the _emission_ vector (_i.e._, to \(\epsilon_{Q}\) and \(\epsilon_{U}\)), simply because such upper levels cannot carry any atomic alignment. Moreover, the contributions to \(\epsilon_{Q}\) and \(\epsilon_{U}\) arising from the Zeeman splitting of the lower level (_i.e._, due to the transverse Zeeman effect) are negligible for the "weak" magnetic fields of prominences, filaments and of the "quiet" solar chromosphere and corona. This type of lines (with \(J_{u}=0\) or \(J_{u}=1/2\)) may thus be called "null" lines, because the spontaneously emitted radiation that follows the anisotropic radiative excitation is virtually _unpolarized_.
However, \(\eta_{Q}\) and \(\eta_{U}\) can have sizable values if a significant amount of lower-level atomic polarization is present. In principle, this is possible for the aforementioned helium and calcium lines because their lower levels can be polarized (they have \(J_{l}\)=1 and \(J_{l}=3/2\), respectively). When a sizable amount of atomic polarization is present in such lower levels, as happens in the outer solar atmosphere, then the role of the emissivity in Stokes \(Q\) and \(U\) is played exclusively by the terms \(-\eta_{Q}\,I\) and \(-\eta_{U}\,I\) that arise in their respective radiative transfer equations. If the Stokes-\(I\) intensity along the line of sight is important enough (as it is for the on-the-disk observations of Figs. 1, 2 and 3), then we can have an important contribution of the absorption process itself to the emergent linear polarization. We call this mechanism _dichroism_ in a weakly magnetized medium; it has, we would like to stress, nothing to do with the transverse Zeeman effect (see [17]). This _dichroism_ mechanism, which requires the presence of a sizable amount of lower-level polarization, plays a crucial role in producing the observed "enigmatic" linear polarization signals in a variety of chromospheric lines [20, 21, 22, 23, 16].
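The dichroism mechanism can be illustrated with a toy transfer calculation. Assuming a "null" line (\(\epsilon_{Q}=0\)), constant absorption coefficients and a constant Stokes-\(I\) background - all simplifying assumptions that are not part of the rigorous treatment of Section 5 - the Stokes-\(Q\) transfer equation reduces to \(dQ/ds=-\eta_{I}Q-\eta_{Q}I\), whose solution relaxes to \(Q/I=-\eta_{Q}/\eta_{I}\):

```python
# Toy slab with constant coefficients: for a "null" line (eps_Q = 0) the
# Stokes-Q transfer equation reduces to dQ/ds = -eta_I * Q - eta_Q * I.
# With I ~ const along the ray, Q relaxes to -(eta_Q/eta_I) * I, i.e. the
# emergent linear polarization is produced purely by dichroic absorption.
eta_I, eta_Q, I0 = 1.0, 0.01, 1.0      # assumed illustrative values
ds, n_steps = 0.01, 2000

Q = 0.0
for _ in range(n_steps):               # simple explicit Euler integration
    Q += ds * (-eta_I * Q - eta_Q * I0)

print(f"Q/I after integration   : {Q / I0:.5f}")
print(f"analytic limit -eta_Q/eta_I: {-eta_Q / eta_I:.5f}")
```

The emergent fractional polarization is thus fixed by the ratio of the dichroic to the intensity absorption coefficient, i.e. by the lower-level atomic polarization alone.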
The conclusion that some of the "enigmatic" linear polarization signals are due to _dichroism_ demonstrates that a sizable amount of atomic polarization is present in the lower levels of such spectral lines. As mentioned above, such lower-levels are metastable (_i.e._, they are long-lived atomic levels). According to the basic Hanle-effect Eq. (1), their atomic polarization is vulnerable to magnetic fields of very low intensity (_i.e._, to fields \(B\,\gtrsim\,10^{-3}\) gauss!). This magnetic depolarization takes place for sufficiently inclined fields with respect to the radial direction of the star (_i.e._, for \(\theta_{B}\,\gtrsim\,10^{\circ}\)). Unfortunately, the particular conclusion of Landi Degl'Innocenti that the atomic polarization of the hyperfine components of the ground level of sodium does not survive sufficiently in the presence of turbulent or canopy-like horizontal fields stronger than about 10 milligauss [18] has led to unjustifiable reinforcements of the belief that the atomic polarization of _any_ long-lived atomic level has to be insignificant in the presence of highly inclined solar magnetic fields having intensities in the gauss range [31, 32, 33]. If this belief were correct in general, then it would be justified to conclude that the magnetic field throughout much of the "quiet" solar chromosphere has to be either extremely low (with \(B\,\lesssim\,0.01\) gauss) or, alternatively, oriented fairly close to the stellar radial direction (but having intensities in the gauss range), in contradiction with the observational results [34, 35] obtained from spectral lines whose lower level is intrinsically unpolarizable.
## 5 Multilevel modelling of the Hanle and Zeeman effects: diagnostics of chromospheric magnetic fields
The physical interpretation of weak polarization signals requires calculating the polarization of the atomic or molecular levels within the framework of a rigorous theory for the generation and transfer of polarized radiation. A suitable theory for many spectral lines of diagnostic interest is the density matrix polarization transfer theory of Landi Degl'Innocenti, which is based on the Markovian assumption of complete frequency redistribution [36, 37]. This theory provides a physically consistent description of scattering phenomena if the spectrum of the pumping radiation is flat across a sufficiently large frequency range \(\Delta\nu\) [38](1).
Footnote 1: The required extension of this \(\Delta\nu\)-interval depends on whether or not coherences among Zeeman sublevels of _different_ \(J\)-levels can be neglected [38]. If they need to be taken into account (as occurs, _e.g._, with the He i D\({}_{3}\) multiplet at 5876 Å) then \(\Delta\nu\) has to be of the order of the frequency range of the multiplet. However, if such coherences can be neglected (as happens, _e.g._, when modelling the Hanle effect in the Ca ii IR triplet) then \(\Delta\nu\) needs to be only larger than the inverse lifetime of the atomic levels.
The theoretical modelling of the He i 10830 A multiplet in solar prominences and filaments (see the solid line of Fig. 1) is based on the density matrix theory [36, 37, 16]. Trujillo Bueno _et al._[16] have assumed a slab of He i atoms lying at about 40" above the solar photosphere, from where it is illuminated by unpolarized and spectrally-flat radiation. They have adopted a realistic multiterm model atom in the incomplete Paschen-Back effect regime. They also take into account coherences among magnetic sublevels of each \(J\)-level, and between magnetic sublevels of the different \(J\)-levels of each term (because they are important for some terms of the model atom, like _e.g._ the upper term of the D\({}_{3}\) multiplet). From the fit to the spectropolarimetric observation of the disk-center filament (open circles of Fig. 1) they infer a magnetic field of 20 gauss inclined by about 105 degrees with respect to the radial direction through the observed point. The agreement with the spectropolarimetric observation is remarkable. It demonstrates that a very significant amount of the atomic polarization that is induced by optical pumping processes in the metastable \(2^{3}S_{1}\) lower-level survives the partial Hanle-effect destruction due to horizontal magnetic fields with intensities in the gauss range, and produces sizable linear polarization signals.
As is well known, prominences and filaments are located tens of thousands of kilometers above the solar photosphere and their confining magnetic field does _not_ have a random azimuthal component within the spatio-temporal resolution element of the observation. Therefore, one may ask whether the above-mentioned belief can be safely applied to the solar chromosphere, where the degree of anisotropy of the pumping radiation is significantly lower and the magnetic fields may have a more complex topology. This issue is investigated for the Ca ii IR triplet by the authors in [22, 23].
Firstly, we have considered the zero magnetic field reference case and demonstrated that the "enigmatic" relative \(Q/I\) amplitudes (among the three lines) observed by Stenflo _et al._[14] are the natural consequence of the existence of a sizable amount of atomic polarization in the metastable levels \({}^{2}\)D\({}_{3/2}\) and \({}^{2}\)D\({}_{5/2}\) (which are the lower-levels of the Ca ii IR triplet). Secondly, we have investigated the Hanle effect in the IR triplet at 8498, 8542 and 8662 A considering a realistic multilevel atomic model. Figure 4 is one of our most recent and interesting results, which we will describe in full detail in forthcoming publications. It shows the fractional linear polarization calculated at \(\mu=0.1\) (about 5" from the limb) assuming magnetic fields of given inclination, but with a random azimuthal component within the spatio-temporal resolution element of the observation.
The results of this figure indicate that, basically, there are two magnetic-field topologies (assuming that the magnetic field lines have a random azimuthal component over
Figure 4: The fractional linear polarization of the Ca ii IR triplet calculated at \\(\\mu=0.1\\) (about 5” from the limb) in an isothermal atmosphere with T=6000 K. Each curve corresponds to the indicated inclination (\\(\\theta_{B}\\)) of the assumed random-azimuth magnetic field.
the spatio-temporal resolution element of the observations) for which the limb polarization signals of the 8542 and 8662 A lines can have amplitudes with \(Q/I\,\gtrsim\,0.1\%\) (_i.e._, of the order of the observed ones). As one could have expected, the first topology corresponds to magnetic fields with inclinations \(\theta_{B}\,\lesssim\,30^{\circ}\). The second corresponds to magnetic fields which are practically parallel to the solar surface, _i.e._, "horizontal" fields with \(80^{\circ}\,\lesssim\,\theta_{B}\,\lesssim\,100^{\circ}\). This demonstrates that a significant amount of the atomic polarization that is induced by optical pumping processes in the metastable \({}^{2}\)D\({}_{3/2}\) lower-level survives the partial Hanle-effect destruction produced by non-resolved canopy-like horizontal fields with intensities in the gauss range, and generates significant linear polarization signals via the dichroism mechanism.
The spectropolarimetric observation of Fig. 2 is only one example among many other different cases of our GCT observations. The sizable Stokes \(V/I\) signal of Fig. 2 indicates that we were observing here a moderately magnetized region close to the solar limb. Within the framework of the CRD theory of line formation (see [37]), this particular observation of Fig. 2 cannot be modelled assuming a random azimuth magnetic field, otherwise Stokes \(U\) would have been undetectable. It would be of interest to confirm with other telescopes the detection of that significant \(U/I\) signal for the 8662 A Ca ii line, because it can only be due to the existence of quantum interferences (coherences!) among the Zeeman sublevels of the metastable \({}^{2}\)D\({}_{3/2}\) lower-level (see Section 7 in [21]). For this particular observation of Fig. 2 a good fit can be obtained assuming deterministic magnetic fields with intensities in the gauss range and having inclinations \(\theta_{B}\,\lesssim\,30^{\circ}\) (see the multilevel Hanle and Zeeman modelling of Fig. 5). In any case, observations of more "quiet" and more "active" solar limb regions have been performed. In some regions \(Q\) is detected, whereas \(U{\approx}0\) and/or \(V{\approx}0\). In other regions \(V\) is detected, but \(Q{\approx}U{\approx}0\). The physical interpretation of these spectropolarimetric observations in terms of the Hanle and Zeeman effects is giving us valuable clues about the intensities and magnetic field topologies in different regions close to the solar limb.
Figure 5: The emergent Stokes parameters of the Ca ii 8662 Å line calculated at \\(\\mu=0.1\\) in the FAL-C semi-empirical model. We have assumed a deterministic magnetic field of 20 gauss that is inclined by \\(25^{\\circ}\\) with respect to the radial direction. This figure is to be compared with the observational results of Fig. 2.
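The role of the random azimuth assumption discussed above can be checked with a trivial numerical average: the azimuth-dependent parts of the emergent linear polarization scale as \(\cos 2\chi\) and \(\sin 2\chi\), with \(\chi\) the magnetic field azimuth, so a uniform azimuth average cancels them and, in particular, leaves no net Stokes \(U\). This is only a schematic consistency check, not a substitute for the multilevel calculations of [22, 23]:

```python
import numpy as np

# Schematic check: the azimuth-dependent parts of the emergent linear
# polarization scale as cos(2*chi) and sin(2*chi), where chi is the field
# azimuth. Averaging over a random azimuth therefore leaves only the
# chi-independent Q contribution and no net U.
rng = np.random.default_rng(0)
chi = rng.uniform(0.0, 2.0 * np.pi, 1_000_000)

Q_dep = np.cos(2.0 * chi).mean()   # azimuth-dependent Q part -> 0
U_dep = np.sin(2.0 * chi).mean()   # azimuth-dependent U part -> 0

print(f"<cos 2chi> = {Q_dep:+.4f}, <sin 2chi> = {U_dep:+.4f}")
```

This is why the detection of a sizable \(U/I\) signal in Fig. 2 forces the deterministic-field interpretation adopted above.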
## 6 Concluding remarks
The physical origin of the \"enigmatic\" linear polarization signals observed in a variety of chromospheric lines is the existence of atomic polarization in their metastable lower-levels, which permits the operation of a _dichroism_ mechanism that has nothing to do with the transverse Zeeman effect. Therefore, the absorption process itself plays a key role in producing the linear polarization signals observed in the \"quiet\" solar chromosphere as well as in solar filaments.
The population imbalances and coherences among the Zeeman sublevels of such _long-lived_ atomic levels can be sufficiently significant in the presence of horizontal magnetic fields having intensities in the gauss range (see, however, [39] concerning the very particular case of the 'enigmatic' sodium D\({}_{1}\) line). Therefore, in general, one should not feel obliged to conclude that the magnetic fields throughout the "quiet" solar chromosphere have to be either extremely low (_i.e._, with intensities \(B\,\lesssim\,10\) mG) or, alternatively, oriented preferentially along the radial direction. The physical interpretation of our spectropolarimetric observations of chromospheric lines in terms of the Hanle and Zeeman effects indicates that the magnetic field topology may be considerably more complex, having both moderately inclined and practically horizontal field lines with intensities above the milligauss range. A physically plausible scenario that might lead to polarization signals in agreement with the observations is that resulting from the superposition of myriads of different loops of magnetic field lines connecting opposite polarities. This suggested magnetic field topology is somewhat reminiscent of the magnetic structure model of the "quiet" transition region proposed by Dowdy _et al._[40], but scaled down to the spatial dimensions of the solar chromosphere.
###### Acknowledgements.
We are grateful to the following scientists for their collaboration and for useful discussions: Manolo Collados, Olaf Dittmann, Egidio Landi Degl'Innocenti, Valentin Martinez Pillet, Laura Merenda, Frederic Paletou and Meir Semel. This work is part of the EC-TMR European Solar Magnetometry Network and has been partly funded by the Spanish Ministerio de Ciencia y Tecnologia through project AYA2001-1649.
## References
* [1]Steiner O., in _Encyclopedia of Astronomy and Astrophysics_, edited by P. Murdin, (Nature Publishing Group, London; and Institute of Physics, Bristol) 2001, pp. 340-343.
* [2]Chapman G.A. Sheeley N.R., in _IAU Symposium 35, Structure and Development of Solar Active Regions_, edited by K.O. Kiepenheuer, (Reidel, Dordrecht) 1968, pp. 161-173.
* [3]Pope T. Mosher J., _Solar Phys._, **44** (1975) 3.
* [4]Giovanelli R. G., _Solar Phys._, **68** (1980) 49.
* [5]Giovanelli R. G. Jones H. P., _Solar Phys._, **79** (1982) 267.
* [6]Jones H.P. Giovanelli R.G., _Solar Phys._, **87** (1983) 37.
* [7]Solanki S.K. Steiner O., _Astron. Astrophys._, **234** (1990) 519.
* [8]Ayres T., in _Encyclopedia of Astronomy and Astrophysics_, edited by P. Murdin, (Nature Publishing Group, London; and Institute of Physics, Bristol) 2001, pp. 346-350.
* [9]Evans D.J. Roberts B., _Nature_, **355** (1992) 230.
* [10]Faurobert-Scholl M., _Astron. Astrophys._, **285** (1994) 655.
* [11]Jones H. P., in _Chromospheric Diagnostics and Modelling_, edited by B. Lites, (National Solar Observatory, Sunspot) 1985, pp. 10-12.
* [12]Stenflo J.O. Keller C., _Astron. Astrophys._, **321** (1997) 927.
* [13]Lin H. Penn M.J. Kuhn J.R., _Astrophys. J._, **493** (1998) 978.
* [14]Stenflo J.O. Keller C. Gandorfer A., _Astron. Astrophys._, **355** (2000) 789.
* [15]Trujillo Bueno J. Collados M. Paletou F. Molodji G., in _Advanced Solar Polarimetry: theory, observations and instrumentation. 20th NSO/SP Summer Workshop_, edited by M. Sigwarth, ASP Conf. Ser., **236** (2001) 141.
* [16]Trujillo Bueno J. Landi Degl'Innocenti E. Collados M. Merenda L. Manso Sainz R., _Nature_, **415** (2002) 403.
* [17]Trujillo Bueno J. Landi Degl'Innocenti E., _Astrophys. J. Lett._, **482** (1997) 183.
* [18]Landi Degl'Innocenti E., _Nature_, **392** (1998) 256.
* [19]Landi Degl'Innocenti E., in _Solar Polarization_, edited by K.N. Nagendra and J.O. Stenflo, (Kluwer Academic Publishers, Dordrecht) 1999, pp. 61-71.
* [20]Trujillo Bueno J., in _Solar Polarization_, edited by K.N. Nagendra and J.O. Stenflo, (Kluwer Academic Publishers, Dordrecht) 1999, pp. 73-96.
* [21]Trujillo Bueno J., in _Advanced Solar Polarimetry: theory, observations and instrumentation. 20th NSO/SP Summer Workshop_, edited by M. Sigwarth, ASP Conf. Ser., **236** (2001) 161.
* [22]Manso Sainz R. Trujillo Bueno J., _Physical Review Letters_, **91** (2003) 111102.
* [23]Trujillo Bueno J. Manso Sainz R., in _Magnetic Fields Across the Hertzsprung-Russell Diagram_, edited by G. Mathys, S.K. Solanki and D.T. Wickramasinghe, ASP Conf. Ser., **248** (2001) 83.
* [24]Stenflo J.O. Gandorfer A. Wenzler T. Keller C.U., _Astron. Astrophys._, **367** (2001) 1033.
* [25]Gandorfer A., _The Second Solar Spectrum: a high spectral resolution polarimetric survey of scattering polarization at the solar limb in graphical representation_, (vdf Hochschulverlag AG an der ETHZ, Zurich) 2000.
* [26]Dittmann O. Trujillo Bueno J. Semel M. Lopez Ariste A., in _Advanced Solar Polarimetry: theory, observations and instrumentation. 20th NSO/SP Summer Workshop_, edited by M. Sigwarth, ASP Conf. Ser., **236** (2001) 125.
* [27]Martinez Pillet V. Trujillo Bueno J. Collados M., in _Advanced Solar Polarimetry: theory, observations and instrumentation. 20th NSO/SP Summer Workshop_, edited by M. Sigwarth, ASP Conf. Ser., **236** (2001) 133.
* [28]Martinez Pillet V. Collados M. Sanchez Almeida J., in _High resolution solar physics: theory, observations and techniques. 19th NSO/SP Summer Workshop_, edited by T.R. Rimmele, K.S. Balasubramaniam and R.R. Radick, ASP Conf. Ser., **183** (1999) 264.
* [29]Stenflo J.O. Twerenbold D. Harvey J.W. Brault J.W., _Astron. Astrophys. Suppl._, **54** (1983) 505.
* [30]Trujillo Bueno J. Manso Sainz R., _Astrophys. J._, **516** (1999) 436.
* [31]Stenflo J.O., _Solar Magnetic Fields: Polarized Radiation Diagnostics_, (Kluwer Academic Publishers, Dordrecht) 1994.
* [32]Stenflo J.O., _Astron. Astrophys._, **324** (1997) 344.
* [33]Stenflo J.O., in _Advanced Solar Polarimetry: theory, observations and instrumentation. 20th NSO/SP Summer Workshop_, edited by M. Sigwarth, ASP Conf. Ser., **236** (2001) 97.
* [34]Bianda M. Stenflo J.O. Solanki S., _Astron. Astrophys._, **337** (1998) 565.
* [35]Bianda M. Stenflo J.O. Solanki S., _Astron. Astrophys._, **350** (1999) 1060.
* [36]Landi Degl'Innocenti E., _Solar Phys._, **79** (1982) 291.
* [37]Landi Degl'Innocenti E., _Solar Phys._, **85** (1983) 3.
* [38]Landi Degl'Innocenti E. Landi Degl'Innocenti M. Landolfi M., in _Science with THEMIS_, edited by N. Mein and S. Sahal-Brechot, (Publications of Paris Observatory, Paris) 1997, p. 59.
* [39]Trujillo Bueno J. Casini R. Landolfi M. Landi Degl'Innocenti E., _Astrophys. J. Lett._, **566** (2002) 53.
* [40]Dowdy J.F. Jr. Rabin D. Moore R.L., _Solar Phys._, **105** (1986) 35.

**Summary.** The only way to obtain reliable empirical information on the intensity and topology of the weak magnetic fields of the "quiet" solar chromosphere is via the measurement and rigorous physical interpretation of polarization signals in chromospheric spectral lines. The observed Stokes profiles reported here are due to the Hanle and Zeeman effects operating in a weakly magnetized plasma that is in a state far from local thermodynamic equilibrium. The physical origin of their "enigmatic" linear polarization \(Q\) and \(U\) components is the existence of atomic polarization in their metastable lower-levels, which permits the action of a dichroism mechanism that has nothing to do with the transverse Zeeman effect. It is also pointed out that the population imbalances and coherences among the Zeeman sublevels of such long-lived atomic levels can survive in the presence of horizontal magnetic fields having intensities in the gauss range, and produce significant polarization signals. Finally, it is shown how the most recent developments in the observation and theoretical modelling of weak polarization signals are facilitating fundamental new advances in our ability to investigate the magnetism of the outer solar atmosphere via spectropolarimetry.
PACS 96.60.-j - Solar physics.
PACS 95.30.Gv - Radiation mechanisms; polarization.
PACS 95.30.Jx - Radiative transfer; scattering.
PACS 32.80.Bx - Level crossing and optical pumping.
PACS 01.30.Cc - Invited review in conference proceedings.
**Image Classification Using SVMs:**
**One-against-One Vs One-against-All**
\\({}^{*}\\)Gidudu Anthony, \\({}^{*}\\) Hulley Gregg and \\({}^{*}\\)Marwala Tshilidzi
\\({}^{*}\\)Department of Electrical and Information Engineering, University of the Witwatersrand,
Johannesburg, Private Bag X3, Wits, 2050, South Africa
Respective Tel.: +27117177261, +27117177236, +27117177217
Fax: +27114031929
[email protected], [email protected], [email protected]
**Keywords:** Support Vector Machines, One-Against-One, One-Against-All
### Introduction
Over the last three decades or so, remote sensing has increasingly become a prime source of land cover information (Kramer, 2002; Foody and Mathur, 2004a). This has been made possible by advancements in satellite sensor technology, thus enabling the acquisition of land cover information over large areas at various spatial, temporal, spectral and radiometric resolutions. The process of relating pixels in a satellite image to known land cover is called image classification, and the algorithms used to effect the classification process are called image classifiers (Mather, 1987). The extraction of land cover information from satellite images using image classifiers has been the subject of intense interest and research in the remote sensing community (Foody and Mathur, 2004b). Some of the traditional classifiers that have been in use in remote sensing studies include the maximum likelihood, minimum distance to means and the box classifier. As technology has advanced, new classification algorithms have become part of the mainstream image classifiers, such as decision trees and artificial neural networks. Studies have been made to compare these new techniques with the traditional ones, and they have been observed to post improved classification accuracies (Peddle et al. 1994; Rogan et al. 2002; Li et al. 2003; Mahesh and Mather, 2003). In spite of this, there is still considerable scope for further increases in accuracy to be obtained and a strong desire to maximize the degree of land cover information extraction from remotely sensed data (Foody and Mathur, 2004b). Thus, research into new methods of classification has continued, and support vector machines (SVMs) have recently attracted the attention of the remote sensing community (Huang et al., 2002).
Support Vector Machines (SVMs) have their roots in Statistical Learning Theory (Vapnik, 1995). They have been widely applied to machine vision fields such as character, handwriting digit and text recognition (Vapnik, 1995; Joachims, 1998), and more recently to satellite image classification (Huang et al., 2002; Mahesh and Mather, 2003). SVMs, like Artificial Neural Networks and other nonparametric classifiers, have a reputation for being robust (Foody and Mathur, 2004a; Foody and Mathur, 2004b). SVMs function by nonlinearly projecting the training data in the input space to a feature space of higher (possibly infinite) dimension by use of a kernel function. This results in a linearly separable dataset that can be separated by a linear classifier. This process enables the classification of remote sensing datasets which are usually nonlinearly separable in the input space. In many instances, classification in high dimension feature spaces results in over-fitting in the input space; however, in SVMs over-fitting is controlled through the principle of structural risk minimization (Vapnik, 1995). The empirical risk of misclassification is minimised by maximizing the margin between the data points and the decision boundary (Mashao, 2004). In practice this criterion is softened to the minimisation of a cost factor involving both the complexity of the classifier and the degree to which marginal points are misclassified. The tradeoff between these factors is managed through a margin of error parameter (usually designated C) which is tuned through cross-validation procedures (Mashao, 2004). The functions used to project the data from input space to feature space are sometimes called kernels (or kernel machines), examples of which include polynomial, Gaussian (more commonly referred to as radial basis functions) and quadratic functions. Each function has unique parameters which have to be determined prior to classification, and they are also usually determined through a cross validation process. A deeper mathematical treatment of SVMs can be found in Christianini and Shawe-Taylor (2000), Campbell (2000) and Vapnik (1995).
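As an illustration of the cross-validation tuning of the margin-of-error parameter C (and of a kernel parameter), the following sketch uses the scikit-learn library on synthetic six-band data. The original study was carried out in MATLAB and IDRISI, so this Python fragment, including the parameter grids and the synthetic dataset, is purely an assumed stand-in:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for per-pixel spectral features (6 bands, 3 classes).
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# Tune the error-penalty C and the RBF kernel width gamma by 5-fold
# cross-validation, as described in the text for the parameter C.
grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [0.1, 1, 10, 100],
                                "gamma": [0.01, 0.1, 1.0]},
                    cv=5)
grid.fit(X, y)
print("best parameters:", grid.best_params_)
print("cross-validated accuracy: %.3f" % grid.best_score_)
```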
By their nature SVMs are intrinsically binary classifiers (Melgani and Bruzzone, 2004); however, there exist strategies by which they can be adapted to the multiclass tasks associated with remote sensing studies. Two of the common approaches are the One-Against-One (1A1) and One-Against-All (1AA) techniques. This paper seeks to explore these two approaches with a view to discussing their implications for the classification of remotely sensed images.
### SVM Multiclass Strategies
As mentioned before, SVM classification is essentially a binary (two-class) classification technique, which has to be modified to handle the multiclass tasks of real world situations, e.g. the derivation of land cover information from satellite images. Two of the common methods to enable this adaptation are the 1A1 and 1AA techniques. The 1AA approach represents the earliest and most common SVM multiclass approach (Melgani and Bruzzone, 2004) and involves the division of an N class dataset into N two-class cases. If, say, the classes of interest in a satellite image include water, vegetation and built up areas, classification would be effected by classifying water against non-water areas (i.e. vegetation and built up areas) or vegetation against non-vegetative areas (i.e. water and built up areas). The 1A1 approach on the other hand involves constructing a machine for each pair of classes, resulting in N(N-1)/2 machines. When applied to a test point, each classification gives one vote to the winning class and the point is labeled with the class having the most votes. This approach can be further modified to give weighting to the voting process. From machine learning theory, it is acknowledged that the disadvantage the 1AA approach has over 1A1 is that its performance can be compromised due to unbalanced training datasets (Gualtieri and Cromp, 1998); the 1A1 approach, however, is more computationally intensive since more SVM pairs have to be evaluated. In this paper, the performance of these two techniques is compared and evaluated to establish their performance on the extraction of land cover information from satellite images.
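The two decision rules can be sketched as follows. The fragment below builds both schemes from binary SVMs on synthetic three-class data; the data, the linear kernel and the convention of diagnosing unclassified/mixed 1AA pixels by counting positive decision values are illustrative assumptions, not the exact implementation used in this study:

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Synthetic 3-class stand-in for the water / vegetation / built-up classes.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)
n_classes = 3

# --- One-Against-One: N(N-1)/2 pairwise machines, majority voting ---
votes = np.zeros((len(X), n_classes), dtype=int)
for i, j in combinations(range(n_classes), 2):
    mask = (y == i) | (y == j)
    clf = SVC(kernel="linear").fit(X[mask], y[mask])
    pred = clf.predict(X)                      # each machine votes i or j
    votes[np.arange(len(X)), pred] += 1
y_1a1 = votes.argmax(axis=1)

# --- One-Against-All: N machines, class k against the rest ---
ova = [SVC(kernel="linear").fit(X, (y == k).astype(int)) for k in range(n_classes)]
scores = np.stack([clf.decision_function(X) for clf in ova], axis=1)
claims = (scores > 0).sum(axis=1)              # how many machines claim a pixel
y_1aa = scores.argmax(axis=1)

print("1A1 accuracy:", (y_1a1 == y).mean())
print("1AA accuracy:", (y_1aa == y).mean())
print("1AA unclassified pixels (claimed by no machine):", int((claims == 0).sum()))
print("1AA mixed pixels (claimed by several machines):", int((claims > 1).sum()))
```

In the 1AA scheme, a pixel for which no machine (or more than one machine) returns a positive decision value corresponds to the unclassified (or mixed) pixels discussed in the results section below.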
### Methodology
The study area was extracted from a 2001 Landsat scene (path 171, row 60). It is located at the source of the River Nile in Jinja, Uganda. The bands used in this research consisted of Landsat's optical bands, i.e. bands 1, 2, 3, 4, 5 and 7. The classes of interest were built up area, vegetation and water. IDRISI Andes was used for preliminary data preparation such as sectioning out the study area from the whole scene and identification of training data. This data was then exported into a form readable by MATLAB (Version 7) for further processing and to effect the classification process. The SVMs that were used included the Linear, Polynomial, Quadratic and Radial Basis Function (RBF) SVMs. Each classifier was employed to carry out 1AA and 1A1 classification. The classification results for both 1AA and 1A1 were then imported into IDRISI for georeferencing, GIS integration, accuracy assessment and derivation of land cover maps. The following four parameters formed the basis upon which the two multiclass approaches were compared: the number of unclassified pixels, the number of mixed pixels, the final accuracy assessment and the 95% level of significance of the difference between the overall accuracies of the two approaches (i.e. \(|\text{Z}|>\)1.96).
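For the fourth comparison parameter, the significance of the difference between two overall accuracies can be evaluated with a standard two-proportion Z statistic. The accuracies and sample sizes below are hypothetical numbers chosen for illustration only:

```python
import math

def z_two_proportions(p1, n1, p2, n2):
    """Test statistic for the difference between two overall accuracies
    (independent samples); |Z| > 1.96 indicates significance at 95%."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# Assumed illustrative values: 1A1 and 1AA overall accuracies from
# hypothetical confusion matrices with 500 reference pixels each.
z = z_two_proportions(0.90, 500, 0.87, 500)
print(f"Z = {z:.2f} -> {'significant' if abs(z) > 1.96 else 'not significant'} at 95%")
```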
### Results, Discussion and Conclusion
Table 1 gives a summary of the unclassified and mixed pixels resulting from 1A1 and 1AA classification. From Table 1 it is evident that the 1AA approach to multiclass classification has exhibited a higher propensity for unclassified and mixed pixels than the 1A1 approach. A graphical consequence of this is evident in the derived land cover maps shown in Figures 1 - 4. Figures 1a, 2a, 3a and 4a are a result of adopting 1A1, while Figures 1b, 2b, 3b and 4b depict classification output following the use of the 1AA approach. The 'sputtering' of black represents the unclassified pixels while that of red shows pixels belonging to more than one class. The nature of the 1AA is such that the exact class of the mixed pixels cannot be determined, and for accurate analysis all such pixels ought to be grouped together. From Table 1 and the corresponding derived land cover maps, it is clear that the 1A1 has posted more aesthetic results.
Figure 1: Derived land cover maps for the Linear SVM: (a) 1A1, (b) 1AA.
Figure 2: Derived land cover maps for the Polynomial SVM: (a) 1A1, (b) 1AA.
Figure 3: Derived land cover maps for the Quadratic SVM: (a) 1A1, (b) 1AA.
Figure 4: Derived land cover maps for the RBF SVM: (a) 1A1, (b) 1AA.
From Table 2, all accuracies would be classified as yielding very strong correlation with ground truth data. The performance of the individual SVM classifiers, however, shows that the overall classification accuracy was reduced for the linear and RBF classifiers, stayed the same for the polynomial classifier, and increased for the quadratic classifier. Further analysis of these results shows that these differences are statistically insignificant at the 95% confidence level. It can therefore be concluded that whereas one can be certain of high classification accuracy with the 1A1 approach, the 1AA approach yields approximately as good classification accuracies. The choice of which approach to adopt therefore becomes a matter of preference.
## Acknowledgements
The authors would like to acknowledge the financial support of the University of the Witwatersrand and National Research Foundation of South Africa
## References
* Campbell (2000) Campbell, C. 2000. An Introduction to kernel Methods, Radial Basis Function Networks: Design and Applications. (Berlin: Springer Verlag).
* Christianini and Shawe-Taylor (2000) Christianini, N., and Shawe-Taylor, J. 2000. An introduction to support vector machines: and other kernel-based learning methods. (Cambridge and New York: Cambridge University Press).
* Gualtieri and Cromp (1998) Gualtieri, J. A., and Cromp, R. F. 1998. Support vector machines for hyperspectral remote sensing classification. In Proceedings of the 27\\({}^{\\text{th}}\\) AIPR Workshop: Advances in Computer Assisted Recognition, Washington, DC, Oct.14\\({}^{\\text{th}}\\) -16\\({}^{\\text{th}}\\) October, 1998 (Washington, DC: SPIE), pp. 221-232.
* Huang et al. (2002) Huang, C., Davis, L. S., and Townshed, J. R. G. 2002. An assessment of support vector machines for land cover classification. International Journal of Remote Sensing, 23, 725-749.
* Joachims (1998) Joachims, T. 1998. Text categorization with support vector machines--learning with many relevant features. In Proceedings of the 10\\({}^{\\text{th}}\\) European Conference on Machine Learning, Chemnitz, Germany. (Berlin: Springer), pp. 137-142.
* Kramer (2002) Kramer J. H., 2002. Observation of the earth and its environment: Survey of missions and sensors (4\\({}^{\\text{th}}\\) Edition). (Berlin: Springer).
* Mahesh P., and Mather, P. M. 2003. An assessment of the effectiveness of decision tree methods for land cover classification. Remote Sens. Environ., 86, pp. 554-565.
* Foody, M. G., and Mathur, A. 2004a. A Relative Evaluation of Multiclass Image Classification by Support Vector Machines. IEEE Transactions on Geoscience and Remote Sensing, 42, pp. 1335-1343.
* Mather, P. M. 1987. Computer Processing of Remotely-Sensed Images: An Introduction. (New York: John Wiley and Sons).
* Vapnik, V. N. 1995. The Nature of Statistical Learning Theory. (New York: Springer-Verlag).

**Abstract:** Support Vector Machines (SVMs) are a relatively new supervised classification technique to the land cover mapping community. They have their roots in Statistical Learning Theory and have gained prominence because they are robust, accurate and are effective even when using a small training sample. By their nature SVMs are essentially binary classifiers, however, they can be adapted to handle the multiple classification tasks common in remote sensing studies. The two approaches commonly used are the One-Against-One (1A1) and One-Against-All (1AA) techniques. In this paper, these approaches are evaluated in as far as their impact and implication for land cover mapping. The main finding from this research is that whereas the 1AA technique is more predisposed to yielding unclassified and mixed pixels, the resulting classification accuracy is not significantly different from the 1A1 approach. It is the authors' conclusion therefore that ultimately the choice of technique adopted boils down to personal preference and the uniqueness of the dataset at hand.
# The high density equation of state: constraints from accelerators and astrophysics
Institute of Theoretical Physics, University of Tubingen, Germany
## 1 Introduction
The isospin dependence of the nuclear forces, which is at present only loosely constrained by data, will be explored by the forthcoming radioactive beam facilities at FAIR/GSI, SPIRAL2/GANIL and RIA. Since the knowledge of the nuclear equation-of-state (EoS) at supra-normal densities and extreme isospin is essential for our understanding of the nuclear forces as well as for astrophysical purposes, the determination of the EoS was already one of the primary goals when the first relativistic heavy ion beams started to operate. A major result of the SIS100 program at the GSI is the observation of a soft EoS for symmetric matter in the explored density range up to 2-3 times saturation density. These accelerator based experiments are complemented by astrophysical observations.
In particular the stabilization of high mass neutron stars requires a stiff EoS at high densities. There exist several observations pointing in this direction, e.g. the large radius of \(R>12\) km for the isolated neutron star RX J1856.5-3754 (shorthand: RX J1856) [1]. Measurements of high masses are also reported for compact stars in low-mass X-ray binaries (LMXBs), e.g. \(M=2.0\pm 0.1\)\(M_{\odot}\) for the compact object in 4U 1636-536 [2]. For another LMXB, EXO 0748-676, constraints for the mass \(M\geq 2.10\pm 0.28\)\(M_{\odot}\)_and_ the radius \(R\geq 13.8\pm 0.18\) km have been reported [3]. Unfortunately, one of the most prominent high mass neutron star candidates, the J0751+1807 binary pulsar with an originally reported mass of \(M=2.1\pm 0.2\)\(M_{\odot}\)[4], has been revisited and corrected down to \(M=1.26\)\(M_{\odot}\)[5]. However, very recently an extraordinarily high value of \(M=2.74\pm 0.21\)\(M_{\odot}\) (\(1\sigma\)) has been reported for the millisecond pulsar PSR J1748-2021B [6].
Contrary to a naive expectation, high mass neutron stars do, however, not stand in contradiction with the observations from heavy ion reactions, see e.g. [7, 8]. Moreover, we are in the fortunate situation that ab initio calculations of the nuclear many-body problem predict a density and isospin behavior of the EoS which is in agreement with both observations.
Hence the present contribution starts with a short survey of the predictions from many-body theory, then turns to heavy ion reactions, and finally discusses the application to neutron stars.
## 2 The EoS from ab initio calculations
In _ab initio_ calculations based on many-body techniques one derives the EoS from first principles, i.e. treating short-range and many-body correlations explicitly. This makes it possible to make predictions for the high density behavior, at least in a range where hadrons are still the relevant degrees of freedom. A typical example for a successful many-body approach is Brueckner theory (for a recent review see [9]). In the following we consider non-relativistic Brueckner and variational calculations [10] as well as relativistic Brueckner calculations [11, 12, 13]. It is a well known fact that non-relativistic approaches require the inclusion of - in net repulsive - three-body forces in order to obtain reasonable saturation properties. In relativistic treatments part of such diagrams, e.g. virtual excitations of nucleon-antinucleon pairs, are already effectively included. Fig. 1 compares now the predictions for nuclear and neutron matter from microscopic many-body calculations - DBHF [12] and the 'best' variational calculation with 3-BFs and boost corrections [10] - to phenomenological approaches (NL3 and DD-TW from [15]) and an approach based on chiral pion-nucleon dynamics [16] (ChPT+corr.). As expected the phenomenological functionals agree well at and below saturation density where they are constrained by finite nuclei, but start to deviate substantially at supra-normal densities. In neutron matter the situation is even worse since the isospin dependence of the phenomenological functionals is less constrained. The predictive power of such density functionals at supra-normal densities is restricted. _Ab initio_ calculations predict throughout a soft EoS in the density range relevant for heavy ion reactions at intermediate and low energies, i.e. up to about 3 \(\rho_{0}\). Since the \(nn\) scattering length is large, neutron matter at subnuclear densities is less model dependent. The microscopic calculations (BHF/DBHF, variational) agree well and results are consistent with 'exact' Quantum-Monte-Carlo calculations [17].
Fig. 2 compares the symmetry energy predicted from the DBHF and variational calculations to that of the empirical density functionals already shown in Fig. 1. In addition the relativistic DD-\\(\\rho\\delta\\) RMF functional [20] is included. Two Skyrme functionals, SkM\\({}^{*}\\) and the more recent Skyrme-Lyon force SkLya represent non-relativistic models. The left panel zooms the low density region while the right panel shows the high density behavior of \\(E_{\\rm sym}\\).
The low density part of the symmetry energy is by now relatively well constrained by data. Recent NSCL-MSU heavy ion data in combination with transport calculations are consistent with a value of \(E_{\rm sym}\approx 31\) MeV at \(\rho_{0}\) and rule out extremely "stiff" and "soft" density dependences of the symmetry energy [21]. The same value has been extracted [18] from low energy elastic and (p,n) charge exchange reactions on isobaric analog states, i.e. p\((^{6}He,^{6}Li^{*})\)n, measured at the HMI. At sub-normal densities recent data points have been extracted from the isoscaling behavior of fragment formation in low-energy heavy ion reactions, with the corresponding experiments carried out at Texas A&M and NSCL-MSU [19].
However, theoretical extrapolations to supra-normal densities diverge dramatically. This is crucial since the high density behavior of \\(E_{\\rm sym}\\) is essential for the structure and the stability of neutron stars. The microscopic models show a density dependence which can still be considered as _asy-stiff_. DBHF [12] is thereby stiffer than the variational results of Ref. [10]. The density
Figure 1: EoS in nuclear matter and neutron matter. BHF/DBHF and variational calculations are compared to phenomenological density functionals (NL3, DD-TW) and ChPT+corr. The left panel zooms the low density range. The figure is taken from Ref. [14].
dependence is generally more complex than in RMF theory, in particular at high densities where \\(E_{\\rm sym}\\) shows a non-linear and more pronounced increase. Fig. 2 clearly demonstrates the necessity to better constrain the symmetry energy at supra-normal densities with the help of heavy ion reactions.
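The divergence of these extrapolations can be made concrete with a simple, commonly used power-law parametrization of the symmetry energy. The exponent values below are assumptions bracketing asy-soft and asy-stiff behavior; only the saturation values \(\rho_{0}=0.16\) fm\(^{-3}\) and \(E_{\rm sym}(\rho_{0})\approx 31\) MeV are taken from the text:

```python
RHO0, ESYM0 = 0.16, 31.0     # fm^-3; MeV (empirical values quoted above)

def e_sym(rho, gamma):
    """Schematic power law: E_sym(rho) = E_sym(rho0) * (rho/rho0)^gamma."""
    return ESYM0 * (rho / RHO0) ** gamma

def slope_L(gamma):
    """Slope parameter L = 3 rho0 dE_sym/drho at rho0 = 3 gamma E_sym(rho0)."""
    return 3.0 * gamma * ESYM0

for gamma, label in [(0.5, "asy-soft"), (1.0, "asy-stiff")]:
    print(f"gamma = {gamma:.1f} ({label}):  L = {slope_L(gamma):5.1f} MeV, "
          f"E_sym(3 rho0) = {e_sym(3 * RHO0, gamma):5.1f} MeV")
```

Although both exponents are compatible with the same saturation point, they differ by tens of MeV already at \(3\,\rho_{0}\), which is precisely the regime that heavy ion reactions must constrain.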
## 3 Constraints from heavy ion reactions
Experimental data which put constraints on the symmetry energy have already been shown in Fig. 2. The problem with multi-fragmentation data from low and intermediate energy reactions is that they are restricted to sub-normal densities, up to at most saturation density. However, from low energy isospin diffusion measurements at least the slope of the symmetry energy around saturation density could be extracted [25]. This already puts an important constraint on the models when extrapolated to higher densities. It is important to notice that the slopes predicted by the ab initio approaches (variational, DBHF) shown in Fig. 2 are consistent with the empirical values. Further attempts to derive the symmetry energy at supra-normal densities from particle production in relativistic heavy ion reactions [20, 26, 27] have so far not led to firm conclusions since the corresponding signals are too small, e.g. the isospin dependence of kaon production [28].
Firm conclusions could so far only be drawn on the symmetric part of the nuclear bulk properties. To explore supra-normal densities one has to increase the bombarding energy up to relativistic energies. This was one of the major motivations of the SIS100 project at the GSI where - according to transport calculations - densities between \(1\div 3\ \rho_{0}\) are reached at bombarding energies between \(0.1\div 2\) AGeV. Sensitive observables are the collective nucleon flow and subthreshold \(K^{+}\) meson production. In contrast to the flow signal, which can be biased by surface effects and the momentum dependence of the optical potential, \(K^{+}\) mesons turned out to be an excellent probe for the high density phase of the reactions. At subthreshold energies the necessary energy has to be provided
Figure 2: Symmetry energy as a function of density as predicted by different models. The left panel shows the low density region while the right panel displays the high density range. Data are taken from [18] and [19].
by multiple scattering processes which are highly collective effects. This ensures that the majority of the \\(K^{+}\\) mesons is indeed produced at supra-normal densities. In the following I will concentrate on the kaon observable.
Subthreshold particles are rare probes. However, within the last decade the KaoS Collaboration has performed systematic high statistics measurements of \(K^{+}\) production far below threshold [24, 30]. Based on this data situation, in Ref. [22] the question of whether valuable information on the nuclear EoS can be extracted was revisited, and it was shown that subthreshold \(K^{+}\) production indeed provides a suitable and reliable tool for this purpose. In subsequent investigations the stability of the EoS dependence has been proven [23, 29].
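For orientation, the free NN threshold that defines "subthreshold" production can be computed from relativistic kinematics for the lowest channel \(NN\to N\Lambda K^{+}\). The sketch below is a textbook kinematics exercise, independent of any transport model:

```python
# Free NN threshold for the reaction N N -> N Lambda K+ :
# sqrt(s_th) = m_N + m_Lambda + m_K, and for a fixed target
# s = 2 m_N^2 + 2 m_N E_lab, so T_lab = (s_th - 4 m_N^2) / (2 m_N).
m_N, m_Lambda, m_K = 0.939, 1.116, 0.494      # masses in GeV

s_th = (m_N + m_Lambda + m_K) ** 2
T_lab = (s_th - 4.0 * m_N ** 2) / (2.0 * m_N)
print(f"free NN threshold: T_lab = {T_lab:.2f} GeV")
```

The result is about 1.58 GeV, so beam energies well below this value probe genuinely subthreshold production, where the missing energy must be accumulated through collective multiple-scattering processes.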
Excitation functions from KaoS [24, 31] are shown in Fig. 3 and compared to RQMD [22, 29] and IQMD [23] calculations. In both cases a soft (K=200 MeV) and a hard (K=380 MeV) EoS have been used within the transport approaches, with Skyrme type forces supplemented by an empirical momentum dependence. As expected, the EoS dependence is more pronounced in the heavy Au+Au system while the light C+C system serves as a calibration. The effects become even more evident when the ratio \(R\) of the kaon multiplicities obtained in Au+Au over C+C reactions (normalized to the corresponding mass numbers) is built [22, 24]. Such a ratio has the advantage that possible uncertainties which might still exist in the theoretical calculations cancel out to a large extent. Comparing the ratio shown in Fig. 4 to the experimental data from KaoS [24], where the increase of \(R\) is even more pronounced, strongly favors a soft equation of state. This result is in agreement with the conclusion drawn from the alternative flow observable [32, 33, 34, 35].
Figure 3: Excitation function of the \\(K^{+}\\) multiplicities in \\(Au+Au\\) and \\(C+C\\) reactions. RQMD [22] and IQMD [23] with in-medium kaon potential and using a hard/soft nuclear EoS are compared to data from the KaoS Collaboration [24]. The figure is taken from [14].
## 4 Constraints from neutron stars
Measurements of \"extreme\" values, like large masses or radii, huge luminosities etc. as provided by compact stars offer good opportunities to gain deeper insight into the physics of matter under extreme conditions. There has been substantial progress in recent time from the astrophysical side.
The most spectacular observation was probably the recent measurement [4] on PSR J0751+1807, a millisecond pulsar in a binary system with a helium white dwarf secondary, which implied a pulsar mass of \\(2.1\\pm 0.2\\left({+0.4\\atop-0.5}\\right){\\rm M}_{\\odot}\\) with \\(1\\sigma\\) (\\(2\\sigma\\)) confidence. This measurement has, however, been revisited and corrected down to \\(M=1.26\\ M_{\\odot}\\)[5].
There exist, however, several alternative observations pointing towards large masses, e.g. the large radius of \\(R>12\\) km for the isolated neutron star RX J1856.5-3754 (shorthand: RX J1856) [1]. Measurements of high masses are also reported for compact stars in low-mass X-ray binaries (LMXBs) as \\(M=2.0\\pm 0.1\\ M_{\\odot}\\) for the compact object in 4U 1636-536 [2]. For another LMXB, EXO 0748-676, constraints for the mass \\(M\\geq 2.10\\pm 0.28\\ M_{\\odot}\\)_and_ the radius \\(R\\geq 13.8\\pm 0.18\\) km for the same object have been reported [3]. Very recently even an extremely high mass value of \\(M=2.74\\pm 0.21\\ M_{\\odot}\\) (\\(1\\sigma\\)) has been reported for the millisecond pulsar PSR J1748-2021B [6]. According to the authors of Ref. [6] there exists only a 1 % probability that the pulsar mass is below 2 solar masses, and a 0.10 % probability that it lies within the range of conventional neutron stars, i.e. between 1.20 and 1.44 solar masses. Such an anomalously large mass would of course strongly constrain the equation of state for dense matter, even excluding many-body approaches which reach maximum masses around \\(M=2.3\\ M_{\\odot}\\).
Figure 4: Excitation function of the ratio \\(R\\) of \\(K^{+}\\) multiplicities obtained in inclusive Au+Au over C+C reactions. RQMD [22] and IQMD [23] calculations are compared to KaoS data [24]. Figure is taken from [29].

In Ref. [37] we applied more conservative upper and lower limits for the maximum mass of \\(1.6-2.5\\ M_{\\odot}\\) (initiated by the originally reported \\(2\\sigma\\) range of PSR J0751+1807 [4]). However, even such a weaker condition limits the softness of the EoS in neutron star (NS) matter considerably. The corresponding figure, Fig. 5, shows the mass versus central density for compact star configurations obtained by solving the Tolman-Oppenheimer-Volkoff (TOV) equations for a compilation of different hadronic EoSs, namely relativistic mean-field models and the microscopic DBHF model; for details see [37]. Crosses denote the maximum mass configurations, filled dots mark the critical mass and central density values where the DU cooling process becomes possible.
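Mass versus central density curves of this kind follow from integrating the TOV equations for a chosen EoS. A minimal sketch in geometrized units (G = c = 1, lengths in km), using a simple Gamma = 2 polytrope as an illustrative stand-in for the hadronic EoSs of [37]:

```python
import numpy as np
from scipy.integrate import solve_ivp

K, Gamma = 100.0, 2.0              # illustrative polytropic EoS, P = K * rho^Gamma
MSUN_KM = 1.4766                   # one solar mass in km

def eps_of_P(P):                   # total energy density for the polytrope
    rho = (P / K) ** (1.0 / Gamma)
    return rho + P / (Gamma - 1.0)

def tov(r, y):                     # TOV structure equations for (P, m)
    P, m = y
    if P <= 0.0:
        return [0.0, 0.0]
    eps = eps_of_P(P)
    dP = -(eps + P) * (m + 4.0*np.pi*r**3 * P) / (r * (r - 2.0*m))
    return [dP, 4.0*np.pi*r**2 * eps]

def surface(r, y):                 # stop the integration where the pressure vanishes
    return y[0] - 1e-12
surface.terminal = True

for rho_c in (5e-4, 1e-3, 2e-3, 4e-3):      # central densities (km^-2)
    sol = solve_ivp(tov, [1e-6, 100.0], [K * rho_c**Gamma, 0.0],
                    events=surface, rtol=1e-8, max_step=0.01)
    print(f"rho_c={rho_c:.0e}: R={sol.t[-1]:6.2f} km, M={sol.y[1,-1]/MSUN_KM:5.3f} Msun")
```

Scanning the central density and locating the turning point of M(rho_c) yields the maximum-mass configurations marked by crosses in Fig. 5.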
One might now worry about an apparent contradiction between the constraints derived from neutron stars and those from heavy ion reactions. While heavy ion reactions favor a soft EoS, high neutron star masses require a stiff EoS. The corresponding constraints are, however, complementary rather than contradictory. Intermediate energy heavy-ion reactions, e.g. subthreshold kaon production, constrain the EoS at densities up to \\(2\\div 3\\ \\rho_{0}\\), while the maximum NS mass is more sensitive to the high density behavior of the EoS. Combining the two constraints implies that the EoS should be _soft at moderate densities and stiff at high densities_. Such a behavior is predicted by microscopic many-body calculations (see Fig. 6). DBHF, BHF or variational calculations typically lead to maximum NS masses between \\(2.1\\div 2.3\\ M_{\\odot}\\) and are therefore in accordance with most of the high mass neutron star measurements (except for masses around or above \\(2.4\\div 2.5\\ M_{\\odot}\\)).
Figure 5: Mass versus central density for compact star configurations obtained for various relativistic hadronic EoSs. Crosses denote the maximum mass configurations, filled dots mark the critical mass and central density values where the DU cooling process becomes possible. According to the DU constraint, it should not occur in "typical NSs", whose masses are expected from population synthesis [36] to lie in the lower grey horizontal band. The dark grey horizontal bands indicate a conservative estimate for the possible range of maximal neutron star masses derived from recent observations.

This fact is illustrated in Fig. 6, which combines the results from heavy ion collisions and the maximum mass constraint. The figure shows the compression moduli for the EoSs in symmetric nuclear matter as well as the maximum neutron star mass obtained with the corresponding models, i.e. non-relativistic Brueckner (BHF) as well as relativistic Brueckner (DBHF) calculations [12, 13]. The BHF calculations differ essentially in the three-body forces (3-BFs) employed. In particular the isospin dependence of 3-BFs is not yet well constrained by nuclear data, which is reflected in the maximum masses obtained, though much less so in the compression moduli. The DBHF calculations differ in the elementary NN interaction applied; here, however, the results for both compression moduli and maximum neutron star masses are rather stable.
Besides the maximum masses there exist several other constraints on the nuclear EoS which can be derived from observations of compact stars, see e.g. Refs. [37, 38]. Among these, the most promising one is the Direct Urca (DU) process, which is essentially driven by the proton fraction inside the NS [39]. DU processes, e.g. the neutron \\(\\beta\\)-decay \\(n\\to p+e^{-}+\\bar{\\nu}_{e}\\), are so efficient in producing neutrinos that, even in superfluid nuclear matter, they cool NSs too fast to be in accordance with data from thermally observable NSs. Therefore, one can suppose that no DU processes should occur below the upper mass limit for "typical" NSs, i.e. \\(M_{DU}\\geq 1.5\\)\\(M_{\\odot}\\) (1.35 \\(M_{\\odot}\\) in a weak interpretation). These limits come from a population synthesis of young, nearby NSs [36] and masses of NS binaries [4]. While the present DBHF EoS leads to too fast neutrino cooling, this behavior can be avoided if a phase transition to quark matter is assumed [40]. Thus a quark phase is not ruled out by the maximum NS mass; however, the corresponding quark EoSs have to be almost as stiff as typical hadronic EoSs [40].
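The kinematic origin of the DU constraint is simple to check: among degenerate neutrons, protons and electrons, momentum conservation requires the Fermi momenta to satisfy a triangle inequality, which translates into a minimum proton fraction [39]. A short verification of the two limiting cases (the second assumes ultrarelativistic muons, valid only well above the muon threshold):

```python
# Direct Urca needs kFn <= kFp + kFe among degenerate fermions, kF = (3 pi^2 n)^(1/3).

# (a) electrons only: charge neutrality gives ne = np, so kFe = kFp and kFn <= 2 kFp
x_e = 1.0 / (1.0 + 2.0**3)                 # nn/np <= 8  ->  x = np/nb >= 1/9
print(f"DU threshold, e- only : x >= {x_e:.3f}")   # ~0.111

# (b) with ultrarelativistic muons: ne = nmu = np/2, so kFe = kFp * (1/2)^(1/3)
ratio = (1.0 + 0.5**(1.0/3.0))**3          # maximum nn/np at threshold
x_mu = 1.0 / (1.0 + ratio)
print(f"DU threshold, e- + mu-: x >= {x_mu:.3f}")  # ~0.148
```

Whether a given EoS crosses this 11-15% proton fraction inside a 1.5 solar-mass star is what separates the filled dots in Fig. 5.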
## 5 Summary
Heavy ion reactions meanwhile provide reliable constraints on the isospin dependence of the nuclear EoS at sub-normal densities up to saturation density, and for the symmetric part up to, as a conservative estimate, two times saturation density. These are complemented by astrophysical constraints derived from the measurements of extreme values for neutron star masses. As long as the neutron star mass is below 2.3 \\(M_{\\odot}\\), both the heavy ion constraint and the astrophysical constraint are in fair agreement with the predictions from nuclear many-body theory. If, however, a maximum mass around or above 2.5 \\(M_{\\odot}\\) [6] should be established, it would require an extremely stiff EoS and demand new physical pictures.

Figure 6: Combination of the constraints on the EoS derived from the maximum neutron star mass criterion and from heavy ion collisions constraining the compression modulus. Values of various microscopic BHF and DBHF many-body calculations are shown.
## References
* [1] J. E. Trumper, V. Burwitz, F. Haberl and V. E. Zavlin, Nucl. Phys. Proc. Suppl. **132**, 560 (2004).
* [2] D. Barret, J. F. Olive and M. C. Miller, Mon. Not. Roy. Astron. Soc. **361**, 855 (2005).
* [3] F. Ozel, Nature **441**, 1115 (2006).
* [4] D. J. Nice _et al._, _Astrophys. J._**634**, 1242 (2005).
* [5] Conference contribution by D. Nice, _40 Years of Pulsars_, Montreal, 2007.
* [6] P. C. C. Freire, S. M. Ransom, S. Begin, I. H. Stairs, J. W. T. Hessels, L. H. Frey and F. Camilo, arXiv:0711.0925 [astro-ph].
* [7] C. Fuchs, arXiv:0706.0130 [nucl-th].
* [8] I. Sagert, M. Wietoska, J. Schaffner-Bielich and C. Sturm, arXiv:0708.2810 [astro-ph].
* [9] M. Baldo and C. Maieron, _J. Phys._ G **34**, R243 (2007).
* [10] A. Akmal, V. R. Pandharipande, D. G. Ravenhall, _Phys. Rev._ C **58**, 1804 (1998).
* [11] T. Gross-Boelting, C. Fuchs, A. Faessler, _Nucl. Phys._ A **648**, 105 (1999).
* [12] E. van Dalen, C. Fuchs, A. Faessler, _Nucl. Phys._ A **744** 227 (2004); _Phys. Rev._ C **72**, 065803 (2005); _Phys. Rev. Lett._**95**, 022302 (2005); _Eur. Phys. J._ A **31**, 29 (2007).
* [13] P. G. Krastev, F. Sammarruca, _Phys. Rev._ C **74**, 025808 (2006).
* [14] C. Fuchs, H. H. Wolter, _Eur. Phys. J._ A **30**, 5 (2006).
* [15] S. Typel, H. H. Wolter, _Nucl. Phys._ A **656**, 331 (1999).
* [16] P. Finelli, N. Kaiser, D. Vretenar, W. Weise, _Nucl. Phys._ A **735**, 449 (2004).
* [17] J. Carlson _et al._, _Phys. Rev._ C **68**, 025802 (2003).
* [18] D. T. Khoa, W. von Oertzen, H. G. Bohlen and S. Ohkubo, _J. Phys._ G **33**, R111 (2007); D. T. Khoa _et al._, _Nucl. Phys._ A **759**, 3 (2005).
* [19] D. V. Shetty, S. J. Yennello and G. A. Souliotis, arXiv:0704.0471 [nucl-ex] (2007).
* [20] V. Baran, M. Colonna, V. Greco, M. Di Toro, _Phys. Rep._**410**, 335 (2005).
* [21] D. V. Shetty, S. J. Yennello and G. A. Souliotis, _Phys. Rev._ C **75**, 034602 (2007).
* [22] C. Fuchs, A. Faessler, E. Zabrodin, Y. E. Zheng, _Phys. Rev. Lett._**86**, 1974 (2001).
* [23] Ch. Hartnack, H. Oeschler, J. Aichelin, _Phys. Rev. Lett._**96**, 012302 (2006).
* [24] C. Sturm _et al._ [KaoS Collaboration], _Phys. Rev. Lett._**86**, 39 (2001).
* [25] L. W. Chen, C. M. Ko and B. A. Li, _Phys. Rev. Lett._**94**, 032701 (2005).
* [26] G. Ferini _et al._, _Phys. Rev. Lett._**97**, 202301 (2006).
* [27] T. Gaitanos _et al._, _Nucl. Phys._ A **732**, 24 (2004).
* [28] X. Lopez _et al._ [FOPI Collaboration], _Phys. Rev._ C **75**, 011901 (2007).
* [29] C. Fuchs, _Prog. Part. Nucl. Phys._**56**, 1 (2006).
* [30] A. Schmah _et al._ [KaoS Collaboration], _Phys. Rev._ C **71**, 064907 (2005).
* [31] F. Laue _et al._ [KaoS Collaboration], _Phys. Rev. Lett._**82**, 1640 (1999).
* [32] P. Danielewicz, _Nucl. Phys._ A **673**, 275 (2000).
* [33] T. Gaitanos _et al._, _Eur. Phys. J._ A **12**, 421 (2001); C. Fuchs, T. Gaitanos, _Nucl. Phys._ A **714**, 643 (2003).
* [34] A. Andronic _et al._ [FOPI Collaboration], _Phys. Rev._ C **64**, 041604 (2001); _Phys. Rev._ C **67**, 034907 (2003).
* [35] G. Stoicea _et al._ [FOPI Collaboration], _Phys. Rev. Lett._**92**, 072303 (2004).
* [36] S. Popov, H. Grigorian, R. Turolla and D. Blaschke, _Astron. Astrophys._**448**, 327 (2006).
* [37] T. Klähn _et al._, _Phys. Rev._ C **74**, 035802 (2006).
* [38] A. W. Steiner, M. Prakash, J. M. Lattimer, P. J. Ellis, _Phys. Rep._**411**, 325 (2005).
* [39] J. M. Lattimer, C. J. Pethick, M. Prakash and P. Haensel, _Phys. Rev. Lett._**66**, 2701 (1991).
* [40] T. Klähn _et al._, _Phys. Lett._ B **654**, 170 (2007).

Abstract: The nuclear equation of state (EoS) at high densities and/or extreme isospin is one of the long-standing problems of nuclear physics. In the last years substantial progress has been made in constraining the EoS both from the astrophysical side and from accelerator-based experiments. Heavy ion experiments support a soft EoS at moderate densities, while the possible existence of high mass neutron stars favors a stiff EoS. Ab initio calculations for the nuclear many-body problem make predictions for the density and isospin dependence of the EoS far away from the saturation point. Both the constraints from astrophysics and those from accelerator-based experiments are shown to be in agreement with the predictions of many-body theory.
# Laser Ranging for Gravitational, Lunar, and Planetary Science
Stephen M. Merkowitz
Philip W. Dabney
Jeffrey C. Livas
Jan F. McGarry
Gregory A. Neumann
and Thomas W. Zagwodzki
NASA Goddard Space Flight Center, Greenbelt MD 20771, USA
## 1 Introduction
Over the past 35 years, lunar laser ranging (LLR) from a variety of observatories to retroreflector arrays placed on the lunar surface by the Apollo astronauts and the Soviet Luna missions have dramatically increased our understanding of gravitational physics along with Earth and Moon geophysics, geodesy, and dynamics. During the past few years, only the McDonald Observatory (MLRS) in Texas and the Observatoire de la Côte d'Azur (OCA) in France have routinely made lunar range measurements. A new instrument, APOLLO, at the Apache Point facility in New Mexico is expected to become operational within the next year with somewhat increased precision over previous measurements [1].
Setting up retroreflectors was a key part of the Apollo missions, so it is natural to ask if future lunar missions should include them as well. The Apollo retroreflectors are still being used today, and the 35 years of ranging data has been invaluable for scientific as well as other studies such as orbital dynamics. However, the available retroreflectors all lie within 26 degrees latitude of the equator, and the most useful ones within 24 degrees longitude of the sub-Earth meridian, as shown in Fig. 1. This clustering weakens their geometrical strength. New retroreflectors placed at locations other than the Apollo sites would enable more detailed studies, particularly those that rely on the measurement of lunar librations. In addition, more advanced retroreflectors are now available that will reduce some of the systematic errors associated with using the Apollo arrays.
In this paper we discuss the possibility of putting advanced retroreflectors at new locations on the lunar surface. In addition, we discuss several active lunar laser ranging instruments that have the potential for greater precision and can be adapted for use on Mars. These additional options include laser transponders and laser communication terminals.
## 2 Gravitational Science From Lunar Ranging
Gravity is the force that holds the universe together, yet a theory that unifies it with other areas of physics still eludes us. Testing the very foundation of gravitational theories, like Einstein's theory of General Relativity, is critical in understanding the nature of gravity and how it relates to the rest of the physical world.
The Equivalence Principle, which states the equality of gravitational and inertial mass, is central to the theory of General Relativity. However, nearly all alternative theories of gravity predict a violation of the Equivalence Principle. Probing the validity of the Equivalence Principle is often considered the most powerful way to search for new physics beyond the standard model [2]. A violation of the Equivalence Principle would cause the Earth and Moon to fall at different rates toward the Sun resulting in a polarization of the lunar orbit. This polarization shows up in LLR as a displacement along the Earth-Sun line with a 29.53 d synodic period. The current limit on the Equivalence Principle is given by LLR: \\(\\Delta(M_{G}/M_{I})_{EP}=(-1.0\\pm 1.4)\\times 10^{-13}\\) [3].
Figure 1: Location of the lunar retroreflector arrays. The three Apollo arrays are labeled AP and the two Luna arrays are labeled LUN. ORI and SHK show the potential locations of two additional sites that would aid in strengthening the geometric coverage.
General Relativity predicts that a gyroscope moving through curved spacetime will precess with respect to the rest frame. This is referred to as geodetic or de Sitter precession. The Earth-Moon system behaves as a gyroscope with a predicted geodetic precession of 19.2 milliarcseconds/year. This is observed using LLR by measuring the lunar perigee precession. The current limit on the deviation of the geodetic precession is: \\(K_{gp}=(-1.9\\pm 6.4)\\times 10^{-3}\\)[3]. This measurement can also be used to set a limit on a possible cosmological constant: \\(\\Lambda<10^{-26}\\)km\\({}^{-2}\\)[4].
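The 19.2 mas/yr figure follows from the leading-order de Sitter formula, Omega = (3/2)(GM_sun/(c^2 a)) n, with n the orbital mean motion; a quick check with standard constants:

```python
import numpy as np

GM_sun, c = 1.32712440018e20, 299792458.0   # m^3/s^2, m/s
a  = 1.495978707e11                          # 1 AU in m
yr = 365.25 * 86400.0
n  = 2.0 * np.pi / yr                        # orbital mean motion (rad/s)

omega = 1.5 * GM_sun / (c**2 * a) * n        # geodetic precession rate (rad/s)
print(f"{omega * yr * np.degrees(1) * 3600e3:.1f} mas/yr")   # ~19.2
```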
It is also useful to look at violations of General Relativity in the context of metric theories of gravity. Post-Newtonian Parameterization (PPN) provides a convenient way to describe simple deviations from General Relativity. The PPN parameters are usually denoted as \\(\\gamma\\) and \\(\\beta\\); \\(\\gamma\\) indicates how much spacetime curvature is produced per unit mass, while \\(\\beta\\) indicates how nonlinear gravity is (self-interaction). \\(\\gamma\\) and \\(\\beta\\) are identically one in General Relativity. Limits on \\(\\gamma\\) can be set from geodetic procession measurements, but the best limits come from measurements of the gravitational time delay of light, often referred to as the Shapiro effect. Ranging measurements to the Cassini spacecraft set the current limit on \\(\\gamma\\): \\((\\gamma-1)=(2.1\\pm 2.3)\\times 10^{-5}\\)[5], which combined with LLR data provides the best limit on \\(\\beta\\): \\((\\beta-1)=(1.2\\pm 1.1)\\times 10^{-4}\\)[3].
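For scale, the round-trip Shapiro delay near superior conjunction is dt = 2(1+gamma)(GM_sun/c^3) ln(4 r_E r_S / b^2); the spacecraft heliocentric distance of about 8 AU and the grazing impact parameter of 1.6 solar radii are rough values assumed here for the Cassini 2002 conjunction:

```python
import numpy as np

GM, c, Rsun, AU = 1.32712440018e20, 299792458.0, 6.957e8, 1.496e11
rE, rS, b, gamma = 1.0*AU, 8.0*AU, 1.6*Rsun, 1.0    # gamma = 1 in GR

dt = 2.0 * (1.0 + gamma) * GM / c**3 * np.log(4.0 * rE * rS / b**2)
print(f"round-trip Shapiro delay ~ {dt*1e6:.0f} microseconds")   # a few hundred
# d(dt)/d(gamma) = dt/2 ~ 130 us, so ns-level timing probes gamma near the 1e-5 level
```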
The strength of gravity is given by Newton's gravitational constant G. Some scalar-tensor theories of gravity predict some level of time variation in G. This will lead to an evolving scale of the solar system and a change in the mass of compact bodies due to a variable gravitational binding energy. This variation will also show up on larger scales, such as changes in the angular power spectrum of the cosmic microwave background [6]. The current limit on the time variation of G is given by LLR: \\(\\dot{G}/G=(4\\pm 9)\\times 10^{-13}\\)/year [3].
The above effects are the leading gravitational limits that have been set by LLR, but many more effects can be studied using LLR data at various levels. These include gravitomagnetism (frame-dragging), \\(1/r^{2}\\) force law, and even tests of Newton's third law [7].
## 3 Lunar Science From Lunar Ranging
Several areas of lunar science are aided by LLR. First, the orientation of the Moon can be used for geodetic mapping. The current IAU rotation model, with respect to which images and altimetry are registered, has errors at the level of several hundred meters. A more precise model, DE403 [8], is being considered that is based on LLR and dynamical integration, but will require updating since it uses data only through 1994. Errors in this model are believed to be several meters. Further tracking will quantify the reliability of this and future models for lunar exploration.
Second, LLR helps provide the ephemeris of the Moon and solar system bodies. The position of the lunar center-of-mass is perturbed by planetary bodies, particularly Venus and Jupiter, at the level of 100's of meters to more than 1 km. LLR is an essential constraint on the development of planetary ephemerides and navigation of spacecraft.
LLR can also be used to study the internal structure of the Moon, such as the possible detection of a solid inner core. The second-degree tidal lunar Love numbers are detected by LLR, as well as their phase shifts. From these measurements, a fluid core of 20% the Moon's radius is suggested. A lunar tidal dissipation of \\(Q=30\\pm 4\\) has been reported to have a weak dependence on tidal frequency. Evidence for the oblateness of the lunar fluid-core/solid-mantle boundary may be reflected in a century-scale polar wobble frequency. The lunar vertical and horizontal elastic tidal displacement Love numbers \\(h_{2}\\) and \\(l_{2}\\) are known to no better than 25% of their values, and the lunar dissipation factor \\(Q\\) and the gravitational potential tidal Love number \\(k_{2}\\) no better than 11%. These values have been inverted jointly for structure and density of the core,[9, 10] implying a semi-liquid core and regions of partial melt in the lunar mantle, but such inversions require stochastic sampling and yield probabilistic outcomes.
## 4 Rationale for Additional Lunar Ranging Sites
While a single Earth ranging station may in principle range to any of the four usable retroreflectors on the near side from any longitude during the course of an observing day, these observations are nearly the same in latitude with respect to the Earth-Moon line, weakening the geometric strength of the observations. Additional observatories improve the situation somewhat, but of the stations capable of ranging to the Moon, only Mt. Stromlo in Australia is not situated at similar northern latitudes. The frequency and quality of observations varies greatly with the facility and power of the laser employed. Moreover, the reflector cross sections differ substantially. The largest reflector, Apollo 15, has 300 cubes and returns only a few photons per minute to MLRS. The other reflectors have 100 cubes or less, and proportionately smaller rates. Stations and reflectors are unevenly represented, so that in recent years most ranging has occurred between one ground station and one reflector. Over the past six years, 85% of LLR data has been taken from MLRS and 15% from OCA; 81% of these were from the Apollo 15 reflector, 10% from Apollo 11, 8% from Apollo 14, and about 1% from Lunokhod 2.[11] The solar noise background and thermal distortion make ranging to some reflectors possible only around the quarter-moon phase. The APOLLO instrument should be capable of ranging during all lunar phases.
The first LLR measurements had a precision of about 20 cm. Over the past 35 years, the precision has increased only by a factor of 10. The new APOLLO instrument has the potential to gain another factor of 10, achieving mm level precision, but this capability has not yet been demonstrated.[12] Poor detection rates are a major limiting factor in past LLR. Not every laser pulse sent to the Moon results in a detected return photon, leading to poor measurement statistics. MLRS typically collects less than 100 photons per range measurement with a scatter of about 2 cm. The large collecting area of the Apache Point telescope and the efficient avalanche photodiode arrays used in the APOLLO instrument should result in thousands of detections (even multiple detections per pulse) leading to a potential statistical un certainty of about 1 mm. Going beyond this level of precision will likely require new lunar retroreflectors or laser transponders that are more thermally stable and are designed to reduce the error associated with the changing orientation of the array with respect to the Earth due to lunar librations.
Several tests of General Relativity and aspects of our understanding of the lunar interior are currently limited by present LLR capabilities. Simply increasing the precision of the LLR measurement, either through ground station improvement or through the use of laser transponders, will translate into improvements in these areas.
Additional ranging sites will also help improve the science gained through LLR. The structure and composition of the interior require dynamic measurements of the lunar librations, while tests of General Relativity require the position of the lunar center of mass. In all, six degrees of freedom are required to constrain the geometry of the Earth-Moon system (in addition to Earth orientation). A single ranging station and reflector is insufficient to accurately determine all six, even given the rotation of the Earth with respect to the Moon.
To illustrate the importance of adding high-cross-section reflectors (or transponders) near the lunar limb, we performed an error analysis based on the locations of two observing stations that are currently operating and the frequency with which normal points have been generated over the last 10 years. While data quality has improved over the years, it has reached a plateau for the last 15 years or so. The presently operating stations are comparable in quality, and we assume an average precision of 2.5 cm for all observations. The normal point accumulation is heavily weighted toward Apollo 15, and a negligible number of returns are obtained from Lunokhod 2. We anticipate that ranging to a reflector with 4x higher cross section than Apollo 15 would approach 1 cm quality, simply by the increased return rate.
The model assumes a fixed Earth-Moon geometry to calculate the sensitivity of the position determination jointly with lunar rotation along three axes parallel to Earth's X-Y-Z coordinate frame at a moment in time when the Moon lies directly along the positive X axis. The Z axis points North and Y completes the right-hand system. Partial derivatives of range are calculated with respect to perturbations in position and orientation, where orientation is scaled from radians to meters by an equatorial radius of 1738 km. The analysis makes no prior assumptions regarding the dynamical state of the Moon.
The normal equations are weighted by the frequency of observations at each pair of ground stations and reflectors over the last ten years. We then replace some of the observations with ranges to one or more new reflectors. The results are given in Table 1.
The addition of one or more reflectors would improve the geometrical precision of a normal point by a factor of 1.5 to nearly 4 at the same level of ranging precision. Such improvements directly scale to improvements in the measurement of ephemeris and physical librations. The uncertainty in Moon-Earth distance (X) is highly correlated with uncertainty in position relative to the Earth, since the existing stations lie at similar latitudes. An advanced reflector with high cross section would enable southern hemisphere ground stations such as Mt. Stromlo to make more frequent and precise observations. The geometric sensitivity to position is dramatically improved by incorporating such a ground station, as shown in the last row of Table 1.

| X | Y | Z | RotX | RotY | RotZ | Scenario |
|---|---|---|------|------|------|----------|
| 0.265 | 6.271 | 23.294 | 15.958 | 0.179 | 0.225 | MLRS and OCA |
| 0.263 | 3.077 | 23.305 | 7.611 | 0.174 | 0.140 | 25% of observations to ORI |
| 0.259 | 2.840 | 23.271 | 4.692 | 0.114 | 0.198 | 25% of observations to SHK |
| 0.259 | 2.969 | 23.291 | 4.850 | 0.116 | 0.086 | both ORI and SHK |
| 0.030 | 2.501 | 2.902 | 4.244 | 0.050 | 0.078 | both ORI and SHK with 25% additional observations from Mt. Stromlo |

Table 1: Additional ranging sites and stations increase the precision of the normal points for the same measurement precision due to better geometrical coverage. The precision in meters on the six degrees of freedom of a typical normal point is shown for several possible ranging scenarios.
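A stripped-down version of this analysis can be reproduced in a few lines. The station and reflector coordinates and the observation shares below are rough values assumed for illustration, so the resulting sigmas only qualitatively mirror the first row of Table 1 (X well determined; Z and RotX poorly determined):

```python
import numpy as np

RE, RL, D = 6.371e6, 1.738e6, 3.844e8     # Earth radius, lunar radius, distance (m)

def sta(lat, lon):                         # ground station, Earth-centered frame
    la, lo = np.radians([lat, lon])
    return RE * np.array([np.cos(la)*np.cos(lo), np.cos(la)*np.sin(lo), np.sin(la)])

def refl(lat, lon):                        # reflector offset from the lunar CM
    la, lo = np.radians([lat, lon])
    # Moon on the +X axis, near side facing Earth (-X direction)
    return RL * np.array([-np.cos(la)*np.cos(lo), np.cos(la)*np.sin(lo), np.sin(la)])

def partials(st, u):
    """d(range)/d(X, Y, Z, RotX, RotY, RotZ); rotations scaled to meters by RL."""
    r = np.array([D, 0.0, 0.0]) + u - st
    e = r / np.linalg.norm(r)              # station-to-reflector unit vector
    return np.concatenate([e, [e @ np.cross(a, u) / RL for a in np.eye(3)]])

stations   = {"MLRS": sta(30.7, -104.0), "OCA": sta(43.8, 6.9)}       # approximate
reflectors = {"AP15": refl(26.1, 3.6), "AP11": refl(0.7, 23.5),
              "AP14": refl(-3.6, -17.5)}                              # approximate
share = {("MLRS", "AP15"): .69, ("MLRS", "AP11"): .08, ("MLRS", "AP14"): .07,
         ("OCA",  "AP15"): .12, ("OCA",  "AP11"): .02, ("OCA",  "AP14"): .01}
sigma = 0.025                              # assumed 2.5 cm per observation

N = np.zeros((6, 6))
for (s, r), w in share.items():
    p = partials(stations[s], reflectors[r])
    N += w * np.outer(p, p) / sigma**2     # weighted normal equations

for name, v in zip("X Y Z RotX RotY RotZ".split(),
                   np.sqrt(np.diag(np.linalg.inv(N)))):
    print(f"{name:5s} {v:10.3f} m")        # the normal matrix is ill-conditioned
```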
## 5 Retroreflectors
Five retroreflector arrays were placed on the Moon in the period 1969-1973. Three were placed by US astronauts during the Apollo missions (11, 14, and 15), and two were sent on Russian Lunokhod landers. The Apollo 11 and 14 arrays consist of 100 fused silica "circular opening" cubes (diameter 3.8 cm each) with a total estimated lidar cross section of 0.5 billion square meters. Apollo 15 has 300 of these cubes and therefore about 3 times the lidar cross section, making it the lunar array with the highest response. Because the velocity aberration at the Moon is small, the cubes' reflective face angles were not intentionally spoiled (made to deviate from 90 degrees).
The two Lunokhod arrays each consist of 14 triangular cubes with 11 cm sides. Shortly after landing, the Lunokhod 1 array ceased to be a viable target; no ground station has since been able to get returns from it. It is also very difficult to get returns from Lunokhod 2 during the day. The larger size of the Lunokhod cubes makes them less thermally stable, which dramatically reduces their optical performance when sunlit.
Since 1969, multiple stations have successfully ranged to the lunar retroreflectors. Some of these stations are listed in Table 2 along with their system characteristics. However, there have been only two stations continuously ranging to the Moon since the early 1970s: OCA in Grasse, France, and MLRS in Texas. The vast majority of their lunar data comes from the array with the highest lidar cross section, Apollo 15.
The difficulty in getting LLR data is due to the distance to the Moon, coupled with the \\(1/r^{4}\\) losses in the signal and the technology available at the ground stations. MLRS achieves an expected return rate from Apollo 15 of about one return per minute. Increasing the lidar cross section of the lunar arrays by a factor of 10 would correspond to a factor of 10 increase in the return data rate. This can be achieved by making arrays with 10 times more cubes than Apollo 15 or by changing the design of the cubes. One possibility is increasing the cube size. The lidar cross section of a cube with a diameter twice that of an Apollo cube would be 16 times larger. However, simply making solid cubes larger increases their weight by the ratio of the diameter cubed. Additional size also adds to thermal distortions and decreases the cube's divergence: a very narrow divergence will cause the return spot to completely miss the station due to velocity aberration. Spoiling can compensate for the velocity aberration but reduces the effective lidar cross section. Changing the design of the cubes, such as making them hollow, may be a better alternative. For example, 300 unspoiled 5 cm beryllium hollow cubes would have a total mass less than that of Apollo 15 but would have 3x higher lidar cross section.
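Putting the scaling arguments of the preceding paragraph in numbers (D^4 cross-section and D^3 mass scaling for solid cubes, a diffraction-limited return lobe of width roughly lambda/D, and an assumed velocity aberration of about 5 microradians):

```python
import numpy as np

lam, v_ab = 532e-9, 5e-6          # wavelength (m); velocity aberration ~4-6 urad

for name, Dc in [("Apollo-type, 3.8 cm", 0.038), ("2x diameter, 7.6 cm", 0.076)]:
    sigma_rel = (Dc / 0.038) ** 4  # on-axis lidar cross section scales as D^4
    mass_rel  = (Dc / 0.038) ** 3  # solid-cube mass scales as D^3
    theta_d   = lam / Dc           # diffraction width of the return lobe (rad)
    print(f"{name}: sigma x{sigma_rel:3.0f}, mass x{mass_rel:2.0f}, "
          f"lobe ~{theta_d*1e6:4.1f} urad vs aberration ~{v_ab*1e6:.0f} urad")
```

For the doubled cube the return lobe narrows to roughly the aberration offset itself, which is why the 16x cross-section gain cannot be realized without spoiling or a different cube design.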
An option being investigated at Goddard is to replace solid glass cubes with hollow cubes, which weigh much less than their solid counterparts. Thermal distortions are smaller, especially in hollow cubes made of beryllium, so the cubes can be made larger without sacrificing optical performance. Hollow cubes (built by PLX) flew on the Japanese ADEOS satellite and on the Air Force Relay Mirror Experiment, but are generally not used on satellites for laser ranging. This is due in part to the lack of optical performance test data on these cubes under expected thermal conditions, but also because of early investigations which showed that hollow cubes were unstable at high temperatures. Advances in adhesives and other techniques for bonding hollow cubes make it worthwhile to reinvestigate them. Testing done for Goddard by ProSystems showed that hollow cubes (with faces attached via a method that is being patented by ProSystems) can survive thermal cycles from room temperature to 150 degrees Celsius. Testing has not yet been done at cold temperatures. Preliminary mechanical analyses indicate that the optical performance of hollow beryllium cubes would be more than sufficient for laser ranging.

Figure 2: Hollow retroreflectors can potentially be used to build large cross-section lightweight arrays.
## 6 Satellite Laser Ranging Stations
Satellite Laser Ranging began in 1964 at NASA's Goddard Space Flight Center. Since then it has grown into a global effort, represented by the International Laser Ranging Service (ILRS)[13] of which NASA is a participant. The ILRS includes ranging to Earth orbiting artificial satellites, ranging to the lunar reflectors, and is actively working toward supporting asynchronous planetary transponder ranging.
The ILRS lunar retroreflector capable stations have event timers with precisions of better than 50 picoseconds, and can tie their clocks to UTC to better than 100 nanoseconds. Most have arc-second tracking capabilities and large aperture telescopes (\\(>\\) 1 meter). Their lasers have very narrow pulse widths (\\(<\\) 200 psec) and most have high energy per pulse (\\(>\\) 50 mJ). All have the ability to narrow their transmit beam divergence to less than 50 µrad. The detectors have a relatively high quantum efficiency (\\(>\\) 15%). All current LLR systems range at 532 nm.
Clearly there is more than one way to increase the laser return rate from the Moon. One is to deploy higher response retroreflector arrays or transponders on the Moon. Another is to increase the capability of the ground stations. A third is to add more lunar capable ground stations. A combination of all these options would have the biggest impact.
The recent development of the Apache Point system, APOLLO [1], shows what a significant effect improving the ground station can have. Apache Point can theoretically achieve a thousand returns per minute from Apollo 15, versus the few-per-minute return rate from MLRS (see Table 2). Apache Point does this by using a very large aperture telescope, a somewhat higher laser output energy and fire rate, and a judicious geographical location (where the astronomical seeing is very good). Other areas that could also improve ground station performance are higher quantum efficiency single photon detectors (\\(>\\) 30% QE at 532 nm), higher repetition rate lasers (kilohertz versus tens of hertz), and the use of adaptive optics to maintain tight beam control.
| System | Telescope aperture (m) | Pulse energy exiting system (mJ) | Laser fire rate (Hz) | System transmission | Apollo 15 photoelectrons/min (link calculation) |
|---|---|---|---|---|---|
| MLRS | 0.76 | 60 | 10 | 0.5 | 4 |
| OCA (France) | 1.54 | 60 | 10 | 0.22 | 20 |
| Matera (Italy) | 1.5 | 22 | 10 | 0.87 | 60 |
| Apache Point | 3.5 | 115 | 20 | 0.25 | 1728 |

Table 2: Lunar retroreflector capable laser ranging stations and their expected return rate from the Apollo 15 lunar array. Link calculations use 1 billion square meters for Apollo 15's cross section, a mount elevation of 30 degrees, and a standard clear atmosphere (transmission = 0.7 at zenith) for all but Apache Point, where transmission = 0.85 at zenith. The laser divergence was taken to be 40 µrad for MLRS and 20 arcsec for the other systems. The detector quantum efficiency was assumed to be 30% for all systems.
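A link calculation of the kind behind the last column can be sketched with the standard retroreflector link equation. How the quoted system transmission and detector efficiency are apportioned between the transmit and receive paths is an assumption here, but with the MLRS entries it reproduces the ~4 photoelectrons/min of Table 2 (the other rows depend on per-station divergence details):

```python
import numpy as np

h, c, lam = 6.626e-34, 2.998e8, 532e-9
R, sigma  = 3.844e8, 1.0e9               # range (m); Apollo 15 cross section (m^2)
T = 0.7 ** (1.0/np.sin(np.radians(30)))  # one-way atmosphere at 30 deg elevation

# MLRS-like numbers from Table 2
E, rate, eta_sys, eta_q = 0.060, 10.0, 0.5, 0.3   # J, Hz, optics, detector QE
ap, div = 0.76, 40e-6                             # aperture (m), full divergence (rad)

n_tx    = E / (h * c / lam)                            # photons per pulse
fluence = n_tx * eta_sys * T / (np.pi * (R*div/2)**2)  # photons/m^2 at the Moon
n_rx    = fluence * sigma / (4*np.pi*R**2) * np.pi*(ap/2)**2 * T * eta_q
print(f"{n_rx:.4f} pe/pulse -> {n_rx*rate*60:.1f} photoelectrons/min")   # ~4-5
```

The two powers of R in this expression are the origin of the \\(1/r^{4}\\) loss quoted earlier for passive ranging.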
Higher cross section lunar retroreflectors may make it possible to use NASA's next generation of satellite laser ranging stations (SLR2000) for LLR. The prototype SLR2000 system is currently capable of single photon asynchronous laser transponder ranging, and will participate in both a 2-way asynchronous transponder experiment in 2007 and the 1-way laser ranging to the Lunar Reconnaissance Orbiter (LRO) in 2008-2009. Approximately ten SLR2000 stations are expected to be built and deployed around the world in the coming decade. Adding ten lunar laser ranging stations to the existing few would dramatically increase the volume of data as well as giving the data a wide geographical distribution. The global distribution of the new SLR2000 stations would be very beneficial to data collection from an asynchronous transponder on the Moon.
## 7 Laser Transponder
Laser transponders are currently being developed for satellite ranging, but they can also be deployed on the lunar surface. Transponders are active devices that detect an incoming signal and respond with a known or predictable response signal, and are used either to confirm the existence of the device or to determine positioning parameters such as range and/or time. For extraterrestrial applications, a wide range of electromagnetic radiation, such as radio frequency (RF), is used for this signal. To date, most spacecraft are tracked using RF signals, particularly in the S and X bands of the spectrum. NASA and several other organizations routinely track Earth orbiting satellites using optical satellite laser ranging (SLR). Laser transponders have approximately a \\(1/r^{2}\\) link loss, in contrast to the \\(1/r^{4}\\) loss of direct ranging, essentially because the signal is propagating in only one direction before being regenerated. In fact, it is generally considered that ranging beyond lunar distances is not practical using direct optical ranging to cube-corner reflectors. Laser transponders are in general more energy and mass efficient than RF transponders since they can work at single photon detection levels with much smaller apertures and beam divergences. A smaller beam divergence has the added benefit that there is less chance of interference with other missions, as well as making the link more secure should that be necessary. With the development and inclusion of laser communications for spaceflight missions, it is logical to include an optical transponder that uses the same opto-mechanical infrastructure such that it has minimal impact on the mission resources.
The simplest conceptual transponder is the synchronous or echo transponder. An echo transponder works by sending back a timing signal with a fixed delay from the receipt of the base-station signal. This device has the potential for the lowest complexity and autonomous operation with no RF or laser based communications channel. To enable this approach, an echo pulse must be created with a fixed offset delay that has less than 500 ps jitter from the arrival of the Earth station signal. This is very challenging given the current state-of-the-art in space-qualifiable lasers. Furthermore, several rugged and simple laser types would be excluded as candidates due to the lack of precision control of the pulse generation. The synchronous/echo transponder has a total link probability that is the joint probability of each direction's link probability (approximately the product of each).
Asynchronous Laser Transponders (ALT) have been shown analytically [14] and experimentally [15] to provide the highest link probability since the total link is the root-sum-square of each one-way link probability. Furthermore, they allow the use of free-running lasers on the spacecraft that operate at their most efficient repetition rates, are simpler, and are potentially more reliable. Fig. 3 shows a conceptual asynchronous laser transponder using an existing NASA SLR ground station, which is already precisely located and calibrated in the solar reference frame, and a spacecraft transponder that receives green photons (532 nm) and transmits near-infrared (NIR) photons (1064 nm). This diagram shows the spacecraft event times being downlinked on the RF (S-band) channel, but this could be done on the laser communication channel if one exists. This dual wavelength approach is being explored for reasons of technical advantage at the ground station, but may also be used to help remove atmospheric effects from the range data (due to the wavelength dependent index of refraction of air). Using the same wavelength for each direction is also possible. Expressions for recovering the range parameters from an asynchronous measurement can be found in reference 14 and in the parameter retrieval programs developed by Gregory Neumann for the Earth-MLA asynchronous transponder experiments [15, 16].
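For a static geometry and a linear clock model, those expressions reduce to sums and differences of the measured one-way intervals: the sum of (remote receive - ground fire) and (ground receive - remote fire) gives twice the light time, and their difference twice the clock offset. A simulated sketch (the range, clock offset, drift, and 50 ps timing noise are invented values for illustration, not those of refs. 14-16):

```python
import numpy as np

c = 299792458.0
rng = np.random.default_rng(1)

R_true, off_true, drift_true = 3.844e8, 2.5e-3, 1e-12   # m, s, s/s (assumed)
tau = R_true / c

tg = np.sort(rng.uniform(0, 600, 200))     # ground fire epochs (ground clock)
st = np.sort(rng.uniform(0, 600, 200))     # remote fire epochs (remote clock)

clock = lambda t: t + off_true + drift_true*t          # ground time -> remote clock
sr = clock(tg + tau) + rng.normal(0, 50e-12, 200)      # uplink arrivals (remote clock)
t_emit = (st - off_true) / (1 + drift_true)            # remote fire times, ground clock
tr = t_emit + tau + rng.normal(0, 50e-12, 200)         # downlink arrivals (ground clock)

# (sr - tg) + (tr - st) = 2*tau ;  (sr - tg) - (tr - st) = 2*offset(epoch)
up, dn = sr - tg, tr - st
tau_est = 0.5 * (up.mean() + dn.mean())
drift_est, off_est = np.polyfit(0.5*(tg + tr), 0.5*(up - dn), 1)

print(f"range error  : {c*tau_est - R_true:+.4f} m")
print(f"clock offset : {off_est:.9f} s   drift: {drift_est:.1e}")
```

With 50 ps single-shot noise the averaged range recovers to the millimeter level, while the linear fit separates the clock offset and frequency drift from the light time.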
Figure 3: In an asynchronous laser transponder system the ground and remote stations fire independently of each other, recording the pulse transmission and detection times. The data from the two sites are then combined to calculate the range.

An ALT will likely have systematic errors that will limit its long-term accuracy. A retroreflector array located near the ALT's lunar site should allow the study and calibration of the ALT's systematic errors. Performing this experiment on the Moon will be particularly important should this technology be adapted for Mars or other bodies where retroreflectors cannot be used.
Recently, two interplanetary laser transponder experiments were successfully demonstrated from the NASA Goddard Geophysical and Astronomical Observatory (GGAO) SLR facility. The first utilized the non-optimized Mercury Laser Altimeter (MLA) on the Messenger spacecraft and the second utilized the Mars Orbiter Laser Altimeter (MOLA) on the Mars Global Surveyor spacecraft. The Earth-MOLA experiment was a one-way link that set a new distance record of 80 M-km for detected signal photons. The Earth-MLA experiment was a two-way experiment that most closely resembles the proposed asynchronous laser transponder concept. This experiment demonstrated the retrieval of the clock offset, frequency drift, and range at 24 M-km using a small number of detected two-way events. These experiments have proven the concept of being able to point both transceivers, detect the photons, and retrieve useful parameters at low link margins.
The Lunar Reconnaissance Orbiter (LRO) mission includes a GSFC-developed laser ranger that will provide a one-way ranging capability. In this case the clock is assumed to be stable enough over one lunar orbit. The result is a range profile that is extremely precise but far less accurate than what a two-way asynchronous transponder would provide with its full clock solution.
An ALT conceptual design was developed as part of the LRO laser ranger trade study. Link analyses performed on this design showed that it is possible to make more than 500 two-way range measurements per second using a 20 mm aperture and a 10 microjoule/pulse, 10 kHz laser at the Moon together with the existing eye-safe SLR2000 telescope located at the GGAO. The ALT was not selected, due to the need for a very high readiness design for LRO, but the analysis did show its feasibility. It was also shown that many of the international SLR systems could participate with nominal receiver and software upgrades, thereby increasing the ranging coverage. The ALT increases the tracking/ranging availability of spacecraft since the link margins are higher than for direct ranging to reflector arrays.
## 8 Communication Terminal
A communications terminal conceptually represents the most capable kind of transponder ranging system and is at the other end of the spectrum in terms of complexity from the echo transponder. In general, a communications link of some kind is necessary to operate and recover data from a spacecraft or remote site. There are several potential benefits if the communications link can be made part of the ranging system, including savings in weight, cost, and complexity over implementations that use separate systems for each requirement.
As with other types of transponder systems, the active terminals for a full-duplex communications system mean that the loss budget for the ranging/communications link scales as \\(1/r^{2}\\) instead of \\(1/r^{4}\\), which is a substantial advantage. The communications link need not be symmetric in terms of data rate to achieve this benefit. Very often the uplink is relatively low bandwidth for command and control, and the downlink is at a higher data rate for dumping data. The ground antenna can be made substantially larger than the remote antenna both to make the receiver more sensitive and as a way to reduce the mass and the pointing and tracking requirements for the spacecraft, since a smaller antenna has a larger beam.
Forward Error Correction (FEC) is a technique that can improve the link budget and hence the range of the system. FEC is a signal processing technique that adds additional bits to the communications data stream through an algorithm that generates enough redundancy to allow errors to be detected and corrected. A wide variety of FEC algorithms can be used, but it is possible to get link budget gains of the order of 8 dB at the cost of 7% overhead on the data rate, even at data rates of several Gbits/sec. Gains of the order of 10 dB and higher are available at lower data rates, but at the cost of higher overhead. Generally the FEC algorithm may be optimized for the noise properties of the link if they are known.
A synchronous communications terminal must maintain a precise clock to be able to successfully recover the data. A remote terminal will recover the clock from the incoming data stream and phase-lock a local oscillator. All modern wide area terrestrial communications networks use synchronous techniques, so the techniques and electronics are well known and generally available. The advantage of having a stable reference clock that is synchronized to the ground terminal is that long times (and therefore long distances) may be measured with a precision comparable to that of the clock simply by counting bits in the data stream and carefully measuring the residual timing offset at the ground station. A maximal-length pseudorandom code can be used to generate a pattern with very simple cross-correlation properties that may be used to unambiguously determine the range, and the synchronous nature of the signal plus any framing or FEC structure imposed on the data stream mean that even long times may be measured with the same precision as the clock.
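A toy version of such code-based ranging, assuming a 127-chip maximal-length sequence from the feedback recurrence a_t = a_{t-1} XOR a_{t-7} (one of several valid tap choices), illustrates how the correlation peak pins down the delay unambiguously within one code period:

```python
import numpy as np

def mls(taps, n):
    """Maximal-length sequence from an n-bit Fibonacci LFSR (assumed taps)."""
    state = np.ones(n, dtype=int)
    out = np.empty(2**n - 1, dtype=int)
    for i in range(out.size):
        out[i] = state[-1]
        fb = np.bitwise_xor.reduce(state[taps])
        state = np.roll(state, 1)
        state[0] = fb
    return 2*out - 1                       # map {0,1} -> {-1,+1}

code = mls([0, 6], 7)                      # 127-chip m-sequence
true_delay = 53                            # unknown propagation delay, in chips
rx = np.roll(np.tile(code, 4), true_delay) # repeated code as seen at the receiver
rx = rx + np.random.default_rng(0).normal(0, 1.0, rx.size)   # additive noise

# circular cross-correlation over one code period; the peak marks the delay
corr = np.array([rx[:code.size] @ np.roll(code, k) for k in range(code.size)])
print("estimated delay (chips):", corr.argmax())   # -> 53
# two-way range = delay * chip period * c / 2 (modulo the code length)
```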
For optical communications terminals, it is almost as cost-effective to run at a high data rate as it is at a low data rate. The data rate might then reasonably be chosen for the timing precision instead of for the data downlink requirements. For example, a 10 Gbps data rate has a clock period of 100 picoseconds, which translates to a sub-millimeter distance precision with some modest averaging - just based on the clock. Specialized modulation formats such as phase-shift keying offer the possibility of optical phase-locking in addition to electrical phase-locking, which may allow further increases in precision.
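A quick check of that precision claim, assuming white timing noise that averages down as 1/sqrt(N):

```python
c = 2.998e8
t_clk = 1.0 / 10e9                  # 100 ps clock period at 10 Gbps
d_clk = c * t_clk / 2.0             # two-way ranging: ~1.5 cm per clock tick
n_avg = (d_clk / 0.5e-3) ** 2       # samples needed to average down to 0.5 mm
print(f"{d_clk*100:.1f} cm per tick; ~{n_avg:.0f} samples "
      f"(~{n_avg*t_clk*1e6:.2f} us of data) for 0.5 mm")
```

At 10 Gbps the roughly 900 samples required amount to well under a microsecond of data, which is indeed modest averaging.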
Spacecraft or satellites may be used as repeaters or amplifiers much as they are in terrestrial telecom applications, further extending the reach. Multiple communications terminals distributed around an object, such as a planet, offer the ability to measure more complicated motion than just a range and a change in range. A high data rate terminal might also be used as part of a communications network in space. In addition to serving as a fixed point for high precision ranging, it could also provide various communications functions such as switching and routing.
## 9 Conclusions
LLR has made great advances in the past 35 years. However, the amount of light returned by the current retroreflectors is so little that only the largest ranging stations can be used for this purpose; poor detection statistics remains the leading source of error. Thermal and orientation effects will ultimately limit range measurement to the Apollo retroreflectors. Measurements of the lunar librations are also limited by the poor geometric arrangement of the visible retroreflectors.
More precise range measurements to retroreflectors placed at sites far from the existing arrays will greatly improve the gravitational and lunar science discussed above. A number of improvements (such as higher cross section) can be made to the retroreflector designs to realize these gains. This natural extension to the Apollo instruments is likely to produce a solid incremental improvement to these scientific studies for many years to come.
To make a much larger leap in ranging accuracy, a laser transponder or communication terminal will most likely be required. The robust link margins will enable the use of much smaller ground stations, which would provide for more complete time and geometric coverage as more ranging stations could be used. An active system will also not be susceptible to the libration induced orientation errors.
An active laser ranging system can be considered a pathfinder for a Mars instrument, as it is likely to be the only way to exceed the meter level accuracy of current ranging data to Mars. Laser ranging to Mars can be used to measure the gravitational time delay as Mars passes behind the Sun relative to the Earth. With 1 cm precision ranging, the PPN parameter \\(\\gamma\\) can be measured to about \\(10^{-6}\\), ten times better than the Cassini result [17]. The Strong Equivalence Principle polarization effect is about 100 times larger for Earth-Mars orbits than for the lunar orbit. With 1 cm precision ranging, the Nordtvedt parameter, \\(\\eta=4\\beta-\\gamma-3\\), can be measured to between \\(6\\times 10^{-6}\\) and \\(2\\times 10^{-6}\\) for observations ranging between one and ten years [18]. Combined with the time delay measurements this leads to a measurement of PPN parameter \\(\\beta\\) to the \\(10^{-6}\\) level. Mars ranging can also be used in combination with lunar ranging to get more accurate limits on the time variation of the gravitational constant.
The ephemeris of Mars itself is known to meters in plane, but hundreds of meters out-of-plane [19]. Laser ranging would yield an order of magnitude better estimate, significant for interplanetary navigation. Better measurements of Mars' rotational dynamics could provide estimates of the core size [20]. The elastic tidal surface displacement is predicted to be less than 10 cm, within reach of laser ranging. There is also an unexplained low value of \\(Q\\), inferred from the secular decay of Phobos' orbit, that constrains the present thermal state of the Mars interior [21]. Laser ranging to Phobos would help solve this mystery.
## References
* [1] T. W. Murphy, Jr., J. D. Strasburg, C. W. Stubbs, E. G. Adelberger, J. Angle, K. Nordtvedt, J. G. Williams, J. O. Dickey, and B. Gillespie, in _12th International Workshop on Laser Ranging_ (2000).
* [2] T. Damour, Class. Quant. Grav. **13**, A33 (1996).
* [3] J. G. Williams, S. G. Turyshev, and D. H. Boggs, Phys. Rev. Lett. **93**, 261101 (2004).
* [4] M. Sereno and P. Jetzer, Phys. Rev. D **73**, 063004 (2006).
* [5] B. Bertotti, L. Iess, and P. Tortora, Nature **425**, 374 (2003).
* [6] J.-P. Uzan, Rev. Mod. Phys. **75**, 403 (2003).
* [7] K. Nordtvedt, Class. Quant. Grav. **18**, L133 (2001).
* [8] E. M. Standish, X. X. Newhall, J. G. Williams, and W. F. Folkner, JPL Planetary and Lunar Ephemerides, DE403/LE403, Jet Propulsion Laboratory internal report number JPL IOM 314.10-127 (1995).
* [9] A. Khan, K. Mosegaard, J. G. Williams, and P. Lognonne, J. Geophys. Res. **109**, E09007 (2004).
* [10] A. Khan and K. Mosegaard, Geophys. Res. Lett. **32**, L22203 (2005).
* [11] P. Shelus, personal communication.
* [12] J. G. Williams, S. G. Turyshev, and T. W. Murphy, Jr., Int. J. Mod. Phys. D **13**, 567 (2004).
* [13] M. R. Pearlman, J. J. Degnan, and J. M. Bosworth, Adv. Space Res. **30**, 135 (2002).
* [14] J. J. Degnan, J. Geodyn. **34**, 551 (2002).
* [15] D. E. Smith, M. T. Zuber, X. Sun, G. A. Neumann, J. F. Cavanaugh, J. F. McGarry, and T. W. Zagwodzki, Science **311**, 53 (2006).
* [16] G. A. Neumann, J. Cavanaugh, D. B. Coyle, J. F. McGarry, D. E. Smith, X. Sun, T. W. Zagwodzki, and M. T. Zuber, in _15th International Laser Ranging Workshop_ (2006).
* [17] S. G. Turyshev, J. G. Williams, M. Shao, J. D. Anderson, K. L. Nordtvedt, Jr., and T. W. Murphy, Jr., in _The 2004 NASA/JPL Workshop on Physics for Planetary Exploration_ (2004).
* [18] J. D. Anderson, M. Gross, K. L. Nordtvedt, and S. G. Turyshev, Astrophys. J. **459**, 365 (1996).
* [19] A. S. Konopliv, C. F. Yoder, E. M. Standish, D.-N. Yuan, and W. L. Sjogren, Icarus **182**, 23 (2006).
* [20] W. M. Folkner, C. F. Yoder, D. N. Yuan, E. M. Standish, and R. A. Preston, Science **278**, 1749 (1997).
* [21] B. G. Bills, G. A. Neumann, D. E. Smith, and M. T. Zuber, J. Geophys. Res. **110**, E07004 (2005).

Abstract: More precise lunar and Martian ranging will enable unprecedented tests of Einstein's theory of General Relativity as well as lunar and planetary science. NASA is currently planning several missions to return to the Moon, and it is natural to consider if precision laser ranging instruments should be included. New advanced retroreflector arrays at carefully chosen landing sites would have an immediate positive impact on lunar and gravitational studies. Laser transponders are currently being developed that may offer an advantage over passive ranging, and could be adapted for use on Mars and other distant objects. Precision ranging capability can also be combined with optical communications for an extremely versatile instrument. In this paper we discuss the science that can be gained by improved lunar and Martian ranging along with several technologies that can be used for this purpose.

Keywords: Lunar Ranging, General Relativity, Moon, Mars
# Developing an Efficient DMCIS with Next-Generation Wireless Networks
Al-Sakib Khan Pathan and Choong Seon Hong,
Manuscript received April 26, 2006. This work was supported in part by the MIC and ITRC projects. Dr. C. S. Hong is the corresponding author. Al-Sakib Khan Pathan is a graduate student and research assistant in the Networking Lab, Department of Computer Engineering, Kyung Hee University, South Korea (phone: +82 31 201-2987; fax: +82 31 204-9082; e-mail: [email protected]). Dr. Choong Seon Hong is a professor in the Department of Computer Engineering, Kyung Hee University, South Korea (phone: +82 31 201 2532; fax: +82 31 204-9082; e-mail: [email protected]).
## I Introduction
The increasing complexity of societies and the growing specialization in hazard management clearly demonstrate that no single authority or discipline can identify and address all of the significant consequences of hazards. This applies whether hazards are natural (earthquakes, extreme weather events, etc.), human-induced (such as nuclear or hazardous chemical accidents), or the interaction of both. The impact of hazards cuts across economic, social and political divisions in society, so the adequacy of the cumulative response is greatly influenced by the degree to which proactive as well as reactive actions can be effectively integrated and optimized. Successful integration of hazard reduction efforts, however, depends on the ability of organizations and individuals involved in all phases of the disaster management process (prevention, preparedness, response, and recovery/reconstruction) to work together to develop and implement solutions to commonly recognized problems. In this regard, a key factor in effective mitigation is the information and communication infrastructure, which contributes to building knowledge about hazards and to the interpretive processes that in turn shape the formulation of options for collective action. The basic building blocks of the communications infrastructure are various types of networks. The staggering growth of wireless networks, the plummeting costs of various types of telecommunications devices, and emerging next-generation wireless technologies add a new dimension and show great promise for an efficient and rapidly deployable information and communications infrastructure.
The intent of this paper is to explore the potential of next-generation wireless networks for developing an efficient Disaster Management Communications and Information System (DMCIS). In addition, the key issues for developing an efficient DMCIS are discussed.
This paper is organized as follows: following this introduction, Section II gives an overview of the Disaster Management Communications and Information System (DMCIS), Section III briefly introduces the emerging wireless network technologies and presents our proposed framework in detail, Section IV gives an analysis of the proposed system, Section V mentions some related works, and Section VI concludes the paper and outlines future work.
## II Disaster Management Communications and Information System (DMCIS) - An Overview
It takes immense capability and courage to confront the situations that arise when man-made or natural disasters, such as earthquakes, floods, plane crashes, high-rise building collapses, or major nuclear facility malfunctions, occur. In order to cope with such disasters in a fast and highly coordinated manner, the optimal provision of information concerning the situation is an essential pre-requisite. Police, fire departments, public health, civil defense and other organizations have to react not only efficiently and individually, but also in a coordinated manner [1]. To establish a controlled system, various types of information need to be stored in, and communicated among, the various hierarchy levels. Thus, an integrated communications and information system for managing disasters becomes an essential need, providing efficient, reliable and secure exchange and processing of relevant information.
Over the course of the past decade, tremendous changes to the global communication infrastructure have taken place, including the popular uptake of the Internet, the rapid growth and cost reduction of mobile telecommunications, and the implementation of advanced space-based remote sensing and satellite communication systems. Alongside these technologies, recent advancements in emerging wireless ad hoc and wireless sensor networks promise to transform the field of disaster management, with the ambitious goal of enhancing planning and reducing loss of life and property through improved communications.
In effect, two major developments have taken place within the last decade: a conceptual shift in disaster management towards more holistic and long-term risk reduction strategies, and a communication revolution that has increased dramatically both the accessibility of information and the functionality of communication technology for disaster management. While these shifts hold great promise for significantly reducing the impact of disasters, many issues remain to be addressed or resolved [2, 3, 4]. These include risk management and sustainable development, emergency telecommunications policy and appropriate technology transfer.
Two major categories of activities, different but closely dependent on each other, are involved in a Disaster Management Communications and Information System:
_--Pre-disaster activities:_ analysis and research (to improve the existing knowledge base), risk assessment, prevention, mitigation and preparedness
_--Post-disaster activities:_ response, recovery, rehabilitation, and reconstruction.
Accordingly, there are two categories of disaster-related data:
_--Pre-disaster:_ baseline data about locations and risks, or warning data

_--Post-disaster:_ real-time data about the impact of the hazard and the resources available to combat it
Decision making in disaster management, from risk analysis to the choice of appropriate countermeasures, can be greatly enhanced by the cross-sectional integration of information. For example, to understand the full short- and long-term implications of floods and to plan accordingly requires the analysis of combined data on meteorology, topography, soil characteristics, vegetation, hydrology, settlements, infrastructure, transportation, population, socio-economics and material resources. These data come from many different sources, and in most countries it is often difficult to bring them all together.
The components of a DMCIS include a number of databases that store various sorts of data and information: vulnerability assessments, demographic data, available resources, the impact factors of various types of disasters, and so on. A DMCIS could be used in three contexts:
_--Preparedness planning_
_--Mitigation_
_--Response & recovery_
For all of these contexts, the primary task is to provide valid, accurate and timely data from the disaster-affected areas to the decision-making centers, which in turn take the measures for mitigation and for recovery from the losses caused by disasters. Efficient networks can play the major role in this data gathering and coordination. In fact, next-generation wireless networks could effectively be used for this purpose.
## III An Efficient DMCIS Aided with Emerging Wireless Networks
### _Wireless Sensor and Ad Hoc Networks_
Wireless sensor networks and wireless ad hoc networks are two emerging technologies that show great promise for various future public applications. In this subsection, we briefly describe the major characteristics of these two types of network.
A wireless sensor network [5, 6] is a collection of hundreds or thousands of small sensing devices, or sensors, also known as wireless integrated network sensors (WINS) [7]. The integration of sensing circuitry, processing power, memory and a wireless transceiver makes a smart wireless sensor. These tiny devices are expected to contribute significantly to the field of networking and to be used in abundance for various practical purposes in the future, as they are suitable for sensing changes in many significant parameters such as heat, pressure, light, sound, soil makeup and the movement of objects. The sensors may use acoustic, seismic, infrared, thermal or visual mechanisms for detecting incidents. Hence a wireless sensor network consisting of these devices can be used effectively in hazardous environments for moving-object detection, environmental monitoring, surveillance, etc., and especially for collecting warning data on natural or, in some cases, human-induced disasters.
Wireless ad hoc networks, on the other hand, are self-organizing, dynamic-topology networks formed by a collection of mobile nodes through radio links [8]. Minimal configuration, the absence of infrastructure and quick deployment make them convenient for emergency situations. The major features of wireless ad hoc networks are wireless connectivity, mobile nodes, and ease, speed and flexibility of deployment, i.e., their ad hoc nature.
### _Details of our Framework_
To achieve the goal of timely, accurate and reliable data collection about disaster hotspots and emergency situations, we use both wireless ad hoc and wireless sensor networks in our framework. The framework is divided into four distinct levels, which interact with each other to increase the efficiency of the Disaster Management Communications and Information System. In this section, we present the details of our proposed framework.
_Level One - Deployment of Sensors and SDCCs._ Wireless sensor networks underpin level one of the framework. The primary task of level one is to collect raw data (i.e., sensor data) using the wireless sensor networks. Sensors are deployed in the crucial parts of disaster-prone areas such as river banks, seashores, hilly areas and other areas of interest. The sensors monitor the change of certain parameters and, if significant changes are detected (e.g., the water level in a river, which could provide warning of floods, or signs of earthquakes, tsunamis, cyclones, etc.), report them to the sink, or Sensor Data Collection Center (SDCC). Each disaster-prone area has at least one SDCC located nearby, which is equipped with computers for storing acquired data. The SDCCs are also equipped with wireless transceivers. Here we assume that the wireless sensors remain relatively fixed once they are deployed. The sensors in the network could be of different types (acoustic, magnetic, seismic, etc.), and a clustering approach like [9] could be used to form clusters of sensors. Though the major task of an SDCC is to store data, SDCCs could incorporate some local data-processing mechanisms. For such processing, there is a threshold value \(\tau\) representing the number of sensors that should send their sensor data to the SDCC. Let each sensor be assigned an id \(i\) and be represented by \(s_{i}\), where \(i=1,2,\ldots,N\). Hence, for a particular area assigned to one SDCC,
\\[\\tau\\ \\leq\\ \\sum_{i=1}^{N}{{\\it{S}}_{i}}\\quad\\textit{where,}\\ \\sum_{i=1}^{N}{{\\it{S}}_{i}}\\ >>1 \\tag{1}\\]
In addition to the data collected by the wireless sensor networks, other necessary data, such as demographic data, health-care information, and the available facilities and resources in the area, could be manually inserted into the SDCC. Figure 1 shows the level-one data-acquisition phase.
_Level Two - Wireless Ad Hoc Networks for Data Transmission to DPC._ In level two, Mobile Access Points (MAPs) play the major role. A MAP is a vehicle-mounted wireless access point which uses low-cost Wi-Fi (Wireless Fidelity) technology [10, 11]. When a MAP comes near an SDCC, a wireless ad hoc network is automatically formed and all the raw or partially processed data are downloaded into the MAP. The 802.11b Wi-Fi technology operates in the 2.4 GHz range, offering data speeds up to 11 megabits per second [12]. There are two other specifications that offer up to five times that raw data rate, or 54 Mbps. One is 802.11g, which operates on the same 2.4 GHz frequency band as 802.11b. The other alternative, 802.11a, occupies frequencies in the 5 GHz band. It offers less coverage range than either 802.11b or 802.11g, but offers up to 12 non-overlapping channels, compared to three for 802.11b or 802.11g, so it can handle more traffic than its 2.4 GHz counterparts [13]. Any of these specifications may be chosen for a particular area for transmitting stored data from an SDCC to the MAPs. These data are then carried by the MAPs and delivered to the DPCs (Data Processing Centers), located in nearby areas considered relatively safe from the disaster hotspots.
Several MAPs, denoted \(MAP_{j}\) with \(j=1,2,\ldots,J\) (where \(J\) is the maximum MAP id), are associated with each pair of SDCC and Data Processing Center (DPC).
\[\text{Here,}\quad\sum_{j=1}^{J}MAP_{j}\ \geq\ \sum_{r=1}^{R}SDCC_{r}\quad\text{and}\quad\sum_{j=1}^{J}MAP_{j}\ \geq\ \sum_{t=1}^{T}DPC_{t},\]
\[\text{but not necessarily}\quad\sum_{r=1}^{R}SDCC_{r}=\sum_{t=1}^{T}DPC_{t}. \tag{2}\]
Here, \(T\) and \(R\) are the maximum DPC and SDCC ids for a particular region. SDCC-DPC pairs could be cross-connected or overlapping for the reliability of the acquired data. The MAPs move around the SDCC(s) and collect data from them.
So, a Wi-Fi enabled MAP operates in two ways:
1) It forms a wireless ad hoc network when it comes close to an SDCC and collects data from the SDCC using Wi-Fi radio transceivers.
2) It again forms a wireless ad hoc network when it comes close to a DPC and delivers the raw (or partially processed) data to the DPC using Wi-Fi radio transceivers.
The major task of the MAPs is to ensure quick acquisition and delivery of raw or partially processed data about possible disasters, and to bridge the gap between the areas under threat and the safer areas from which the warning, preparedness and recovery instructions are issued.
Now, if an SDCC and a DPC represent the same center, or are sufficiently close, i.e.,
\[\text{if }SDCC_{r}=DPC_{t}\ \text{for some }r\text{ and }t,\quad\text{or if }\parallel SDCC_{r},DPC_{t}\parallel\ <\ \delta,\]
where \(\delta\) is the minimum threshold distance that needs to be maintained between a DPC-SDCC pair and \(\parallel x,y\parallel\) denotes the distance between two locations \(x\) and \(y\), then Mobile Access Points (MAPs) are not required for data collection and delivery, as the concerned SDCC and DPC can communicate directly using their wireless transceivers, or internally within the same center. The formation of ad hoc networks is likewise unnecessary in this case.
Fig. 1: Data Acquisition using Wireless Sensor Networks

_Level Three - Processing Acquired Data in Data Processing Centers._ As the DPCs are capable of wireless communications, all the incoming data from the MAPs are stored and processed in the DPCs. For data integrity and authenticity, all the DPCs are networked among themselves using wireless or wired connections, or a combination of both. A DPC also contains detailed past records, or a history, of disasters for the areas associated with it. For certain types of disasters the past records are crucial for judging or estimating the risk of that type of disaster occurring again. For example, issuing a flood warning not only requires current data about the water level, the flow path of the river, the characteristics of the river, etc., but also requires comparing all of this information with the records of the past few years for that particular region.
This data-processing step could be minimal in the case of some disasters, such as earthquakes or building collapses, since these can happen without warning, making response, recovery and relief the most important tasks. The operation of a DPC is shown in Figure 2. At this level, related data from other DPCs could also be collected using wired or wireless transmission. After the data are processed, a confidence threshold is checked; errors are detected against this predetermined threshold and, if necessary, the data are sent back for further processing. Once the processed data are ready, they are transmitted to the CDC (Central Data Center) for the next level. Wireless or wired transmission could be used for this data transfer.
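The control flow of this step can be sketched as follows (in Python; the helper functions are placeholders for deployment-specific processing, not part of the framework specification):

```python
# Sketch of the DPC loop: refine a batch against the confidence
# threshold, looping back when confidence is too low, and forward
# accepted batches to the CDC.

def process_against_history(data, history):
    # Placeholder: e.g., smooth readings against past records.
    return {**data, "passes": data.get("passes", 0) + 1}

def estimate_confidence(data):
    # Placeholder: confidence grows with each refinement pass.
    return min(1.0, 0.4 + 0.3 * data["passes"])

def dpc_pipeline(raw_batches, history, threshold=0.9, max_passes=3):
    accepted = []
    for batch in raw_batches:
        data, confidence = batch, 0.0
        while confidence < threshold and data.get("passes", 0) < max_passes:
            data = process_against_history(data, history)
            confidence = estimate_confidence(data)
        if confidence >= threshold:
            accepted.append(data)    # ready to transmit to the CDC
    return accepted

print(dpc_pipeline([{"water_level_cm": 412}], history=[]))
```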
_Level Four - Data Distribution and Response._ In this level, the processed data from the DPCs are gathered in the CDC. Of course,
\[\sum_{a=1}^{A}\ \sum_{t=1}^{T}DPC_{t}\ \gg\ c, \tag{3}\]
where \(c\) is the number of CDC(s) and \(a\) is the id of a particular area.
Now, the task of the CDC is to check the similarity of the incoming information to past records of disasters that have already occurred in that particular area. A reference database of past disasters helps in this case. Depending on the estimated probability of similar events occurring, the CDC requests the DCC (Decision and Command Center) to take responsive actions, which in turn could involve calling emergency departments such as police, fire and medical services. Figure 3 represents a CDC module. In this way, timely, processed and accurate data from the target areas can be supplied and the necessary preventive or reactive actions can be taken. To warn people about possible imminent disaster(s), the DCC sends information to the local mobile phone service providers, which disseminate the warning via SMS (Short Message Service) to their mobile subscribers. This sort of warning could be very helpful in facing disasters such as cyclones or tsunamis. In addition, the DCC could also use Internet messaging or other web services.
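A sketch of this decision step (all names, data and the matching rule are illustrative assumptions):

```python
# Sketch of the CDC decision step: compare a processed report with the
# reference database of past disasters for the same area and, above a
# match probability, request action from the DCC.

PAST_EVENTS = {  # hypothetical reference database, keyed by area id
    "area-7": [{"type": "flood", "water_level_cm": 400}],
}

def match_probability(report, area_id):
    events = PAST_EVENTS.get(area_id, [])
    if not events:
        return 0.0
    hits = sum(1 for e in events
               if abs(e["water_level_cm"] - report["water_level_cm"]) < 50)
    return hits / len(events)

def cdc_decide(report, area_id, threshold=0.5):
    if match_probability(report, area_id) >= threshold:
        return "request DCC: SMS warnings and emergency services"
    return "archive report"

print(cdc_decide({"water_level_cm": 412}, "area-7"))
```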
Depending upon the gravity of the situation, for example in emergencies such as tornados, flash floods, earthquakes, landslides or building collapses, the MAPs or the DPCs could use their wireless communications to call the emergency or other services directly, bypassing the CDC and DCC.
## IV Analysis of Our Framework
In this section, we analyze our framework to determine the efficiency of the DMCIS. The initial task of raw-data collection is done by the wireless sensor networks, and at least \(\tau\) sensor inputs are necessary. As stated in equation (1), in level one, \(\tau\leq\sum_{i=1}^{N}s_{i}\), where \(\sum_{i=1}^{N}s_{i}\gg 1\). Depending upon the type of disaster to be dealt with, the value of \(\tau\) could be set very close or equal to the value of \(\sum_{i=1}^{N}s_{i}\). For flood or tsunami warnings, it could be required that each and every sensor in the entire wireless sensor network of the region contribute, for better data analysis. This is done to make sure that wrong reports do not lead to the CDC issuing a false warning. As the sensors are deployed along river banks and seashores, some of them might send wrong readings about the water levels. In fact, a rise of the water level in one particular part of a river or sea alone may not indicate a flood or tsunami; for example, the water could be agitated by the movements of boats or ships and thus lead the threshold number (\(\tau\)) of sensors to report wrongly. In such cases, \(\tau\) should be set as close as possible to \(N\), or exactly \(N\). However, if we consider the failure or damage of some of the sensor nodes, it becomes impractical to set the value of \(\tau\) exactly equal to \(\sum_{i=1}^{N}s_{i}\). Also, to prevent the generation of false data in level one, a detailed analysis should be done before the wireless sensors are deployed in their respective positions.

Fig. 2: Operation Process for DPC

Fig. 3: CDC Module and DCC using Wireless Networks
In level two, Mobile Access Point carriers are used because in many cases it is difficult to set up wired networks to do the same task. The use of MAPs could definitely increase the cost-efficiency of raw-data transmission to the DPCs. The DPCs are set up in areas considered relatively safer than the areas where the SDCCs are located. The formation of wireless ad hoc networks between an SDCC and a MAP, or between a MAP and a DPC, can transfer data efficiently. Once the data are received by a DPC, fully processed, checked against the confidence threshold and sent to the CDC, the CDC does not have to worry about the fidelity of the data. It can quickly check the disaster reference database, store the data for future reference and send a request to the DCC, which in turn sends the warning messages to all the concerned units. Table I presents the various facets of our framework at a glance. The use of the existing mobile phone networks for delivering SMS is a cost-effective way to provide warning messages to the public prior to the occurrence of most natural disasters.
Also, the condition \(\sum_{a=1}^{A}\sum_{t=1}^{T}DPC_{t}\gg c\) indicates that, as the total number of DPCs is huge compared to the number of CDCs, data processing can be done in several DPCs in parallel when the amount of data is huge. In fact, disasters like tsunamis, or earthquakes in hilly areas, require a lot of data to be processed before an appropriate action can be taken. As a whole, we believe that our framework promises fast delivery and processing of data and can warn people about possible disastrous situations in advance using wireless communications.
## V Related Works
Reference [14] discusses some general issues in disaster management and presents a disaster-area architecture developed to crystallize and capture the information requirements of emergency-support-function managers in the field. Lee et al. [15] describe a template-driven design methodology for disaster management information systems which aims at archiving past disaster-relief operations; this template-based methodology could be useful for generating the disaster reference database. Kuwata et al. [16] propose a workflow model for a disaster response system consisting of four main steps: data acquisition, data analysis, decision support, and command and control. In [17] the author reviews emergency telecommunications and explores the role of information and communication technologies in disaster mitigation and humanitarian relief. In [18] the authors present a framework for data collection using sensor networks in disaster situations. Kamegawa et al. [19] developed an algorithm termed ADES which can share victims' information with all shelters; the purpose of their work is to form a wireless network system particularly for rescue activities in distressed areas.
Our work differs from all of these works in that we propose a detailed framework which incorporates various emerging wireless networks and database systems working together for disaster prevention, mitigation and damage control. It is also possible to implement the framework for a specific type of disaster.
## VI Conclusions
Communications and Information Technologies, skills, and media are essential to link scientists, disaster mitigation officials, and the public; to educate the public about disaster preparedness; to track approaching hazards; to alert authorities; to warn the people most likely to be affected; to assess damage; to collect information, supplies, and other resources; to coordinate rescue and relief activities; to account for missing people; and to motivate public, political or institutional responses.
In this paper we proposed a detailed framework for an efficient Disaster Management Communications and Information System which takes advantage of next-generation wireless networks. While the networks help deliver data from the disaster hotspots more quickly and reliably, other associated technologies, such as disaster prediction and forecasting, databases, web services, intelligent systems and image processing, should work collaboratively for tackling disasters successfully. Moreover, acquiring secured data at every step is crucial. The base of our framework is wireless sensor networks (WSNs), and security in WSNs is still a hot research issue. We are currently working on secured transmission of data at each level and between two adjacent levels, starting from level one of our framework.
## References
* [1] Meissner, A, Luckenbach, T, and Kirste, T., \"Design Challenges for an Integrated Disaster Management Communication and Information System\", _DIREN 2002 (co-located with IEEE INFOCOM 2002)_, New York City, June 24, 2002.
* [2] Chavez, E., Ide, R., Kirste, T., \"Interactive applications of personal situation aware assistants\". _Computers & Graphics_, Vol. 23, No. 6, 1999, pp. 903-915.
* [3] Vatsa, K. S., \"Technological Challenges of the Disaster Management Plan for the State of Maharastra\", _The Disaster Management Plan for the State of Maharastra_, Chapter 3, pp. 25-36. available online at: [http://ungan1.un.org/intradoc/groups/public/documents/APCITY/UNPA](http://ungan1.un.org/intradoc/groups/public/documents/APCITY/UNPA) N019012.pdf
* [4] \"Harnessing Information and Technology for Disaster Management\", _Disaster Information Task Force Report_, GDIN, November 1997. [http://www.westerndisastercenter.org/DOCUMENTS/DITF_Report.pdf](http://www.westerndisastercenter.org/DOCUMENTS/DITF_Report.pdf)
* [5] Saffo, P., "Sensors: the next wave of innovation", _Communications of the ACM_, Vol. 40, No. 2, Feb. 1997, pp. 92-97.
* [6] Akyildiz, I. F., Su, W., Sankarasubramaniam, Y., and Cayirci, E., "Wireless Sensor Networks: A Survey", _Computer Networks_, 38, 2002, pp. 393-422.
* [7] Agre, J. and Clare, J., \"An integrated architecture for co-operative sensing networks\", _IEEE Computer_, Volume 33 Issue 5, 2000, pp. 106-108.
* [8] Pathan, A-S. K., Alam, M., Monowar, M., and Rabbi, F., \"An Efficient Routing Protocol for Mobile Ad Hoc Networks with Neighbor Awareness and Multicasting\", _in Proc. IEEE E-Tech_, Karachi, Pakistan, 31 July, 2004, pp. 97-100.
* [10] Balachandran, A., Voelker, G. M., and Bahl, P., \"Wireless Hotspots: Current Challenges and Future Directions\", _in WMASH'03_, San Diego, California, USA, September 19 2003, ACM, pp. 1-9.
* [11] - Perfect Synergy", in _CHI 2004_, Vienna, Austria, April 24-29, 2004, ACM, pp. 1004-1018.
* [12] From [http://www.mobilecomms-technology.com/projects/ieeee802/](http://www.mobilecomms-technology.com/projects/ieeee802/)
* [13] Practical strategies for deploying Wi-Fi@Clients, Broadcom Corporation, Irvine, California, 2003, White paper found at, [http://www.dell.com/downloads/global/shared/broadcom_strategies.pdf](http://www.dell.com/downloads/global/shared/broadcom_strategies.pdf)
* [14] Phillip, G. and Hodge, R., \"Disaster area architecture: telecommunications support to disaster response and recovery\", _in Proc. IEEE Military Communications Conference, MILCOM 1995_, Volume 2, 5-8 November 1995, pp. 833-837.
* [15] Lee, J. and Bui, T., \"A template-based methodology for disaster management information systems\", _in Proc. of the 33rd Annual Hawaii International Conference on System Sciences_, vol. 2, January 4-7 2000, pp. 1-7.
* [17] Oh, E. H., \"Information and communication technology in the service of disaster mitigation and humanitarian relief\", _in Proc. of the 9th Asia-Pacific Conference on Communications_, APCC 2003, Volume 2, 21-24 Sept. 2003, pp. 730-733.
* [19] Kamegawa, M., Kawamoto, M., Shigeyasu, T., Urakami, M., and Matsuno, H., "A new wireless networking system for rescue activities in disasters -system overview and evaluation of wireless node", _in Proc. of the 19th International Conference on Advanced Information Networking and Applications_, AINA 2005, Volume 2, 28-30 March 2005, pp. 68-71.
# Quantum Monte Carlo, Density Functional Theory, and Pair Potential Studies of Solid Neon
N. D. Drummond
R. J. Needs
TCM Group, Cavendish Laboratory, University of Cambridge, J. J. Thomson Avenue, Cambridge CB3 0HE, United Kingdom
## I Introduction
One of the most important goals of _ab initio_ computational electronic-structure theory is the development of accurate methods for describing interatomic bonding. Quantum Monte Carlo (QMC) techniques are useful in this regard as they can provide a highly accurate description of electron correlation effects. Although QMC methods are computationally expensive, they can be applied to systems which are large enough to model condensed matter.
In this study we have considered solid neon, in which the bonding arises from the competition between short-range repulsion and the van der Waals attraction between the atoms. High-quality experimental measurements of the equation of state (EOS) exist, which can be used as reference data. Furthermore, various neon pair potentials have been developed using experimental and theoretical data, which can also be used for comparison purposes. Solid neon is therefore an ideal system in which to test the descriptions of van der Waals bonding and short-range repulsion offered by various theoretical methods.
We have calculated theoretical EOS's for crystalline neon using the QMC and density-functional theory (DFT) _ab initio_ electronic-structure methods as well as various interatomic pair potentials. Standard DFT methods do not describe van der Waals bonding accurately, but they might be expected to work quite well at high densities, where the short-range repulsion dominates. The high-pressure properties of neon are of some experimental interest, because neon is often used as a pressure-conducting medium in diamond-anvil-cell experiments.[1] We have therefore extended the range of our QMC EOS for neon to very high pressures (about 400 GPa).
The zero-point energy (ZPE) of the lattice-vibration modes makes a small but important contribution to the total energy of solid neon. We have therefore studied the lattice dynamics of solid neon within the quasiharmonic-phonon approximation and within the Einstein approximation, using DFT methods and pair potentials.
For some time there has been considerable interest in developing neon pair potentials in order to test the accuracy of theoretical methods for calculating the properties of materials.[2] We have performed a direct calculation of the neon pair potential using QMC. We compare the accuracy of the EOS predicted by this pair potential with the results obtained using other pair potentials, including one obtained from coupled-cluster CCSD(T) calculations.[3]
Detailed information about our computational methodologies is given in Sec. II and DFT calculations of the phase stability and band gap of solid neon are reported in Sec. III. The calculation of a neon pair potential using QMC is described in Sec. IV. The lattice dynamics of solid neon are studied in Sec. V. We compare the EOS's obtained using different methods in Sec. VI. Finally, we draw our conclusions in Sec. VII.
Hartree atomic units (a.u.) are used throughout, in which the Dirac constant, the magnitude of the electronic charge, the electronic mass, and \\(4\\pi\\) times the permittivity of free space are unity: \\(\\hbar=|e|=m_{e}=4\\pi\\epsilon_{0}=1\\).
## II Methodology
### DFT calculations
#### ii.1.1 DFT total-energy calculations
Our DFT calculations were performed using the castep plane-wave-basis code.[4] The local-density approximation (LDA) and Perdew-Burke-Ernzerhof (PBE) generalized-gradient-approximation[5] exchange-correlation functionals were used. The Ne\\({}^{8+}\\) ionic cores were represented by ultrasoft pseudopotentials.[4] The EOS calculations were performed using a \\(4\\times 4\\times 4\\) Monkhorst-Pack \\(\\mathbf{k}\\)-point mesh and a plane-wave cutoff energy of 200 a.u., for which the DFT energies have converged to about 7 significant figures. The self-consistent-field calculations were judged to have converged when the fractional change in the energy was less than \\(10^{-11}\\). The DFT band-gap calculations reported in Sec. III.2 were performed using the same parameters, except that the plane-wave cutoff energies ranged from 100 a.u. for the lowest densities to 800 a.u. for the highest densities.
#### ii.1.2 DFT force-constant calculations
We used the quasiharmonic approximation[6] to evaluate the DFT ZPE of the lattice-vibration modes and we used the method of finite displacements and the Hellmann-Feynman theorem to evaluate the density-dependent force constants. Symmetry and Newton's third law were imposed iteratively on the matrix of force constants.[7] The DFT force-constant calculations were carried out using a plane-wave cutoff energy of 60 a.u., a \\(3\\times 3\\times 3\\) Monkhorst-Pack \\(\\mathbf{k}\\)-point mesh, and ultrasoft pseudopotentials.[4] The force constants were converged to about \\(10^{-6}\\) a.u. with respect to the plane-wave cutoff energy and the \\(\\mathbf{k}\\)-point mesh. In the production force-constant calculations, the displacement of the neon atom from its equilibrium position was 2.12% of the nearest-neighbor distance in each case, which ensures that anharmonic effects are negligible. The force-constant calculations were carried out in both \\(2\\times 2\\times 2\\) and \\(3\\times 3\\times 3\\) supercells of the primitive unit cell, and the difference in the resulting ZPE's was found to be negligible. The dispersion curves shown in Sec. V were produced using a \\(3\\times 3\\times 3\\) supercell, while the ZPE's that were combined with the static-lattice EOS's were calculated in a \\(2\\times 2\\times 2\\) supercell.
#### ii.1.3 DFT orbital-generation calculations
DFT-LDA calculations were performed in order to generate orbitals for the trial wave functions used in the QMC calculations. The QMC calculations made use of relativistic Hartree-Fock neon pseudopotentials,[8; 9] and these were also used in the DFT orbital-generation calculations. The Hartree-Fock pseudopotentials are much harder than the ultrasoft pseudopotentials. Plane-wave cutoffs in excess of 250 a.u. were used in each orbital-generation calculation, so the DFT energy was converged to around \\(10^{-3}\\) a.u. This cutoff is very large by the normal standards of DFT calculations, but there is evidence that using large basis sets reduces the variance of the energy in QMC calculations.[10]
### QMC calculations
#### ii.2.1 VMC and DMC methods
In the variational quantum Monte Carlo (VMC) method, expectation values are calculated using an approximate trial wave function, the integrals being performed by a Monte Carlo technique. In diffusion quantum Monte Carlo[11; 12] (DMC) the imaginary-time Schrodinger equation is used to evolve an ensemble of electronic configurations towards the ground state. The fermionic symmetry is maintained by the fixed-node approximation,[13] in which the nodal surface of the wave function is constrained to equal that of a trial wave function. Furthermore, the use of nonlocal pseudopotentials to represent the Ne\\({}^{8+}\\) cores necessitates the use of the locality approximation,[14] which leads to errors that are second order in the quality of the trial wave function.[15]
Our QMC calculations were performed using the casino code.[16] The trial wave functions were of Slater-Jastrow form, with the orbitals in the Slater wave function being taken from DFT calculations and the free parameters in the Jastrow factor being optimized by minimizing the unreweighted variance of the energy.[17; 18] The DFT-generated orbitals were represented numerically using splines on a grid in real space rather than an expansion in plane waves in order to improve the scaling of the QMC calculations with system size.[19; 20] The Jastrow factors consisted of isotropic electron-electron, electron-nucleus, and electron-electron-nucleus terms.[21] The electron-electron terms describe long-ranged correlations and therefore play the most important role in describing van der Waals forces.
#### ii.2.2 Finite-size bias
The QMC simulations of crystalline neon were carried out in supercells of finite size subject to periodic boundary conditions. The electrostatic energy of each electron configuration was calculated using the Ewald method.[22] The QMC energy per atom obtained in a finite cell differs from the energy per atom of the infinite crystal due to _single-particle_ finite-size effects and _Coulomb_ finite-size effects. The former result from the fact that the allowed \\(\\mathbf{k}\\) points for the Bloch orbitals form a discrete lattice, so that the single-particle energy components change when the size of the simulation supercell is changed. The latter, which are the more important in insulators, are caused by the interaction of the charged particles with their periodic images. At any given instant, each electron feels itself to be part of an infinite crystal of electrons.[23; 24] The resulting bias is negative, and is generally believed to fall off as \\(1/N\\), where \\(N\\) is the number of atoms in the simulation cell.[25; 26; 11; 27]
In order to eliminate the finite-size bias, simulations were carried out in supercells consisting of \\(3\\times 3\\times 3\\) and \\(4\\times 4\\times 4\\) primitive unit cells. The error in the DFT results arising from the use of a \\(3\\times 3\\times 3\\)\\(\\mathbf{k}\\)-point mesh is small (about 0.0001 a.u.), so we conclude that single-particle finite-size effects are negligible. The assumed form of the Coulomb finite-size bias was therefore used to extrapolate the results to infinite system size. The static-lattice energy per atom in the infinite-system limit is given by
\\[E_{\\infty}^{\\mathrm{SL}}(V)=E_{N}^{\\mathrm{SL}}(V)+\\frac{b(V)}{N}, \\tag{1}\\]
where \\(E_{N}^{\\mathrm{SL}}(V)\\) is the Vinet fit (see Sec. II.4) to the DMC static-lattice energy-volume data obtained in a set of \\(N\\)-atom simulation supercells, \\(V\\) is the primitive-cell volume, and \\(b(V)\\) is a parameter determined by fitting. Since we only have energy-volume data for two different system sizes, \\(N\\) and \\(M\\), we may eliminate \\(b(V)\\) and write
\\[E_{\\infty}^{\\mathrm{SL}}(V)=\\frac{NE_{N}^{\\mathrm{SL}}(V)-ME_{M}^{\\mathrm{SL} }(V)}{N-M}. \\tag{2}\\]
The pressure due to the static-lattice energy at infinite system size is given by
\\[p_{\\infty}^{\\mathrm{SL}}(V)\\equiv-\\frac{dE_{\\infty}^{\\mathrm{SL}}}{dV}=\\frac{ Np_{N}^{\\mathrm{SL}}(V)-Mp_{M}^{\\mathrm{SL}}(V)}{N-M}, \\tag{3}\\]
where \\(p_{N}^{\\mathrm{SL}}(V)\\equiv-dE_{N}^{\\mathrm{SL}}/dV\\) is the static-lattice pressure in an \\(N\\)-atom simulation supercell.
The zero-temperature, static-lattice energy-volume curves of neon, calculated using DMC in different sizes of simulation supercell are shown in Figs. 1 and 2. Vinet EOS's are fitted to the data. The corresponding pressure-volume data are shown in Figs. 3 and 4. It can be seen that the DMC pressure-volume curves converge steadily with system size, and that the DMC pressure extrapolated to infinite system size using Eq. (3) is close to the pressure of the \\(4\\times 4\\times 4\\) supercell. This implies that the error introduced by the extrapolation is small, because the extrapolation is itself a small correction.
#### ii.2.3 Time-step bias
The fixed-node DMC Green's function is only exact in the limit of zero time step; the use of a nonzero time step biases the DMC energy. An example of the bias in the DMC energy of a pseudoneon crystal is shown in Fig. 5. On the other hand, as shown by the \\(2\\times 2\\times 2\\) supercell results in Figs. 3 and 4, the pressure is very insensitive to the time step. We used time steps of 0.005 a.u. and 0.02 a.u. in our production calculations for the \\(3\\times 3\\times 3\\) and \\(4\\times 4\\times 4\\) supercells, respectively. A time-step of 0.002 a.u. was used in the DMC pair-potential calculations. The target population was at least 320 configurations in each case, while a target population of 1000 configurations was used for the pair-potential-generation calculations.
### Pair-potential calculations
#### ii.3.1 Forms of neon pair potential
We have used the following forms of pair potential: (i) the HFD-B potential proposed by Aziz and Chen[28] with the parameter values given by Aziz and Slaman[29; 30]; (ii) the form of potential proposed for helium by Korona _et al._,[31] containing the parameter values determined by Cybulski and Toczylowski[3] using all-electron double-excitation coupled-cluster theory with a non-iterative perturbational treatment of triple excitations (CCSD(T)) and an av5z+ Gaussian basis set; and (iii) a fit of the potential of Korona _et al.[31]_ to our DMC energy data, as described in Sec. IV. We believe the HFD-B pair potential to be the most accurate neon pair potential in the literature to date.
#### ii.3.2 Static-lattice energy-volume curve using pair potentials
Let the pair potential between two neon atoms at \\(\\mathbf{R}\\) and \\(\\mathbf{R}^{\\prime}\\) be \\(\\phi(|\\mathbf{R}-\\mathbf{R}^{\\prime}|)\\). Let \\(A\\) be a large radius. We evaluate the static-lattice energy per atom as
\\[E^{\\mathrm{SL}}\\approx\\frac{1}{2}\\left(\\sum_{0<|\\mathbf{R}|<A}\\phi(|\\mathbf{R }|)+\\frac{N_{\\mathrm{Ne}}}{V}\\int_{A}^{\\infty}4\\pi r^{2}\\phi(r)\\,dr\\right), \\tag{4}\\]
where the \\(\\{\\mathbf{R}\\}\\) are the lattice sites and \\(N_{\\mathrm{Ne}}/V\\) is the number density of neon atoms. This expression becomes exact as \\(A\\) goes to infinity. The integral in Eq. (4) was evaluated analytically for each pair potential, while the sum was evaluated by brute force. \\(A\\) was increased until \\(E^{\\mathrm{SL}}\\) converged.
#### ii.3.3 Force-constant calculations using pair potentials
Pair potentials were used to generate force-constant data for a quasiharmonic[6] calculation of the ZPE of neon. The method of finite displacements[7] was used to generate the force constants in a finite supercell subject to periodic boundary conditions. It was ensured that the force constants were highly converged with respect to the size of the displacements and the number of periodic images of the neon atoms that contributed to the force constants. Following the evaluation of the force constants, the calculation of the ZPE proceeded as described in Sec. II.1.2.

Figure 1: (Color online) Low-density static-lattice DMC energy of FCC neon as a function of volume, evaluated in simulation supercells consisting of \(n\times n\times n\) primitive unit cells using different time steps \(\tau\).
### EOS models
Let \\(E(V)\\) be the total energy of a neon crystal as a function of primitive-cell volume \\(V\\). It has previously been noticed [32] that a Vinet EOS of the form
\\[E(V)=-\\frac{4B_{0}V_{0}}{(B_{0}^{\\prime}-1)^{2}}\\left(1-\\frac{3}{2}(B_{0}^{ \\prime}-1)\\left(1-\\left(\\frac{V}{V_{0}}\\right)^{1/3}\\right)\\right)\\exp\\left( \\frac{3}{2}(B_{0}^{\\prime}-1)\\left(1-\\left(\\frac{V}{V_{0}}\\right)^{1/3}\\right) \\right)+C, \\tag{5}\\]
where the zero-pressure volume \\(V_{0}\\), bulk modulus \\(B_{0}\\), pressure-derivative of the bulk modulus \\(B_{0}^{\\prime}\\), and integration constant \\(C\\) are fitting parameters, gives a better fit than a third-order Birch-Murnaghan EOS,
\\[E(V)=-\\frac{9}{16}B_{0}\\left((4-B_{0}^{\\prime})\\frac{V_{0}^{3}}{V^{2}}-(14-3B_ {0}^{\\prime})\\frac{V_{0}^{7/3}}{V^{4/3}}+(16-3B_{0}^{\\prime})\\frac{V_{0}^{5/3 }}{V^{2/3}}\\right)+C, \\tag{6}\\]
to DFT results for solid neon. In some cases the Vinet EOS gives a lower \\(\\chi^{2}\\) value when fitted to our DMC data than the Birch-Murnaghan EOS; in others it gives a higher \\(\\chi^{2}\\) value. For example, using DMC data obtained in simulation cells consisting of \\(2\\times 2\\times 2\\) primitive cells and a time step of 0.01 a.u., the Vinet and Birch-Murnaghan EOS models give \\(\\chi^{2}\\) values of 10.0914 and 35.3561, respectively, whereas at a time step of 0.0025 a.u. the EOS models give \\(\\chi^{2}\\) values of 20.6609 and 4.8967, respectively. The resulting pressure-volume curves are essentially indistinguishable in each case, however. To be consistent, we have fitted Vinet EOS's to all of our theoretical data.
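For reference, the Vinet energy of Eq. (5) and the corresponding pressure \(p=-dE/dV\) are simple to evaluate; the sketch below uses the DMC fit parameters quoted in Sec. VI (an illustrative evaluation, not the fitting code used here):

```python
import math

# Vinet form, Eq. (5), and its pressure p = -dE/dV. V0 is in a.u.; B0 is
# converted from GPa with 1 a.u. of pressure = 2.9421e4 GPa. The
# integration constant C is set to zero, since only derivatives of E(V)
# carry physical content.

V0, B0_GPa, B0p = 128.52597, 2.7539319, 7.6510744
B0 = B0_GPa / 2.9421e4                     # bulk modulus in a.u.
eta = 1.5 * (B0p - 1.0)

def E_vinet(V):
    """Eq. (5) with C = 0 (a.u. per primitive cell)."""
    x = (V / V0) ** (1.0 / 3.0)
    pref = -4.0 * B0 * V0 / (B0p - 1.0) ** 2
    return pref * (1.0 - eta * (1.0 - x)) * math.exp(eta * (1.0 - x))

def p_vinet(V):
    """p = -dE/dV for Eq. (5): the standard Vinet pressure (a.u.)."""
    x = (V / V0) ** (1.0 / 3.0)
    return 3.0 * B0 * (1.0 - x) / x ** 2 * math.exp(eta * (1.0 - x))

print(p_vinet(100.0) * 2.9421e4, "GPa")    # ~1.7 GPa at V = 100 a.u.
print(E_vinet(100.0) - E_vinet(V0))        # energy cost of compression
```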
## III DFT study of phase stability and band gap
### Phase transitions in solid neon
We have compared the DFT energies of face-centered cubic (FCC) and hexagonal close-packed (HCP) phases of solid neon. For HCP neon the lattice-parameter ratio \(c/a\) was optimized, but the optimal ratio always turned out to be \(\sqrt{8/3}\), which is the ratio appropriate for an ideal HCP lattice. The DFT energy difference between the FCC and HCP phases is typically less than 0.0005 a.u.: too small for us reliably to identify any phase transition. Experimentally, Hemley _et al._[33] have found that solid neon adopts the FCC phase up to pressures of at least 110 GPa at 300 K. We have therefore used the FCC lattice in all of our calculations, apart from those described in this section.

Figure 2: (Color online) Same as Fig. 1, but at higher densities.
Figure 3: (Color online) Low-density static-lattice pressure obtained by fitting Vinet EOS’s to the DMC energies obtained in simulation supercells consisting of \(n\times n\times n\) primitive unit cells and at different time steps \(\tau\). The pressure extrapolated to infinite system size is also shown.

Figure 4: (Color online) Same as Fig. 3, but at higher densities.
### Band gap of solid neon
The band gap of solid neon, calculated using DFT, is shown in Fig. 6. The band gap is large at the equilibrium volume, and increases significantly when the material is compressed. The DFT calculations predict that neon is still an insulator when it is compressed to a primitive-cell volume of 2 a.u., corresponding to a pressure of about 366 TPa. The use of the ultrasoft neon pseudopotential (with a core radius of 0.9 a.u.) probably causes the DFT results to become unreliable at such high densities; nevertheless, our results indicate that the metalization pressure of neon is of the order of hundreds of TPa. Hemley _et al._[33] concluded that neon remains a wide-gap insulator over the range of pressures that they studied using diamond-anvil cells (up to 110.4 GPa), while Hawke _et al._[34] used a magnetic-flux compression device to show that solid neon remains an insulator up to at least 500 GPa.
The DFT-LDA and DFT-PBE band gaps at the experimental equilibrium primitive-cell volume (150 a.u.) are 11.85 eV and 12.04 eV, respectively, compared with the experimentally determined value of 21.51 eV.[35] As usual, DFT substantially underestimates the band gap. The _GW_ band gap of 20.04 eV, calculated by Galamic-Mulaomerovic and Patterson, is relatively accurate.[36] The DMC method can also be used to perform highly accurate band-gap calculations,[37; 38] although we have not done this for neon.
## IV DMC-calculated pair potential for neon
The difference between the DMC pair potential, evaluated as the fixed-nucleus total energy of a neon dimer, and the HFD-B potential is shown in Fig. 7. The DMC energy data have been offset by a constant that was determined by fitting the data to a pair-potential model.[39] We have used the form of potential proposed by Korona _et al._ for helium,[31] which can be written as
\\[\\phi(r)=A\\exp\\left(-\\alpha r+\\beta r^{2}\\right)+\\sum_{n=3}^{8}f_{2n}(r,b)\\frac{ C_{2n}}{r^{2n}}, \\tag{7}\\]
where \\(r\\) is the separation of the neon atoms. The dispersion coefficients \\(C_{2n}\\) are taken from Cybulski and Toczylowski[3] (\\(C_{6}=6.28174\\), \\(C_{8}=90.0503\\), \\(C_{10}=1679.45\\), \\(C_{12}=4.18967\\times 10^{4}\\), \\(C_{14}=1.36298\\times 10^{6}\\), and \\(C_{16}=5.62906\\times 10^{7}\\)) and \\(f_{2n}(r,b)\\) is the damping function proposed by Tang and Toennies,[40]
\\[f_{2n}(r,b)=1-\\exp(-br)\\sum_{k=0}^{2n}\\frac{(br)^{k}}{k!}. \\tag{8}\\]
Figure 5: DMC energy of solid neon plotted against time step for a simulation supercell consisting of \\(3\\times 3\\times 3\\) FCC primitive unit cells. The primitive-cell volume is 85.75 a.u. 960 configurations were used in the DMC simulations.
\\(A\\), \\(\\alpha\\), \\(\\beta\\), and \\(b\\) are adjustable parameters, which were determined by a \\(\\chi^{2}\\) fit to the DMC data, as was the constant offset. The fitted parameter values (in a.u.) are \\(A=84.956788\\), \\(\\alpha=2.0683266\\), \\(\\beta=-0.11767673\\), and \\(b=2.6899868\\). The difference of the resulting pair potential with the HFD-B potential is shown in Fig. 7, as is the corresponding curve for the CCSD(T) data. It can be seen that over a wide range of separations the DMC pair potential lies closer to the HFD-B pair potential than the CCSD(T)-generated pair potential.
Figure 6: DFT band gap of FCC neon against the primitive-cell volume, calculated using the LDA and PBE exchange-correlation functionals.
Figure 7: (Color online) Neon pair potential, calculated using DMC. The statistical error bars on the DMC results are smaller than the symbols. A fit of the form of pair potential proposed by Korona _et al._[31] to the DMC data is also shown, as is the pair potential generated by Cybulski and Toczylowski using CCSD(T) theory.[3] All are plotted relative to the HFD-B pair potential of Aziz and Slaman.[29]
## V Lattice dynamics and zero-point energy
By comparing the results obtained using a Lennard-Jones potential in the harmonic approximation with the VMC [41] results obtained using the same potential by Hansen,[42] Pollock _et al._[43] have demonstrated that the harmonic approximation is valid for solid neon at high pressures. We consider two methods for calculating the ZPE: (i) the ZPE of quasiharmonic phonons can be evaluated in a supercell of several primitive cells, or (ii) the ZPE can be computed within the Einstein approximation by evaluating the quadratic potential felt by each atom as it is displaced from its equilibrium position with all the other atoms held fixed.
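Within the Einstein approximation the ZPE per atom reduces to \((3/2)\hbar\omega_{E}\); a minimal numerical sketch using the Einstein frequencies quoted in the caption of Fig. 8 (an illustrative check, not the production quasiharmonic calculation):

```python
# Einstein-model ZPE per atom: three oscillators of frequency omega_E
# give E_ZP = (3/2) hbar omega_E. Frequencies below are the Einstein
# values quoted in the Fig. 8 caption (primitive-cell volume 41.59375
# a.u.); 1 hartree = 27211.386 meV.

MEV_PER_HARTREE = 27211.386

def einstein_zpe(omega_mev):
    """ZPE per atom in hartree for an Einstein frequency given in meV."""
    return 1.5 * omega_mev / MEV_PER_HARTREE

print(einstein_zpe(87.57110))   # DFT-PBE: ~4.83e-3 a.u.
print(einstein_zpe(63.77843))   # HFD-B:   ~3.52e-3 a.u., comparable to
                                # the quasiharmonic value 0.003321155 a.u.
```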
Examples of phonon dispersion curves at two different densities are shown in Figs. 8 and 9. Inelastic neutron-scattering data [44] are also shown in Fig. 9. At high density the DFT-LDA and DFT-PBE dispersion curves are in good agreement, but at low density the DFT-LDA phonon frequencies are significantly lower than the DFT-PBE frequencies. Unstable (imaginary) phonon modes start to occur at a primitive-cell volume of about 133 a.u. in the LDA. By contrast, there are no unstable phonon modes, even at a primitive-cell volume of 182.25 a.u., when the PBE functional is used. The DFT and pair-potential results are in agreement at high densities, indicating that the DFT results are accurate in this regime. Overall, the DFT-PBE dispersion curves appear to be more accurate (that is, closer to the HFD-B and experimental results) than the DFT-LDA dispersion curves.
The pressure arising from the ZPE of solid neon as calculated using different methods is plotted relative to the HFD-B results in Fig. 10. Within DFT, the Einstein approximation is excellent. It can be seen that the difference between the LDA and PBE results is appreciable at low densities, but that the difference between the Einstein and quasiharmonic zero-point pressures is more significant at high densities. As expected from examination of the dispersion curves, the HFD-B zero-point pressure is closer to the DFT-PBE results than the DFT-LDA ones; nevertheless, all the zero-point-pressure results are in good agreement.
The DFT quasiharmonic zero-point pressures have been added to the corresponding static-lattice pressures to give the final EOS's. The DFT-PBE quasiharmonic zero-point pressure has been added to the DMC static-lattice pressure to give the final DMC EOS. For the pair potentials, the quasiharmonic zero-point pressure calculated using each pair potential has been added to the corresponding static-lattice pressure in order to obtain the final EOS.
Figure 8: (Color online) Phonon dispersion curves calculated using DFT and pair potentials for FCC neon at a primitive-cell volume of 41.59375 a.u. The Einstein frequencies evaluated using DFT-LDA, DFT-PBE, the HFD-B pair potential, the CCSD(T) pair potential, and the DMC pair potential are 83.10234 meV, 87.57110 meV, 63.77843 meV, 65.73911 meV, and 63.00858 meV, respectively. The quasiharmonic ZPE evaluated using the HFD-B potential is 0.003321155 a.u.
Figure 9: (Color online) The same as Fig. 8, but with a primitive-cell volume of 149.06894 a.u. (close to the experimental equilibrium density). Experimental data from Ref. [44] are also shown. The imaginary frequencies of unstable modes are plotted as negative numbers. The Einstein frequencies evaluated using DFT-PBE, the HFD-B pair potential, the CCSD(T) pair potential, and the DMC pair potential are 5.37853 meV, 4.07895 meV, 6.73975 meV, and 6.06113 meV, respectively. The quasiharmonic ZPE evaluated using the HFD-B potential is 0.000216603 a.u.

Figure 10: (Color online) Difference of zero-point pressure of FCC neon calculated using various methods and the HFD-B results for the zero-point pressure. (The noise is due to the fact that Monte Carlo methods were used to sample the first Brillouin zone when calculating the zero-point energy, and the resulting curve was differentiated numerically to obtain the zero-point pressure.)
## VI Zero-temperature EOS of Neon
Zero-temperature EOS's for solid neon, calculated using DFT-LDA, DFT-PBE, DMC [extrapolated to infinite system size using Eq. (3)], and pair potentials are shown in Fig. 11, and the differences of the theoretical EOS's with the experimental EOS are plotted in Fig. 12. (The low-density experimental pressure-volume data of Anderson _et al._[45] shown in Fig. 11 were obtained at 4.2 K, while the high-density experimental data of Hemley _et al._[33] were obtained at 300 K. Hemley _et al._ reduced their pressure-volume data to the zero-temperature isotherm using a Mie-Gruneisen model and fitted their results to a third-order Birch-Murnaghan EOS, with the zero-pressure primitive-cell volume and bulk modulus being the values obtained by Anderson _et al._ The resulting EOS[46] is valid at both low and high densities and is regarded as being the definitive experimental EOS.) The parameter values for a Vinet fit (Eq. (5)) to our DMC data (including the DFT-PBE ZPE) are \\(V_{0}=128.52597\\) a.u., \\(B_{0}=2.7539319\\) GPa, and \\(B^{\\prime}_{0}=7.6510744\\).
At low densities the DFT-LDA and DFT-PBE EOS's differ markedly. The strong dependence of the DFT results on the choice of exchange-correlation functional implies that the description of van der Waals bonding within DFT is unreliable, as one would expect, given the local nature of the approximations to the exchange-correlation functional. DMC produces a considerably more accurate EOS than DFT, suggesting that DMC is capable of giving a proper description of van der Waals bonding. The HFD-B pair potential[29] gives an EOS of similar accuracy to the DMC EOS at low to intermediate densities. At higher densities, the EOS calculated using DMC is better than any of the pair-potential EOS's. Although the difference between the DMC and experimental pressures is significant at high densities, it should be emphasized that the fractional error remains small.
The pair potential calculated using CCSD(T) theory[3] gives a poorer EOS than the DMC-generated pair potential. On the other hand, the EOS obtained using the DMC pair potential is significantly poorer than the HFD-B EOS. Taken together with the fact that the direct DMC EOS is excellent, this suggests that many-body interactions play a significant role in solid neon, and that such interactions are included to some extent in the HFD-B potential.
## VII Conclusions
We have performed DMC calculations of the energy of FCC solid neon as a function of the lattice constant and the energy of the neon dimer as a function of atomic separation. Other calculations using DFT methods and pair potentials have been performed to evaluate the ZPE and for comparison purposes.
We have calculated the phonon dispersion curves of solid neon using the DFT-LDA and DFT-PBE methods, the HFD-B pair potential, and CCSD(T)- and DMC-derived pair potentials. We believe the results obtained with the HFD-B pair potential are likely to be the most accurate. DFT-PBE gives more accurate dispersion curves than DFT-LDA, for which the phonon frequencies are too low. The dispersion curves obtained with the DMC pair potential are more accurate than those obtained using either DFT or the CCSD(T) pair potential. We have calculated the ZPE of solid neon using the DFT-LDA and DFT-PBE methods, and the HFD-B, CCSD(T), and DMC pair potentials, within the quasiharmonic approximation. At low pressures the ZPE depends on the calculation method used, but the contribution to the EOS is small, while at high pressures the dependence on the calculation method is relatively weak, although the contribution of the ZPE to the EOS is significant. The Einstein model gives ZPE's in very good agreement with the quasiharmonic values over the pressure range considered.

Figure 11: (Color online) EOS of FCC neon, obtained by experiment and various theoretical techniques.
We have calculated the zero-temperature EOS of solid neon using the DFT and DMC methods, including corrections for the ZPE. We have shown that the DFT results depend strongly on the choice of exchange-correlation functional, while the DMC results are close to the experimental EOS. We therefore have evidence that DMC gives a better description of van der Waals bonding in real materials than DFT. At high pressures the DMC EOS is closer to the experimental results than the EOS obtained using the HFD-B pair potential. However, the statistical errors of about 0.0002 a.u. in the DMC energy data for solid neon are too large to determine an accurate value for the lattice constant of the solid. We have shown that the neon pair potential determined by DMC calculations gives a more accurate EOS than the pair potential determined by CCSD(T) calculations, although the DMC pair-potential results are not as accurate as those obtained using the semiempirical HFD-B potential. Overall, our results demonstrate the accuracy and reliability of the DMC method and the high quality of the neon pseudopotentials that we have used.
## VIII Acknowledgments
Financial support has been provided by the Engineering and Physical Sciences Research Council (EPSRC), UK. Computing resources have been provided by the Cambridge-Cranfield High Performance Computing Facility. We thank J. R. Trail for providing the relativistic Hartree-Fock pseudopotential used in this work.
Figure 12: (Color online) Comparison of theoretical EOS’s of FCC neon. The difference of the calculated pressure is plotted against the experimental pressure as a function of volume.

## References

* (1) E. Gregoryanz, R. J. Hemley, H.-K. Mao, and P. Gillet, Phys. Rev. Lett. **84**, 3117 (2000); J. Li, H.-K. Mao, Y. Fei, E. Gregoryanz, M. Eremets, and C. S. Zha, Phys. Chem. Minerals **29**, 166 (2002); S. D. Jacobsen, H. Spetzler, H. J. Reichmann, and J. R. Smyth, Proc. Natl. Acad. Sci. USA **101**, 5867 (2004); J.-F. Lin, V. V. Struzhkin, S. D. Jacobsen, M. Y. Hu, P. Chow, J. Kung, H. Liu, H.-K. Mao, and R. J. Hemley, Nature **436**, 377 (2005).
* (2) _Rare Gas Solids_, ed. M. L. Klein and J. A. Venables, Academic (1977).
* (3) S. M. Cybulski and R. R. Toczylowski, J. Chem. Phys. **111**, 10520 (1999).
* (4) M. D. Segall, P. J. D. Lindan, M. J. Probert, C. J. Pickard, P. J. Hasnip, S. J. Clark, and M. C. Payne, J. Phys. Condens. Matter **14**, 2717 (2002).
* (5) J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. **77**, 3865 (1996).
* (6) D. C. Wallace, _Thermodynamics of crystals_, John Wiley and Sons, New York (1972).
* (7) G. J. Ackland, M. C. Warren, and S. J. Clark, J. Phys. Condens. Matter **9**, 7861 (1997).
* (8) J. R. Trail and R. J. Needs, J. Chem. Phys. **122**, 014112 (2005).
* (9) J. R. Trail and R. J. Needs, J. Chem. Phys. **122**, 174109 (2005).
* (10) D. Alfe, M. Alfredsson, J. Brodholt, M. J. Gillan, M. D. Towler, and R. J. Needs, Phys. Rev. B **72**, 014114 (2005).
* (11) D. M. Ceperley and B. J. Alder, Phys. Rev. Lett. **45**, 566 (1980).
* (12) W. M. C. Foulkes, L. Mitas, R. J. Needs, and G. Rajagopal, Rev. Mod. Phys. **73**, 33 (2001).
* (13) J. B. Anderson, J. Chem. Phys. **65**, 4121 (1976).
* (14) M. M. Hurley and P. A. Christiansen, J. Chem. Phys. **86**, 1069 (1987).
* (15) L. Mitas, E. L. Shirley and D. M. Ceperley, J. Chem. Phys. **95**, 3467 (1991).
* (16) R. J. Needs, M. D. Towler, N. D. Drummond, and P. Lopez Rios, casino version 2.0 User Manual, University of Cambridge, Cambridge (2005).
* (17) C. J. Umrigar, K. G. Wilson, and J. W. Wilkins, Phys. Rev. Lett. **60**, 1719 (1988).
* (18) N. D. Drummond and R. J. Needs, Phys. Rev. B **72**, 085124 (2005).
* (19) D. Alfe and M. J. Gillan, Phys. Rev. B **70**, 161101(R) (2004).
* (20) A. J. Williamson, R. Q. Hood, and J. C. Grossman, Phys. Rev. Lett. **87**, 246406 (2001).
* (21) N. D. Drummond, M. D. Towler, and R. J. Needs, Phys. Rev. B **70**, 235119 (2004).
* (22) P. P. Ewald, Ann. Physik **64**, 253 (1921).
* (23) D. Ceperley, Phys. Rev. B **18**, 3126 (1978).
* (24) P. R. C. Kent, R. Q. Hood, A. J. Williamson, R. J. Needs, W. M. C. Foulkes, and G. Rajagopal, Phys. Rev. B **59**, 1917 (1999).
* (25) F. H. Zong, C. Lin, and D. M. Ceperley, Phys. Rev. E **66**, 036703 (2002).
* (26) D. M. Ceperley and B. J. Alder, Phys. Rev. B **36**, 2092 (1987).
* (27) G. Rajagopal, R. J. Needs, A. James, S. D. Kenny, and W. M. C. Foulkes, Phys. Rev. B **51**, 10591 (1995).
* (28) R. A. Aziz and H. H. Chen, J. Chem. Phys. **67**, 5719 (1977).
* (29) R. A. Aziz and M. J. Slaman, Chem. Phys. **130**, 187 (1989).
* (30) The short-range modifications to the HFD-B potential proposed by Aziz and Slaman are irrelevant in this work, because the potential is only modified at a separation of less than 2.36 a.u., corresponding to an FCC primitive-cell volume of 9.32 a.u.
* (31) T. Korona, H. L. Williams, R. Bukowski, B. Jeziorski, and K. Szalewicz, J. Chem. Phys. **106**, 5109 (1997).
* (32) T. Tsuchiya and K. Kawamura, J. Chem. Phys. **117**, 5859 (2002).
* (33) R. J. Hemley, C. S. Zha, A. P. Jephcoat, H. K. Mao, L. W. Finger, and D. E. Cox, Phys. Rev. B **39**, 11820 (1989).
* (34) P. S. Hawke, T. J. Burgess, D. E. Duerre, J. G. Huebel, R. N. Keeler, H. Klapper, and W. C. Wallace, Phys. Rev. Lett. **41**, 994 (1978).
* (35) M. Runne and G. Zimmerer, Nucl. Instr. Meth. B **101**, 156 (1995).
* (36) S. Galamic-Mulaomerovic and C. H. Patterson, Phys. Rev. B **71**, 195103 (2005).
* (37) A. J. Williamson, R. Q. Hood, R. J. Needs, and G. Rajagopal, Phys. Rev. B **57**, 12140 (1998).
* (38) M. D. Towler, R. Q. Hood, and R. J. Needs, Phys. Rev. B **62**, 2330 (2000).
* (39) We have determined the constant offset to the energy data during the fitting procedure instead of subtracting the energy of two isolated neon atoms in order to improve the accuracy of the fit. The fixed-node error in a neon dimer is larger than the fixed-node error in an isolated neon atom. Therefore a direct evaluation of the absolute value of the pair potential is a (slight) systematic overestimate.
* (40) K. T. Tang and J. P. Toennies, J. Chem. Phys. **80**, 3726 (1984).
* (41) While Hansen\\({}^{42}\\) used the VMC method to calculate the ground-state of the _atoms_, interacting with Lennard-Jones potentials, we use the VMC method to calculate the ground state of the _electrons_ within the Born-Oppenheimer approximation.
* (42) J.-P. Hansen, Phys. Rev. **172**, 919 (1968).
* (43) E. L. Pollock, T. A. Bruce, G. V. Chester, and J. A. Krumhansl, Phys. Rev. B **5**, 4180 (1972).
* (44) J. Skalyo, Jr., V. J. Minkiewicz, G. Shirane, and W. B. Daniels, Phys. Rev. B **6**, 4766 (1972).
* (45) M. S. Anderson, R. Q. Fugate, and C. A. Swenson, J. Low Temp. Phys. **10**, 345 (1973).
* (46) The experimental EOS, taken from Ref. [33], is given by Eq. (6) with \\(B_{0}=1.097\\) GPa, \\(V_{0}=150.09153\\) a.u., and \\(B_{0}^{\\prime}=9.23\\).

We report quantum Monte Carlo (QMC), plane-wave density-functional theory (DFT), and interatomic pair-potential calculations of the zero-temperature equation of state (EOS) of solid neon. We find that the DFT EOS depends strongly on the choice of exchange-correlation functional, whereas the QMC EOS is extremely close to both the experimental EOS and the EOS obtained using the best semiempirical pair potential in the literature. This suggests that QMC is able to give an accurate treatment of van der Waals forces in real materials, unlike DFT. We calculate the QMC EOS up to very high densities, beyond the range of values for which experimental data are currently available. At high densities the QMC EOS is more accurate than the pair-potential EOS. We generate a different pair potential for neon by a direct evaluation of the QMC energy as a function of the separation of an isolated pair of neon atoms. The resulting pair potential reproduces the EOS more accurately than the equivalent potential generated using the coupled-cluster CCSD(T) method.
pacs: 64.30.+t,71.10.-w
# Nuclear constraints on the momenta of inertia of neutron stars
Aaron Worley, Plamen G. Krastev and Bao-An Li
Department of Physics, Texas A&M University-Commerce, Commerce, TX 75429, U.S.A. [email protected], [email protected], [email protected]
## 1 Introduction
Neutron stars exhibit a large array of extreme characteristics. Their properties and structure are determined by the equation of state (EOS) of neutron-rich stellar matter at densities up to an order of magnitude higher than those found in ordinary nuclei, see, e.g., Weber (1999) and Lattimer & Prakash (2004). Therefore, detailed knowledge of the EOS of neutron-rich nuclear matter over a wide range of densities is necessary for the study of neutron stars. For isospin asymmetric nuclear matter, various theoretical studies have shown that the energy per nucleon can be well approximated by
\\[E(\\rho,\\delta)=E(\\rho,\\delta=0)+E_{\\rm sym}(\\rho)\\delta^{2}+O(\\delta^{4}), \\tag{1}\\]
in terms of the baryon density \\(\\rho=\\rho_{n}+\\rho_{p}\\), the isospin asymmetry \\(\\delta=(\\rho_{n}-\\rho_{p})/(\\rho_{n}+\\rho_{p})\\), the energy per nucleon in symmetric nuclear matter \\(E(\\rho,\\delta=0)\\), and the bulk nuclear symmetry energy \\(E_{\\rm sym}(\\rho)\\). Here we report the results of a study on the moment of inertia and the core-crust transition density of neutron stars within well established formalisms in the literature using several EOSs constrained by the latest terrestrial heavy-ion reaction experiments.
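Numerically, Eq. (1) means that \\(E_{\\rm sym}(\\rho)\\) can be extracted from any calculated \\(E(\\rho,\\delta)\\) as half the second \\(\\delta\\)-derivative at \\(\\delta=0\\). A minimal Python sketch; the energy function here is a hypothetical stand-in built from a parabolic expansion around saturation (with \\(K_{0}=211\\) MeV, an assumed value quoted later in the text) plus the MDI \\(x=0\\) symmetry-energy fit also quoted later, not any microscopic result:

```python
def e_per_nucleon(rho, delta):
    """Hypothetical E(rho, delta) in MeV, standing in for a microscopic
    calculation: parabolic symmetric part (K0 = 211 MeV, assumed) plus
    the x = 0 MDI symmetry-energy fit quoted later in the text."""
    u = rho / 0.16
    return -16.0 + (211.0 / 18.0) * (u - 1.0) ** 2 + 31.6 * u ** 0.69 * delta ** 2

def symmetry_energy(e, rho, h=1e-3):
    """E_sym(rho) = (1/2) d^2E/d(delta)^2 at delta = 0, cf. Eq. (1),
    extracted by a central finite difference in delta."""
    return 0.5 * (e(rho, h) - 2.0 * e(rho, 0.0) + e(rho, -h)) / h**2

print(symmetry_energy(e_per_nucleon, 0.16))   # ~31.6 MeV by construction
```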
Presently, besides the possibilities of phase transitions into various non-nucleonic states, the behavior of nuclear matter under extreme densities, pressures and/or isospin-asymmetry is still highly uncertain and relies upon often rather controversial theoretical predictions. This circumstance introduces corresponding uncertainties in the EOS of neutron-rich nuclear matter and thus limits our ability to understand many key issues in astrophysics. While astrophysical observations can also limit the EOS of neutron-rich nuclear matter, terrestrial laboratory experiments provide complementary information and have their unique advantages. In this regard, it is especially interesting to mention that the collective flow and particle production in relativistic heavy-ion collisions have constrained the EOS of symmetric nuclear matter \\(E(\\rho,\\delta=0)\\) up to about five times the normal nuclear matter density to a narrow range (Danielewicz, Lacey, & Lynch, 2002). However, there are still many challenges and uncertainties in pinning down precisely the EOS of neutron-rich nuclear matter. One of the major remaining uncertainties is the density dependence of the nuclear symmetry energy \\(E_{sym}(\\rho)\\), see e.g. Refs. (Lattimer & Prakash, 2004; Steiner et al., 2005; Chen et al., 2007) for recent reviews. To constrain the density dependence of the symmetry energy, many terrestrial nuclear experiments have recently been carried out or are planned. Depending on the techniques used, some experiments are more useful for exploring the symmetry energy at low densities while others are more effective at high densities. For instance, heavy-ion reactions, especially those involving radioactive beams, provide a unique means to probe the \\(E_{sym}(\\rho)\\) over a broad density range (Li et al., 1998; Li & Udo Schroeder, 2001; Baran et al., 2005; Chen et al., 2007). In fact, significant progress has been made very recently by studying the isospin diffusion (Shi & Danielewicz, 2003; Tsang et al., 2004; Chen et al., 2005; Li & Chen, 2005) and isoscaling (Tsang et al., 2001; Shetty, Yennello & Souliotis, 2007) in heavy-ion reactions at intermediate energies. The analysis of these phenomena, based on transport theories of heavy-ion reactions and thermodynamical models of nuclear multifragmentation, has limited the \\(E_{sym}(\\rho)\\) to a range much narrower than that spanned by the various forms of \\(E_{sym}(\\rho)\\) currently used in astrophysical studies in the literature. Moreover, the lower bound of the \\(E_{sym}(\\rho)\\) extracted from heavy-ion reactions is consistent with the RMF prediction using the FSUGold interaction, which can reproduce not only saturation properties of nuclear matter but also structure properties and giant resonances of many finite nuclei (Piekarewicz, 2007).
It is also well known that the sizes of neutron skins in heavy nuclei are sensitive to the symmetry energy at subsaturation densities, see, e.g., Refs. (Brown, 2000; Horowitz & Piekarewicz, 2001, 2002; Dieperink et al., 2003; Furnstahl, 2002; Steiner et al., 2005; Todd-Rutel & Piekarewicz, 2005; Steiner & Li, 2005; Chen et al., 2005). However, available data of neutron-skin thickness obtained using hadronic probes are not accurate enough to constrain significantly the symmetry energy. Interestingly, the parity radius experiment (PREX) at the Jefferson Laboratory aiming to measure the neutron radius in \\({}^{208}Pb\\) via parity violating electron scattering (Jefferson Laboratory Experiment E-00-003) (Horowitz et al., 2001) hopefully will provide much more precise data and thus constrain the symmetry energy at low densities more tightly in the near future. On the other hand, at supranormal densities, a number of potential probes of the symmetry energy have been proposed (Li et al., 1997; Li, 2000, 2002; Li et al., 1998; Chen et al., 2007). Moreover, several experiments to probe the high density behavior of the symmetry energy with high energy radioactive beams have been planned at the CSR/Lanzhou, FAIR/GSI, RIKEN and the NSCL/MSU.
While the EOS of neutron-rich nuclear matter has not been completely determined yet, it is still very interesting to examine astrophysical implications of the EOS constrained by the latest terrestrial laboratory experiments mentioned above. Global properties of spherically symmetric static (non-rotating) neutron stars have been studied extensively over many years; for recent reviews, see, e.g., Refs. (Lattimer & Prakash, 2000, 2004; Prakash et al., 2001; Yakovlev & Pethick, 2004; Heiselberg & Pandharipande, 2000; Heiselberg & Hjorth-Jensen, 2000; Steiner et al., 2005). However, properties of (rapidly) rotating neutron stars have been investigated to a lesser extent. Models of (rapidly) rotating neutron stars have been constructed only by several research groups with various degrees of approximation (Hartle, 1967; Hartle & Thorne, 1968; Friedman et al., 1986; Bombaci et al., 2000; Lattimer et al., 1990; Komatsu et al., 1989; Cook et al., 1994; Stergioulas & Friedman, 1995, 1998; Bonazzola et al., 1993, 1998; Weber, 1999; Ansorg et al., 2002) (see Stergioulas (2003) for a review). In a recent work (Krastev et al., 2008) we have reported predictions on the gravitational masses, radii, maximal rotational (Kepler) frequencies, and thermal properties of (rapidly) rotating neutron stars. In this work, using the nuclear constrained EOSs we calculate the momenta of inertia for both spherically-symmetric (static) and (rapidly) rotating neutron stars using well established formalisms in the literature. Such studies are important and timely as they are related to astrophysical observations expected in the near future. In particular, the moment of inertia of pulsar \\(A\\) in the extremely relativistic neutron star binary PSR J0737-3039 (Burgay et al., 2003) may be determined in a few years through detailed measurements of the periastron advance (Bejger et al., 2005).
## 2 The equation of state of neutron-rich nuclear matter constrained by recent data from terrestrial heavy-ion reactions
In this section, we first outline the theoretical tools one uses to extract information about the EOS of neutron-rich nuclear matter from heavy-ion collisions. We put special emphasis on exploring the density dependence of the symmetry energy, as the study of the EOS of symmetric nuclear matter with heavy-ion reactions is better known to the astrophysical community and has been extensively reviewed, see e.g., Refs. (Danielewicz, Lacey, & Lynch, 2002; Steiner et al., 2005; Lattimer & Prakash, 2007) for recent reviews. We will then summarize the latest constraints on the density dependence of the symmetry energy extracted from studying isospin diffusion and isoscaling in heavy-ion reactions at intermediate energies. Finally, we address the question of what kind of isospin-asymmetry, especially for dense matter, can be reached in heavy-ion reactions.
Heavy-ion reactions are a unique means to create in terrestrial laboratories dense nuclear matter similar to that found in the cores of neutron stars. Depending on the beam energy, impact parameter and the reaction system, various hadrons and/or partons may be created during the reaction. To extract information about the EOS of dense matter from heavy-ion reactions requires careful modelling of the reaction dynamics and selection of sensitive observables. Among the available tools, computer simulations based on the Boltzmann-Uehling-Uhlenbeck (BUU) transport theory have been very useful, see, e.g., Refs. (Bertsch & Das Gupta, 1988; Danielewicz, Lacey, & Lynch, 2002) for reviews. The evolution of the phase space distribution function \\(f_{i}(\\vec{r},\\vec{p},t)\\) of nucleon \\(i\\) is governed by both the mean field potential \\(U\\) and the collision integral \\(I_{collision}\\) via the BUU equation
\\[\\frac{\\partial f_{i}}{\\partial t}+\\vec{\\nabla}_{p}U\\cdot\\vec{\\nabla}_{r}f_{i}-\\vec{\\nabla}_{r}U\\cdot\\vec{\\nabla}_{p}f_{i}=I_{collision}. \\tag{2}\\]
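In test-particle implementations of Eq. (2), the left-hand (Vlasov) side is integrated by propagating an ensemble of test particles along the characteristics \\(\\dot{\\vec{r}}=\\vec{p}/m+\\vec{\\nabla}_{p}U\\) and \\(\\dot{\\vec{p}}=-\\vec{\\nabla}_{r}U\\), while the collision term is sampled stochastically. A minimal sketch of the mean-field propagation step only (Python; the harmonic, momentum-independent potential is purely illustrative, nonrelativistic kinematics is assumed, and the collision integral is omitted):

```python
import numpy as np

def vlasov_step(r, p, grad_r_U, grad_p_U, dt, m=938.0):
    """One kick-drift-kick step along the characteristics of the left-hand
    side of Eq. (2): dr/dt = p/m + grad_p U, dp/dt = -grad_r U."""
    p = p - 0.5 * dt * grad_r_U(r, p)
    r = r + dt * (p / m + grad_p_U(r, p))
    p = p - 0.5 * dt * grad_r_U(r, p)
    return r, p

# Illustrative momentum-independent harmonic mean field U = 0.5*k*r^2:
k = 2.0                                            # MeV fm^-2, arbitrary
grad_r_U = lambda r, p: k * r
grad_p_U = lambda r, p: np.zeros_like(p)

rng = np.random.default_rng(0)
r = rng.normal(0.0, 3.0, (1000, 3))                # positions, fm
p = rng.normal(0.0, 100.0, (1000, 3))              # momenta, MeV/c
for _ in range(200):
    r, p = vlasov_step(r, p, grad_r_U, grad_p_U, dt=0.5)   # dt in fm/c
```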
Normally, effects of the collision integral \\(I_{collision}\\) via both elastic and inelastic channels including particle productions, such as pions, are modelled via Monte Carlo sampling using either free-space experimental data or calculated in-medium cross sections for the elementary hadron-hadron scatterings (Bertsch & Das Gupta, 1988). Information about the EOS is obtained from the underlying mean-field potential \\(U\\), which is an input to the transport model. By comparing experimental data on some carefully selected observables with transport model predictions using different mean-field potentials corresponding to various EOSs, one can then constrain the corresponding EOS. The specific constraints on the density dependence of the nuclear symmetry energy that we are using in this work were obtained by analyzing the isospin diffusion data (Tsang et al., 2004) within the IBUU04 version of an isospin and momentum dependent transport model (Li et al., 2004). In this model, an isospin and momentum-dependent interaction (MDI) (Das et al., 2003) is used. With this interaction, the potential energy density \\(V(\\rho,T,\\delta)\\) at total density \\(\\rho\\), temperature \\(T\\) and isospin asymmetry \\(\\delta\\) is
\\[V(\\rho,T,\\delta)=\\frac{A_{u}\\rho_{n}\\rho_{p}}{\\rho_{0}}+\\frac{A_{l}}{2\\rho_{0}}(\\rho_{n}^{2}+\\rho_{p}^{2})+\\frac{B}{\\sigma+1}\\frac{\\rho^{\\sigma+1}}{\\rho_{0}^{\\sigma}}(1-x\\delta^{2})+\\sum_{\\tau,\\tau^{\\prime}}\\frac{C_{\\tau,\\tau^{\\prime}}}{\\rho_{0}}\\int\\int d^{3}pd^{3}p^{\\prime}\\frac{f_{\\tau}(\\vec{r},\\vec{p})f_{\\tau^{\\prime}}(\\vec{r},\\vec{p}^{\\prime})}{1+(\\vec{p}-\\vec{p}^{\\prime})^{2}/\\Lambda^{2}}. \\tag{3}\\]
In the mean field approximation, Eq. (3) leads to the following single particle potential for a nucleon with momentum \\(\\vec{p}\\) and isospin \\(\\tau\\)
\\[U_{\\tau}(\\rho,T,\\delta,\\vec{p},x)=A_{u}(x)\\frac{\\rho_{-\\tau}}{\\rho_{0}}+A_{l}(x)\\frac{\\rho_{\\tau}}{\\rho_{0}}+B\\left(\\frac{\\rho}{\\rho_{0}}\\right)^{\\sigma}(1-x\\delta^{2})-8\\tau x\\frac{B}{\\sigma+1}\\frac{\\rho^{\\sigma-1}}{\\rho_{0}^{\\sigma}}\\delta\\rho_{-\\tau}+\\sum_{t=\\tau,-\\tau}\\frac{2C_{\\tau,t}}{\\rho_{0}}\\int d^{3}\\vec{p}^{\\prime}\\frac{f_{t}(\\vec{r},\\vec{p}^{\\prime})}{1+(\\vec{p}-\\vec{p}^{\\prime})^{2}/\\Lambda^{2}}, \\tag{4}\\]
where \\(\\tau=1/2\\) (\\(-1/2\\)) for neutrons (protons), \\(x\\), \\(A_{u}(x)\\), \\(A_{l}(x)\\), \\(B\\), \\(C_{\\tau,\\tau}\\), \\(C_{\\tau,-\\tau}\\), \\(\\sigma\\), and \\(\\Lambda\\) are all parameters given in Ref. (Das et al., 2003). The last two terms in Eq. (4) contain the momentum dependence of the single-particle potential, including that of the symmetry potential if one allows for different interaction strength parameters \\(C_{\\tau,-\\tau}\\) and \\(C_{\\tau,\\tau}\\) for a nucleon of isospin \\(\\tau\\) interacting, respectively, with unlike and like nucleons in the background fields. It is worth mentioning that the nucleon isoscalar potential estimated from \\(U_{isoscalar}\\approx(U_{n}+U_{p})/2\\) agrees with the prediction of variational many-body calculations for symmetric nuclear matter (Wiringa, 1988) in a broad density and momentum range (Li et al., 2004). Moreover, the EOS of symmetric nuclear matter for this interaction is consistent with that extracted from the available data on collective flow and particle production in relativistic heavy-ion collisions up to five times the normal nuclear matter density (Danielewicz, Lacey, & Lynch, 2002; Krastev et al., 2008). On the other hand, the corresponding isovector (symmetry) potential can be estimated from \\(U_{sym}\\approx(U_{n}-U_{p})/2\\delta\\). At normal nuclear matter density, the MDI symmetry potential agrees very well with the Lane potential extracted from nucleon-nucleus and (n,p) charge exchange reactions available for nucleon kinetic energies up to about 100 MeV (Li et al., 2004). At abnormal densities and higher nucleon energies, however, there is no experimental constraint on the symmetry potential available at present.
The different \\(x\\) values in the MDI interaction are introduced to vary the density dependence of the nuclear symmetry energy while keeping other properties of the nuclear equation of state fixed. Specifically, choosing the incompressibility \\(K_{0}\\) of cold symmetric nuclear matter at saturation density \\(\\rho_{0}\\) to be 211 MeV leads to the dependence of the parameters \\(A_{u}\\) and \\(A_{l}\\) on the \\(x\\) parameter according to
\\[A_{u}(x)=-95.98-x\\frac{2B}{\\sigma+1},\\ A_{l}(x)=-120.57+x\\frac{2B}{\\sigma+1}, \\tag{5}\\]
with \\(B=106.35\\) MeV.
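Eq. (5) is straightforward to evaluate once \\(\\sigma\\) is fixed; the value \\(\\sigma=4/3\\) used below is our assumption based on the MDI parameterization of Das et al. (2003) and should be checked against that reference:

```python
B, SIGMA = 106.35, 4.0 / 3.0   # B in MeV; sigma = 4/3 assumed from Das et al. (2003)

def A_u(x):
    """A_u(x) in MeV, Eq. (5)."""
    return -95.98 - x * 2.0 * B / (SIGMA + 1.0)

def A_l(x):
    """A_l(x) in MeV, Eq. (5)."""
    return -120.57 + x * 2.0 * B / (SIGMA + 1.0)

for x in (1, 0, -1, -2):
    print(f"x = {x:2d}:  A_u = {A_u(x):8.2f} MeV,  A_l = {A_l(x):8.2f} MeV")
```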
With the potential contribution in Eq. 3 and the well-known contribution from nucleon kinetic energies in the Fermi gas model, the EOS and the symmetry energy at zero temperature can be easily obtained. As shown in Fig. 1, adjusting the parameter \\(x\\) leads to a broad range of the density dependence of the nuclear symmetry energy, similar to those predicted by various microscopic and/or phenomenological many-body theories. As demonstrated by Li & Chen (2005) and Li & Steiner (2006), only equations of state with \\(x\\) between -1 and 0 have symmetry energies in the sub-saturation density region consistent with the isospin diffusion data and the available measurements of the skin thickness of \\({}^{208}Pb\\) using hadronic probes. Moreover, it is interesting to note that the symmetry energy extracted very recently from the isoscaling analyses of heavy-ion reactions is consistent with the MDI calculation using \\(x=0\\) (Shetty, Yennello & Souliotis, 2007). The \\(E_{sym}(\\rho)\\) with \\(x=0\\) is also consistent with the RMF prediction using the FSUGold interaction (Piekarewicz, 2007). We thus consider only the two limiting cases with \\(x=0\\) and \\(x=-1\\) as boundaries of the symmetry energy consistent with the available terrestrial nuclear laboratory data.
Figure 1: The density dependence of the nuclear symmetry energy for different values of the parameter \\(x\\) in the MDI interaction. Taken from (Li et al., 2005).
To ease comparisons with other models in the literature, it is useful to parameterize the \\(E_{sym}(\\rho)\\) from the MDI interaction and list its characteristics. Within phenomenological models it is customary to separate the symmetry energy into the kinetic and potential parts, see, e.g. (Prakash et al., 1988),
\\[E_{sym}(\\rho)=(2^{2/3}-1)\\frac{3}{5}E_{F}^{0}(\\rho/\\rho_{0})^{2/3}+E_{sym}^{ \\rm pot}(\\rho). \\tag{6}\\]
With the MDI interaction, the potential part of the nuclear symmetry energy can be well parameterized by
\\[E_{sym}^{\\rm pot}(\\rho)=F(x)\\rho/\\rho_{0}+(18.6-F(x))(\\rho/\\rho_{0})^{G(x)}, \\tag{7}\\]
with \\(F(x)\\) and \\(G(x)\\) given in Table 1 for \\(x=1\\), 0, \\(-1\\) and \\(-2\\). The MDI parameterization of \\(E_{sym}^{\\rm pot}(\\rho)\\) is similar to, but significantly different from, that used by Prakash et al. (1988). Also shown in Table 1 are other characteristics of the symmetry energy, including its slope parameter \\(L\\) and curvature parameter \\(K_{sym}\\) at \\(\\rho_{0}\\), as well as the isospin-dependent part \\(K_{\\rm asy}\\) of the isobaric incompressibility of asymmetric nuclear matter (Chen et al., 2005). The symmetry energy in the subsaturation density region with \\(x=0\\) and \\(-1\\) can be roughly approximated by \\(E_{sym}(\\rho)\\approx 31.6(\\rho/\\rho_{0})^{0.69}\\) and \\(E_{sym}(\\rho)\\approx 31.6(\\rho/\\rho_{0})^{1.05}\\), respectively.
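The two subsaturation fits quoted above translate directly into code. A small sketch (Python); the kinetic coefficient of Eq. (6) uses an assumed Fermi energy \\(E_{F}^{0}\\approx 36.8\\) MeV at \\(\\rho_{0}=0.16~{}fm^{-3}\\):

```python
RHO0 = 0.16   # fm^-3

def esym_mdi(rho, x):
    """Total MDI symmetry energy (MeV) from the subsaturation fits quoted
    above: 31.6*u**0.69 for x = 0 and 31.6*u**1.05 for x = -1 (u = rho/rho0).
    These fits should only be trusted for rho <~ rho0."""
    gamma = {0: 0.69, -1: 1.05}[x]
    return 31.6 * (rho / RHO0) ** gamma

def esym_kin(rho, ef0=36.8):
    """Kinetic part of Eq. (6); ef0 is the assumed Fermi energy (MeV) at rho0."""
    return (2.0 ** (2.0 / 3.0) - 1.0) * 0.6 * ef0 * (rho / RHO0) ** (2.0 / 3.0)

# The potential part then follows by subtraction within the fits' range:
rho = 0.08
for x in (0, -1):
    print(f"x = {x}: E_sym = {esym_mdi(rho, x):.1f} MeV, "
          f"E_sym^pot = {esym_mdi(rho, x) - esym_kin(rho):.1f} MeV")
```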
The MDI EOS has been recently applied to constrain the mass-radius correlations of both static and rapidly rotating neutron stars (Li & Steiner, 2006; Krastev et al., 2008). In addition, it has also been used to constrain a possible time variation of the gravitational constant \\(G\\) (Krastev & Li, 2007) via the gravitochemical heating formalism developed by Jofre et al. (2006). For comparison, in this work we also apply EOSs from variational calculations with the \\(A18+\\delta v+UIX*\\) interaction (APR; Akmal et al., 1998), and recent Dirac-Brueckner-Hartree-Fock (DBHF) calculations (Alonso & Sammarruca, 2003; Krastev & Sammarruca, 2006) (DBHF+Bonn B) with the Bonn B One-Boson-Exchange (OBE) potential (Machleidt, 1989). Below the baryon density of approximately \\(0.07fm^{-3}\\) the equations of state are supplemented by a crustal EOS, which is more suitable for the low density regime. Namely, we apply the EOS by Pethick, Ravenhall & Lorenz (1995) for the inner crust and the one by Haensel & Pichon (1994) for the outer crust. At the highest densities we assume a continuous functional for the EOSs employed in this work. (See (Krastev & Sammarruca, 2006) for a detailed description of the extrapolation procedure for the DBHF+Bonn B EOS.) The saturation properties of the nuclear equations of state used in this paper are summarized in Table 2.
What is the maximum isospin asymmetry reached, especially in the supra-normal density regions, in typical heavy-ion reactions? How does it depend on the symmetry energy? Do both the density and isospin asymmetry reached have to be high simultaneously in order to probe the symmetry energy at supra-normal densities with heavy-ion reactions? The answers to these questions are important for us to better understand the advantages and limitations of using heavy-ion reactions to probe the EOS of neutron-rich nuclear matter and properly evaluate their impacts on astrophysics.
To answer these questions we first show in Fig. 2 the central baryon density (upper window) and the average \\((n/p)_{\\rho\\geq\\rho_{0}}\\) ratio (lower window) of all regions with baryon densities _higher than_\\(\\rho_{0}\\) in the reaction of \\({}^{132}Sn+^{124}Sn\\) at a beam energy of 400 MeV/nucleon and an impact parameter of 1 fm. It is seen that the maximum baryon density is about 2 times normal nuclear matter density. Moreover, the compression is rather insensitive to the symmetry energy because the latter is relatively small compared to the EOS of symmetric nuclear matter around this density. The high density phase lasts for about 15 fm/c from 5 to 20 fm/c for this reaction. It is interesting to see in the lower window that the isospin asymmetry of the high density region is quite sensitive to the density dependence of the symmetry energy used in the calculation. The soft (e.g., \\(x=1\\)) symmetry energy
\\begin{table}
\\begin{tabular}{l c c c c c}
EOS & \\(\\rho_{0}(fm^{-3})\\) & \\(E_{s}(MeV)\\) & \\(\\kappa(MeV)\\) & \\(e_{sym}(\\rho_{0})(MeV)\\) & \\(m^{*}(\\rho_{0})/m\\) \\\\ \\hline \\hline
MDI(x=0) & 0.160 & -16.08 & 211.00 & 31.62 & 0.67 \\\\
MDI(x=-1) & 0.160 & -16.08 & 211.00 & 31.62 & 0.67 \\\\
APR & 0.160 & -16.00 & 266.00 & 32.60 & 0.70 \\\\
DBHF+Bonn B & 0.185 & -16.14 & 259.04 & 33.71 & 0.65 \\\\ \\hline
\\end{tabular}
\\end{table}
Table 2: Saturation properties of the nuclear EOSs (for symmetric nuclear matter) employed in this work. Taken from (Krastev et al., 2008).
leads to a significantly higher value of \\((n/p)_{\\rho\\geq\\rho_{0}}\\) than the stiff one (e.g., \\(x=-2\\)). This is consistent with the well-known isospin fractionation phenomenon in asymmetric nuclear matter (Muller & Serot, 1995; Li & Ko, 1997). Because of the \\(E_{sym}(\\rho)\\delta^{2}\\) term in the EOS of asymmetric nuclear matter, it is energetically more favorable to have a higher isospin asymmetry \\(\\delta\\) in the high density region with a softer symmetry energy functional \\(E_{sym}(\\rho)\\). In the supra-normal density region, as shown in Fig. 1, the symmetry energy changes from being soft to stiff when the parameter \\(x\\) varies from 1 to -2. Thus the value of \\((n/p)_{\\rho\\geq\\rho_{0}}\\) becomes lower as the parameter \\(x\\) changes from 1 to -2. It is worth mentioning that the initial value of the quantity \\((n/p)_{\\rho\\geq\\rho_{0}}\\) is about 1.4 which is less than the average n/p ratio of 1.56 of the reaction system. This is because of the neutron-skins of the colliding nuclei, especially that of the projectile \\({}^{132}Sn\\). In the neutron-rich nuclei, the n/p ratio on the low-density surface is much higher than that in their interior. Also because of the \\(E_{sym}(\\rho)\\delta^{2}\\) term in the EOS, the isospin-asymmetry in the low density region is much lower than the supra-normal density region as long as the symmetry increases with density. In fact, as shown in Fig. 2 of Ref. (Li & Steiner, 2006), the isospin-asymmetry of the low density region can become much higher than the isospin asymmetry of the reaction system.
Figure 2: Central baryon density (upper panel) and isospin asymmetry (lower panel) of high density region in the reaction of \\({}^{132}\\rm{Sn}+^{124}\\rm{Sn}\\) at a beam energy of 400 MeV/nucleon and an impact parameter of 1 fm. Taken from Ref. (Li et al., 2005).
It is clearly seen that the dense region can become either neutron-richer or neutron-poorer with respect to the initial state depending on the symmetry energy functional \\(E_{sym}(\\rho)\\) used. As long as the symmetry energy increases with increasing density, the isospin asymmetry of the supra-normal density region is always lower than the isospin asymmetry of the reaction system. Thus, even with radioactive beams, the supra-normal density region cannot be both dense and neutron-rich simultaneously, unlike the situation in the core of neutron stars, unless the symmetry energy starts decreasing at high densities. The high density behavior of the symmetry energy is probably among the most uncertain properties of dense matter, as stressed in (Kutschera et al., 1993; Kutschera, 2000). Indeed, some predictions show that the symmetry energy can decrease with increasing density above a certain density and may eventually even become negative. This extreme behavior was first predicted by some microscopic many-body theories, see e.g., Refs. (Pandharipande & Garde, 1972; Wiringa, 1988a; Krastev & Sammarruca, 2006). It has also been shown that the symmetry energy can become negative at various high densities within the Hartree-Fock approach using the original Gogny force (Chabanat et al., 1997), the density-dependent M3Y interaction (Khoa et al., 1996; Basu et al., 2006) and about 2/3 of the 87 Skyrme interactions that have been widely used in the literature (Stone et al., 2003). The mechanism and physical meaning of a negative symmetry energy are still under debate and certainly deserve more studies.
Isospin effects in heavy-ion reactions are determined mainly by the \\(E_{sym}(\\rho)\\delta^{2}\\) term in the EOS. One expects a larger effect if the isospin-asymmetry is higher. Thus, ideally, one would like to have situations where both the density and isospin asymmetry are sufficiently high simultaneously as in the cores of neutron stars in order to observe the strongest effects due to the symmetry energy at supra-normal densities. However, since it is the product of the symmetry energy and the isospin-asymmetry that matters, one can still probe the symmetry energy at high densities where the isospin asymmetry is generally low with symmetry energy functionals that increase with density. Therefore, even if the high density region may not be as neutron-rich as in neutron stars, heavy-ion collisions can still be used to probe the symmetry energy at high densities useful for studying properties of neutron stars.
## 3 The moment of inertia of neutron stars
Employing the EOSs described briefly in Section 2, we compute the neutron star moment of inertia with the \\(RNS\\)1 code developed and made available to the public by Nikolaos Stergioulas (Stergioulas & Friedman 1995). The code solves the hydrostatic and Einstein's field equations for mass distributions rotating rigidly under the assumptions of stationarity, axial symmetry about the rotational axis, and reflection symmetry about the equatorial plane. \\(RNS\\) calculates the angular momentum \\(J\\) as (Stergioulas 2003)
Footnote 1: Thanks to Nikolaos Stergioulas, the \\(RNS\\) code is available as a public domain program at http://www.gravity.phys.uwm.edu/rns/
\\[J=\\int T^{\\mu\\nu}\\xi^{\\nu}_{(\\phi)}dV, \\tag{8}\\]
where \\(T^{\\mu\\nu}\\) is the energy-momentum tensor of stellar matter
\\[T^{\\mu\\nu}=(\\epsilon+P)u^{\\mu}u^{\\nu}+Pg^{\\mu\\nu}, \\tag{9}\\]
\\(\\xi^{\\nu}_{(\\phi)}\\) is the Killing vector in azimuthal direction reflecting axial symmetry, and \\(dV=\\sqrt{-g}d^{3}x\\) is a proper 3-volume element (\\(g\\equiv\\det(g_{\\alpha\\beta})\\) is the determinant of the 3-metric). In Eq. (9) \\(P\\) is the pressure, \\(\\epsilon\\) is the mass-energy density, and \\(u^{\\mu}\\) is the unit time-like four-velocity satisfying \\(u^{\\mu}u_{\\mu}=-1\\). For axial-symmetric stars it takes the form \\(u^{\\mu}=u^{t}(1,0,0,\\Omega)\\), where \\(\\Omega\\) is the star's angular velocity. Under this condition Eq. (8) reduces to
\\[J=\\int(\\epsilon+P)u^{t}(g_{\\phi\\phi}u^{\\phi}+g_{\\phi t}u^{t})\\sqrt{-g}d^{3}x \\tag{10}\\]
It should be noted that the moment of inertia cannot be calculated directly as an integral quantity over the source of gravitational field (Stergioulas 2003). In addition, there exists no unique generalization of the Newtonian definition of the moment of inertia in General Relativity and therefore \\(I=J/\\Omega\\) is a natural choice for calculating this important quantity.
For rotational frequencies much lower than the Kepler frequency (the highest possible rotational rate supported by a given EOS), i.e. \\(\\nu/\\nu_{k}\\ll 1\\) (\\(\\nu=\\Omega/(2\\pi)\\)), the deviations from spherical symmetry are very small, so that the moment of inertia can be approximated from spherical stellar models. In what follows we review briefly this slow-rotation approximation, see e.g. (Hartle 1967). In the slow-rotational limit the metric can be written in spherical coordinates as (in geometrized units \\(G=c=1\\))
\\[ds^{2}=-e^{2\\phi(r)}dt^{2}+\\left(1-\\frac{2m(r)}{r}\\right)^{-1}dr^{2}-2\\omega r^{2}\\sin^{2}\\theta dtd\\phi+r^{2}(d\\theta^{2}+\\sin^{2}\\theta d\\phi^{2}) \\tag{11}\\]
In the above equation \\(m(r)\\) is the total gravitational mass within radius \\(r\\) satisfying the usual equation
\\[\\frac{dm(r)}{dr}=4\\pi\\epsilon(r)r^{2} \\tag{12}\\]
and \\(\\omega(r)\\equiv(d\\phi/dt)_{ZAMO}\\) is the Lense-Thirring angular velocity of a zero-angular-momentum observer (ZAMO). Up to first order in \\(\\omega\\) all metric functions remain spherically symmetric and depend only on \\(r\\) (Morrison et al., 2004). In the stellar interior Einstein's field equations reduce to
\\[\\frac{d\\phi(r)}{dr}=\\frac{m(r)}{r^{2}}\\left[1+\\frac{4\\pi r^{3}P(r)}{m(r)}\\right]\\left[1-\\frac{2m(r)}{r}\\right]^{-1}\\quad(r<R_{star}) \\tag{13}\\]
and
\\[\\frac{1}{r^{3}}\\frac{d}{dr}\\left(r^{4}j(r)\\frac{d\\bar{\\omega}(r)}{dr}\\right)+4 \\frac{dj(r)}{dr}\\bar{\\omega}(r)=0\\quad(r<R_{star}), \\tag{14}\\]
with \\(\\bar{\\omega}\\equiv\\Omega-\\omega\\) the dragging angular velocity (the angular velocity of the star relative to a local inertial frame rotating at \\(\\omega\\)) and
\\[j\\equiv\\left(1-\\frac{2m(r)}{r}\\right)^{1/2}e^{-\\phi(r)} \\tag{15}\\]
Outside the star the metric functions become
\\[e^{2\\phi}=\\left(1-\\frac{2M}{r}\\right)\\quad(r>R_{star}) \\tag{16}\\]
and
\\[\\omega=\\frac{2J}{r^{3}}\\quad(r>R_{star}), \\tag{17}\\]
where \\(M=m(r=R)=4\\pi\\int_{0}^{R}\\epsilon(r^{\\prime})r^{\\prime 2}dr^{\\prime}\\) is the total gravitational mass and \\(R\\) is the stellar radius defined as the radius at which the pressure drops to zero (\\(P(r=R)=0\\)). At the star's surface the interior and exterior solutions are matched by satisfying the appropriate boundary conditions
\\[\\bar{\\omega}(R)=\\Omega-\\frac{R}{3}\\left(\\frac{d\\bar{\\omega}}{dr}\\right)_{r=R} \\tag{18}\\]
and
\\[\\phi(R)=\\frac{1}{2}\\ln\\left(1-\\frac{2M}{R}\\right) \\tag{19}\\]
The moment of inertia \\(I=J/\\Omega\\) then can be computed from Eq. (10). With \\(\\Omega=u^{\\phi}/u^{t}\\) and retaining only first order terms in \\(\\omega\\) and \\(\\Omega\\), the moment of inertia reads (Morrison et al., 2004; Lattimer & Prakash, 2000)
\\[I\\approx\\frac{8\\pi}{3}\\int_{0}^{R}(\\epsilon+P)e^{-\\phi(r)}\\left[1-\\frac{2m(r)}{r}\\right]^{-1}\\frac{\\bar{\\omega}}{\\Omega}r^{4}dr \\tag{20}\\]
This slow-rotation approximation for the neutron-star moment of inertia neglects deviations from spherical symmetry and is independent of the angular velocity \\(\\Omega\\) (Morrison et al., 2004). For neutron stars with masses greater than \\(1M_{\\odot}\\), Lattimer & Schutz (2005) found that, for slow rotations, the momenta of inertia computed through the above formalism (Eq. (20)) can be approximated very well by the following empirical relation:
\\[I\\approx(0.237\\pm 0.008)MR^{2}\\left[1+4.2\\frac{Mkm}{M_{\\odot}R}+90\\left(\\frac{ Mkm}{M_{\\odot}R}\\right)^{4}\\right] \\tag{21}\\]
The above equation is shown (Lattimer & Schutz, 2005) to hold for a wide class of EOSs except for ones with appreciable degree of softening, usually indicated by achieving a maximum mass of \\(\\sim 1.6M_{\\odot}\\) or less. Since none of the EOSs employed in this paper exhibit such pronounced softening, Eq. (21) is a good approximation for the momenta of inertia of _slowly_ rotating stars.
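To make the slow-rotation formalism concrete, the sketch below (Python with SciPy) integrates Eqs. (12)-(14) outward from the center, matches to the exterior solutions (17)-(18) at the surface, and returns \\(I=J/\\Omega\\), which it then compares against the empirical estimate of Eq. (21). A \\(\\Gamma=2\\) polytrope is assumed as a stand-in EOS (the paper itself uses the MDI, APR and DBHF+Bonn B EOSs), and the code exploits the identity \\(d\\ln j/dr=-4\\pi r(\\epsilon+P)/(1-2m/r)\\), which follows from Eqs. (13) and (15); since Eq. (14) is linear in \\(\\bar{\\omega}\\), the central value may be chosen arbitrarily.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geometrized units G = c = 1; lengths in km, so M_sun = 1.4766 km.
MSUN_KM = 1.4766
KM3_TO_GCM2 = 1.347e43        # c^2/G in g/cm, times 1e15 cm^3 per km^3

# Stand-in EOS: Gamma = 2 polytrope P = K*eps^2 (eps in km^-2).
K = 100.0
def eps_of_P(P):
    return np.sqrt(max(P, 0.0) / K)

def rhs(r, y):
    """y = [m, P, wbar, dwbar/dr]; TOV equations plus Eq. (14), using
    d(ln j)/dr = -4*pi*r*(eps+P)/(1-2m/r)."""
    m, P, w, dw = y
    eps = eps_of_P(P)
    f = 1.0 - 2.0 * m / r
    dm = 4.0 * np.pi * r**2 * eps
    dP = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r**2 * f)
    lam = -4.0 * np.pi * r * (eps + P) / f          # d(ln j)/dr
    ddw = -(4.0 / r + lam) * dw - 4.0 * lam * w / r
    return [dm, dP, dw, ddw]

def surface(r, y):            # integration stops where the pressure vanishes
    return y[1] - 1e-12
surface.terminal = True

def slow_rotation_inertia(eps_c):
    """M (M_sun), R (km), I (g cm^2) for central energy density eps_c (km^-2)."""
    r0 = 1e-4
    y0 = [4.0 * np.pi / 3.0 * r0**3 * eps_c, K * eps_c**2, 1.0, 0.0]
    sol = solve_ivp(rhs, (r0, 50.0), y0, events=surface, rtol=1e-8, atol=1e-12)
    R = sol.t[-1]
    M, _, w_R, dw_R = sol.y[:, -1]
    J = R**4 * dw_R / 6.0           # from the exterior solution, Eq. (17)
    Omega = w_R + R * dw_R / 3.0    # boundary condition, Eq. (18)
    return M / MSUN_KM, R, (J / Omega) * KM3_TO_GCM2

M, R, I = slow_rotation_inertia(1.0e-3)     # eps_c ~ 1.35e15 g/cm^3
x = M / R                                   # = M km / (M_sun R), cf. Eq. (21)
I_emp = 0.237 * M * MSUN_KM * R**2 * (1.0 + 4.2 * x + 90.0 * x**4) * KM3_TO_GCM2
print(f"M = {M:.2f} Msun, R = {R:.1f} km, I = {I:.2e}, Eq.(21): {I_emp:.2e} g cm^2")
```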
## 4 Results and discussion
We calculate the neutron star moment of inertia applying several nucleonic EOSs (see Section 2) considering both slow- and rapid-rotation regimes. In Fig. 3 we show stellar sequences computed with the \\(RNS\\) code for spherical and maximally rotating models. As seen
Figure 3: (Color online) Mass-radius relation for static and maximally rotating neutron stars. Solid lines correspond to static and broken lines to maximally rotating stellar models. Taken from (Krastev et al., 2008).
in the figure, rapid rotation alters significantly the mass-radius relation of neutron stars (with respect to static configurations). Generally, for a given EOS it increases the maximum possible mass by \\(\\sim 15\\%\\), while reducing/increasing the polar/circumferential radius by several kilometers, leading to an overall oblate shape of the rotating star. The degree to which the neutron star properties are impacted by rapid rotation depends on the details of the EOS: it is greater for models from stiffer EOSs, which produce less centrally condensed and gravitationally bound neutron stars (Friedman et al., 1984). In view of these considerations, one should expect similar changes in the moment of inertia of rapidly rotating neutron stars (with respect to static models). We address these and other implications next.
### Slow rotation
If the rotational frequency is much smaller than the Kepler frequency, the deviations from spherical symmetry are negligible and the moment of inertia can be calculated applying the slow-rotation approximation discussed briefly in Section 3. For this case Lattimer & Schutz (2005) showed that the moment of inertia can be very well approximated by Eq. (21). In Fig. 4 we display the moment of inertia as a function of stellar mass for slowly rotating neutron stars as computed with the empirical relation (21). As shown in Fig. 3, above \\(\\sim 1.0M_{\\odot}\\) the neutron star radius remains approximately constant before reaching the maximum mass
Figure 4: (Color online) Total moment of inertia of neutron stars estimated with Eq. (21).
supported by a given EOS. The moment of inertia (\\(I\\sim MR^{2}\\)) thus increases almost linearly with stellar mass for all models. Right before the maximum mass is achieved, the neutron star radius starts to decrease (Fig. 3), which causes the sharp drop in the moment of inertia observed in Fig. 4. Since \\(I\\) is proportional to the mass and the square of the radius, it is more sensitive to the density dependence of the nuclear symmetry energy, which determines the neutron star radius. Here we recall that the \\(x=-1\\) EOS has much stiffer symmetry energy (with respect to the one of the \\(x=0\\) EOS), which results in neutron star models with larger radii and, in turn, momenta of inertia. For instance, for a \"canonical\" neutron star (\\(M=1.4M_{\\odot}\\)), the difference in the moment of inertia is more than 30% with the \\(x=0\\) and the \\(x=-1\\) EOSs. In Fig. 5 we take another view of the moment of inertia where \\(I\\) is scaled by \\(M^{3/2}\\) as a function of the stellar mass (after (Lattimer & Schutz 2005)).
The discovery of the extremely relativistic binary pulsar PSR J0737-3039A,B provides an unprecedented opportunity to test General Relativity and physics of pulsars (Burgay et al. 2003). Lattimer & Schutz (2005) estimated that the moment of inertia of the A component of the system should be measurable with an accuracy of about 10%. Given that the masses of both stars are already accurately determined by observations, a measurement of the moment
Figure 5: (Color online) The moment of inertia scaled by \\(M^{3/2}\\) as a function of the stellar mass \\(M\\). The shaded band illustrates a 10% error of hypothetical \\(I/M^{3/2}\\) measurement of 50 \\(km^{2}\\)\\(M_{\\odot}^{-1/2}\\). The error bar shows the specific case in which the mass is \\(1.34M_{\\odot}\\) (after (Lattimer & Schutz 2005)).
of inertia of even one neutron star could have enormous importance for neutron star physics (Lattimer & Schutz 2005). (The significance of such a measurement is illustrated in Fig. 5. As pointed out by Lattimer & Schutz (2005), it is clear that very few EOSs would survive these constraints.) Thus, theoretical predictions of the moment of inertia are very timely. Calculations of the moment of inertia of pulsar A (\\(M_{A}=1.338M_{\\odot}\\), \\(\\nu_{A}=44.05Hz\\)) have been reported by Morrison et al. (2004) and Bejger et al. (2005). In Table 3 we show the moment of inertia (and other selected quantities) of PSR J0737-3039A computed with the \\(RNS\\) code using the EOSs employed in this study. Our results with the APR EOS are in very good agreement with those by Morrison et al. (2004) (\\(I^{APR}=1.24\\times 10^{45}g~{}cm^{2}\\)) and Bejger et al. (2005) (\\(I^{APR}=1.23\\times 10^{45}g~{}cm^{2}\\)). In the last column of Table 3 we also include results computed with the empirical relation (Eq. (21)). From a comparison with the results from the exact numerical calculation we conclude that Eq. (21) is an excellent approximation for the moment of inertia of slowly-rotating neutron stars. (The average uncertainty of Eq. (21) is \\(\\sim 2\\%\\), except for the DBHF+Bonn B EOS for which it is \\(\\sim 8\\%\\).) Our results (with the MDI EOS) allowed us to constrain the moment of inertia of pulsar A to be in the range \\(I=(1.30-1.63)\\times 10^{45}(g~{}cm^{2})\\).
### Rapid rotation
In this subsection we turn our attention to the moment of inertia of rapidly rotating neutron stars. In Fig. 6 we show the moment of inertia as a function of stellar mass for neutron star models spinning at the mass-shedding (Kepler) frequency. The numerical calculation is performed with the \\(RNS\\) code. We observe that the momenta of inertia of rapidly rotating neutron stars are significantly larger than those of slowly rotating models (for a fixed mass).
Figure 6: (Color online) Total moment of inertia for Keplerian models. The neutron star sequences are computed with the \\(RNS\\) code.
Figure 7: (Color online) Total moment of inertia as a function of stellar mass for models rotating at 716Hz (upper frame) and 1122 Hz (lower frame).
This is easily understood in terms of the increased (equatorial) radius (Fig. 3).
We also compute the momenta of inertia of neutron stars rotating at 716 Hz (Hessels et al., 2006) and 1122 Hz (Kaaret et al., 2007), the rotational frequencies of the fastest pulsars known to date. The numerical results are presented in Fig. 7. As demonstrated by Bejger et al. (2007) and most recently by Krastev et al. (2008), the range of the allowed masses supported by a given EOS for rapidly rotating neutron stars becomes narrower than the one for static configurations. The effect becomes stronger with increasing frequency and depends upon the EOS. This is also illustrated in Fig. 7, particularly in the lower panel. Additionally, the moment of inertia increases with rotational frequency at a rate dependent upon the details of the EOS. This is best seen in Fig. 8 where we display the moment of inertia as a function of the rotational frequency for stellar models with a fixed mass (\\(M=1.4M_{\\odot}\\)). The neutron star sequences shown in Fig. 8 are terminated at the mass-shedding frequency. At the lowest frequencies the moment of inertia remains roughly constant for all EOSs (which justifies the application of the slow-rotation approximation and Eq. (21)). As the stellar models approach the Kepler frequency, the moment of inertia exhibits a sharp rise. This is attributed to the large increase of the circumferential radius as the star approaches the "mass-shedding point". As pointed out by Friedman et al. (1984), properties of rapidly rotating neutron stars display greater deviations from those of spherically symmetric (static) stars for models computed with stiffer EOSs. This is because such models are less centrally condensed and gravitationally bound. This also explains why the momenta of inertia of rapidly
Figure 8: (Color online) Total moment of inertia as a function of rotational frequency for stellar models with mass \\(M=1.4M_{\\odot}\\).
rotating neutron star configurations from the \\(x=-1\\) EOS show the greatest deviation from those of static models.
### Fractional moment of inertia of the neutron star crust
As discussed extensively by Lattimer & Prakash (2000) (and others), the neutron star crust thickness might be measurable from observations of pulsar glitches, the occasional disruptions of the otherwise extremely regular pulsations from magnetized, rotating neutron stars. The canonical model of Link et al. (1999) suggests that glitches are due to the angular momentum transfer from superfluid neutrons to normal matter in the neutron star crust, the region of the star containing nuclei and nucleons that have dripped out of nuclei. This region is bounded by the neutron drip density at which nuclei merge into uniform nucleonic matter. Link et al. (1999) concluded from the observations of the Vela pulsar that at least 1.4% of the total moment of inertia resides in its crust. For slowly rotating neutron stars, applying several realistic hadronic EOSs that permit maximum masses of at least \\(\\sim 1.6M_{\\odot}\\), Lattimer & Prakash (2000) found that the fractional moment of inertia, \\(\\Delta I/I\\), can be expressed approximately as
\\[\\frac{\\Delta I}{I}\\simeq\\frac{28\\pi P_{t}R^{3}}{3Mc^{2}}\\frac{(1-1.67\\beta-0.6 \\beta^{2})}{\\beta}\\left[1+\\frac{2P_{t}(1+5\\beta-14\\beta^{2})}{\\rho_{t}m_{b}c^{2 }\\beta^{2}}\\right]^{-1} \\tag{22}\\]
In the above equation \\(\\Delta I\\) is the moment of inertia of the neutron star crust, \\(I\\) is the total moment of inertia, \\(\\beta=GM/Rc^{2}\\) is the compactness parameter, \\(m_{b}\\) is the average nucleon mass, \\(\\rho_{t}\\) is the transition density at the crust-core boundary, and \\(P_{t}\\) is the transition pressure. The determination of the transition density itself is a very complicated problem. Different approaches often give quite different results. Similar to determining the critical density of the spinodal decomposition of the liquid-gas phase transition in nuclear matter, for uniform \\(npe\\)-matter, Lattimer & Prakash (2000) and more recently Kubis (2007) have evaluated the crust transition density by investigating when the incompressibility of \\(npe\\)-matter becomes negative, i.e.
\\[K_{\\mu}=\\rho^{2}\\frac{d^{2}E_{0}}{d\\rho^{2}}+2\\rho\\frac{dE_{0}}{d\\rho}+\\delta^ {2}\\left[\\rho^{2}\\frac{d^{2}E_{sym}}{d\\rho^{2}}+2\\rho\\frac{dE_{sym}}{d\\rho}-2E _{sym}^{-1}\\left(\\rho\\frac{dE_{sym}}{d\\rho}\\right)^{2}\\right]<0 \\tag{23}\\]
(see Fig. 9) where \\(E_{0}(\\rho)\\) is the EOS of symmetric nuclear matter, \\(E_{sym}\\) is the nuclear symmetry energy, and \\(\\delta=(\\rho_{n}-\\rho_{p})/(\\rho_{n}+\\rho_{p})\\) is the asymmetry parameter. Using this approach and the MDI interaction, Kubis (2007) found transition densities of \\(0.119,0.092,0.095\\) and \\(0.160fm^{-3}\\) for the \\(x\\) parameter of \\(1,0,-1\\) and \\(-2\\), respectively. Similarly, we have calculated the transition densities and pressures for the EOSs employed in this work. Our results are summarized in Table 4. We find good agreement between our results and those by Kubis (2007) with the MDI interaction. It is interesting to notice that the transition densities predicted by all EOSs are in the same density range explored by heavy-ion reactions at intermediate energies. The MDI interaction with \\(x=0\\) and \\(x=-1\\) constrained by the available data on isospin diffusion in heavy-ion reactions at intermediate energies thus limits the transition density rather tightly in the range of \\(\\rho_{t}=[0.091-0.093](fm^{-3})\\).
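The criterion in Eq. (23) is simple to implement. The sketch below (Python) evaluates \\(K_{\\mu}\\) along the \\(\\beta\\)-equilibrium sequence of \\(npe\\) matter, using the subsaturation power-law fits for \\(E_{sym}\\) from Section 2 and, as a rough stand-in assumption for \\(E_{0}(\\rho)\\), a parabolic expansion around saturation with \\(K_{0}=211\\) MeV; because of the schematic \\(E_{0}\\), the resulting densities are only indicative of the \\(\\rho_{t}\\) values in Table 4.

```python
import numpy as np
from scipy.optimize import brentq

HBARC, RHO0, K0, S0 = 197.33, 0.16, 211.0, 31.6   # MeV fm, fm^-3, MeV, MeV

def esym(rho, gamma):
    """Power-law fit to the MDI symmetry energy (MeV), valid below ~rho0."""
    return S0 * (rho / RHO0) ** gamma

def e0(rho):
    """Assumed parabolic stand-in for E(rho, delta=0), with K0 = 211 MeV."""
    return -16.0 + (K0 / 18.0) * (rho / RHO0 - 1.0) ** 2

def beta_delta(rho, gamma):
    """delta from beta equilibrium: hbar*c*(3*pi^2*rho_p)^(1/3) = 4*delta*E_sym."""
    f = lambda d: (HBARC * (3.0 * np.pi**2 * rho * (1.0 - d) / 2.0) ** (1.0 / 3.0)
                   - 4.0 * d * esym(rho, gamma))
    return brentq(f, 1e-6, 1.0 - 1e-6)

def k_mu(rho, gamma, h=1e-4):
    """Eq. (23), with central finite differences for the rho-derivatives."""
    d = beta_delta(rho, gamma)
    es = lambda r: esym(r, gamma)
    d1 = lambda g: (g(rho + h) - g(rho - h)) / (2.0 * h)
    d2 = lambda g: (g(rho + h) - 2.0 * g(rho) + g(rho - h)) / h**2
    return (rho**2 * d2(e0) + 2.0 * rho * d1(e0)
            + d**2 * (rho**2 * d2(es) + 2.0 * rho * d1(es)
                      - 2.0 * (rho * d1(es))**2 / es(rho)))

for x, gamma in ((0, 0.69), (-1, 1.05)):
    rhos = np.linspace(0.03, 0.15, 400)
    k = np.array([k_mu(r, gamma) for r in rhos])
    i = np.where(k < 0.0)[0]
    print(f"x = {x}: rho_t ~ {rhos[i[-1]]:.3f} fm^-3" if i.size else f"x = {x}: stable")
```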
The fractional momenta of inertia \\(\\Delta I/I\\) of the neutron star crusts are shown in Fig. 10 as computed through Eq. (22) with the parameters listed in Table 4. It is seen that the condition \\(\\Delta I/I>0.014\\) extracted from studying the glitches of the Vela pulsar does put a strict lower limit on the radius for a given EOS. It also limits the maximum mass to be less than about \\(2M_{\\odot}\\) for all of the EOSs considered. Similar to the total momenta of inertia the ratio \\(\\Delta I/I\\) changes more sensitively with the radius as the EOS is varied.
\\begin{table}
\\begin{tabular}{l c c c c}
EOS & MDI(x=0) & MDI(x=-1) & APR & DBHF+Bonn B \\\\ \\hline \\hline
\\(\\rho_{t}(fm^{-3})\\) & 0.091 (0.095) & 0.093 (0.092) & 0.087 & 0.100 \\\\
\\(P_{t}(MeV~{}fm^{-3})\\) & 0.645 & 0.982 & 0.513 & 0.393 \\\\ \\hline
\\end{tabular}
The first row identifies the equation of state. The remaining rows exhibit the following quantities: transition density and transition pressure. The numbers in parentheses are the transition densities calculated by Kubis (2007).
\\end{table}
Table 4: Transition densities and pressures for the EOSs used in this paper.
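Eq. (22) together with the \\((\\rho_{t},P_{t})\\) pairs of Table 4 then gives \\(\\Delta I/I\\) directly. A minimal sketch (Python; note the unit bookkeeping: \\(P_{t}\\) and \\(\\rho_{t}m_{b}c^{2}\\) both in MeV fm\\({}^{-3}\\), and in geometrized units \\(Mc^{2}\\) is just the mass expressed in km; the radius \\(R=12\\) km in the example is an assumed value):

```python
import numpy as np

MEVFM3_TO_KM2 = 1.3234e-6     # 1 MeV fm^-3 expressed in geometrized km^-2
MSUN_KM, MB = 1.4766, 939.0   # M_sun in km; average baryon mass m_b c^2 in MeV

def delta_i_over_i(M, R, P_t, rho_t):
    """Eq. (22): M in M_sun, R in km, P_t in MeV fm^-3, rho_t in fm^-3."""
    beta = MSUN_KM * M / R
    pref = 28.0 * np.pi * (P_t * MEVFM3_TO_KM2) * R**3 / (3.0 * MSUN_KM * M)
    bracket = 1.0 + 2.0 * P_t * (1.0 + 5.0 * beta - 14.0 * beta**2) / (rho_t * MB * beta**2)
    return pref * (1.0 - 1.67 * beta - 0.6 * beta**2) / (beta * bracket)

# MDI(x=0) entry of Table 4, for a 1.4 M_sun star with an assumed R = 12 km;
# the result can be compared against the Vela constraint Delta I / I > 0.014.
print(delta_i_over_i(1.4, 12.0, 0.645, 0.091))
```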
Figure 9: (Color online) The incompressibility, \\(K_{\\mu}\\), as a function of baryon density \\(\\rho\\).
## 5 Summary
Recent experiments in terrestrial nuclear laboratories have narrowed down significantly the range of the EOS of neutron-rich nuclear matter although there are still many remaining challenges and uncertainties. In particular, the EOS for symmetric nuclear matter was constrained up to about five times the normal nuclear matter density by collective flow and particle production in relativistic heavy-ion reactions. The density dependence of the symmetry energy was constrained at subsaturation densities by isospin diffusion and isoscaling in heavy-ion reactions at intermediate energies. Applying the EOSs constrained by the heavy-ion reaction data we have studied the neutron star momenta of inertia of both slowly and rapidly rotating models within well established formalisms. We found that the moment of inertia of PSR J0737-3039A is limited in the range of \\(I=(1.30-1.63)\\times 10^{45}(g~{}cm^{2})\\). The neutron star crust-core transition density falls in a very narrow interval of \\(\\rho_{t}=[0.091-0.093](fm^{-3})\\). The corresponding fractional momenta of inertia \\(\\Delta I/I\\) of the neutron star crust are also constrained. It is also found that the moment of inertia increases with rotational frequency at a rate strongly dependent upon the EOS used.
Figure 10: (Color online) The fractional moment of inertia of the neutron star crust as a function of the neutron star mass (left panel) and radius (right panel) estimated with Eq. (22). The constraint from the glitches of the Vela pulsar is also shown.
## Acknowledgements
We would like to thank Lie-Wen Chen, Wei-Zhou Jiang and Jun Xu for helpful discussions. Plamen Krastev acknowledges the hospitality of the Institute for Nuclear Theory (INT) at the University of Washington where parts of this work were accomplished. This work was supported by the National Science Foundation under Grant No. PHY0652548 and the Research Corporation under Award No. 7123.
## References
* () Akmal, A., Pandharipande, V. R., & Ravenhall, D. G. 1998, Phys. Rev., C58, 1804
* () Alonso, D. & Sammarruca, F. 2003, Phys. Rev., C67, 054301
* () Ansorg, M., Kleinwachter, A., & Meinel, R. 2002, A&A, 381, L49
* () Baran, V., Colonna, M., Greco, V., & Di Toro, M. 2005, Phys. Rept., 410, 335
* () Basu, D.N., Chowdhury, P.R., Samanta, C., 2006, Acta Phys. Polon. B37, 2869; Basu, D.N. and Mukhopadhyay, T., 2007, _ibid_, B38, 169; Mukhopadhyay, T. and Basu, D.N., 2007, _ibid_, B38, 3225
* () Bertsch, G.F. & Das Gupta, S., 1988, Phys. Rept., 160, 189
* () Bejger, M., Bulik, T., & Haensel, P. 2005, MNRAS, 364, 635
* () Bejger, M., Haensel, P., & Zdunik, J. L. 2007, A&A, 464, L49
* () Bombaci, I., Thampan, A. V., & Datta, B. 2000, Astrophys. J., 541, L71
* () Bonazzola, S., Gourgoulhon, E., & Marck, J.-A. 1998, Phys. Rev. D, 58, 104020
* () Bonazzola, S., Gourgoulhon, E., Salgado, M., & Marck, J. A. 1993, A&A, 278, 421
* () Brown, B.A., 2000, Phys. Rev. Lett. 85, 5296
* () Burgay, M., D'Amico, N., Possenti, A., Manchester, R. N., Lyne, A. G., Joshi, B. C., McLaughlin, M. A., Kramer, M., Sarkissian, J. M., Camilo, F., Kalogera, V., Kim, C., & Lorimer, D. R. 2003, Nature, 426, 531
* () Chabanat, E., et al., 1997, Nucl. Phys. A 627, 710; 1998, _ibid_, 635, 231
* () Chen, L. W., Ko, C. M., & Li, B.-A. 2005, Phys. Rev. Lett., 94, 032701
* () Chen, L. W., Ko, C. M., & Li, B.-A. 2005, Phys. Rev. C72, 064309
* () Chen, L. W., Ko, C. M., Li, B. A., Yong, G. C., 2007, Frontiers of Physics in China 2, 327
* () Cook, G. B., Shapiro, S. L., & Teukolsky, S. A. 1994, ApJ, 424, 823
* () Danielewicz, P., Lacey, R., & Lynch, W. G. 2002, Science, 298, 1592
* () Das, C. B., Gupta, S. D., Gale, C., & Li, B.-A. 2003, Phys. Rev., C67, 034611
* () Dieperink, A.E.L.. Dewulf, Y., Van Neck, D.,Waroquier,M., Rodin, V., 2003, Phys. Rev. C 68, 064307
* () Friedman, J. L., Parker, L., & Ipser, J. R. 1984, Nature, 312, 25
* () Friedman, J. L., Parker, L., & Ipser, J. R. 1986, Astrophys. J., 304, 115
* () Furnstahl, R.J., 2002, Nucl. Phys. A 706, 85
* () Haensel, P. & Pichon, B. 1994, Astron. Astrophys., 283, 313
* () Hartle, J. B. 1967, Astrophys. J., 150, 1005
* () Hartle, J. B. & Thorne, K. S. 1968, Astrophys. J., 153, 807
* () Heiselberg, H. & Hjorth-Jensen, M. 2000, Phys. Rept., 328, 237
* () Heiselberg, H. & Pandharipande, V. 2000, Ann. Rev. Nucl. Part. Sci., 50, 481
* () Hessels, J. W. T., Ransom, S. M., Stairs, I. H., et al. 2006, Science, 311, 1901
* () Horowitz, C. J., Pollock, S. J., Souder, P. A., and Michaels, R. 2001, Phys. Rev., C63, 025501
* () Horowitz, C. J. & Piekarewicz, J. 2001, Phys. Rev. Lett., 86, 5647
* () Horowitz, C. J. & Piekarewicz, J. 2002, Phys. Rev., C66, 055803
* () Jofre, P., Reisenegger, A., & Fernandez, R. 2006, Phys. Rev. Lett., 97, 131102
* () Kaaret, P., Prieskorn, J., in't Zand, J. J. M., et al. 2007, Astrophys. J., 657, L97
* () Khoa, D. T., Von Oertzen, W., & Ogloblin, A. A. 1996, Nucl. Phys., A602, 98
* () Komatsu, H., Eriguchi, Y., & Hachisu, I. 1989, MNRAS, 237, 355
* () Krastev, P. G. & Sammarruca, F. 2006, Phys. Rev., C74, 025808
* () Krastev, P. G. & Li, B.-A. 2007, Phys. Rev. C76, 055804
* () Krastev, P. G., Li, B.-A. & Worley, A. 2008, Astrophys. J., in press (0709.3621)
* () Kubis, S. 2007, Phys. Rev., C76, 025801
* () Kutschera, M., 1994, Phys. Lett. B340, 1; 1994, Z. Phys. A348, 263; 1989, Phys. Lett. B223, 11; 1993, Phys. Rev. C47, 1077
* () Kutschera, M., Niemiec, J., 2000, Phys. Rev. C62, 025802
* () Lattimer, J. M. & Prakash, M. 2007, Phys. Rept. 442, 109
* () Lattimer, J. M., Schutz, B. F. 2005, Astrophys. J., 629, 979
* () Lattimer, J. M. & Prakash, M. 2000, Phys. Rept., 333, 121
* () Lattimer, J. M. & Prakash, M. 2004, Science, 304, 536
* () Lattimer, J. M., Prakash, M., Masak, D., & Yahil, A. 1990, ApJ, 355, 241
* () Li, B.-A. 2000, Phys. Rev. Lett., 85, 4221
* () Li, B.-A. 2002, Phys. Rev. Lett., 88, 192701
* () Li, B.-A. & Chen, L.-W. 2005, Phys. Rev., C72, 064611
* () Li, B.-A., Ko, C. M., & Bauer, W. 1998, Int. J. Mod. Phys., E7, 147
* () Li, B.-A., Ko, C.M., 1997, Nucl. Phys. A618, 498
* () Li, B.-A., Ko, C. M., & Ren, Z.-Z. 1997, Phys. Rev. Lett., 78, 1644
* () Li, B.-A., Das, C.B., Das Gupta, S., Gale, C., 2004, Phys. Rev. C69, 011603 (R); 2004, Nucl. Phys. A735, 563
* () Li, B.-A., Yong, G. C., & Zuo, W. 2005, Phys. Rev. C71, 014608
* () Li, B.-A. & Steiner, A. W. 2006, Phys. Lett., B642, 436
* () Li, B.-A. & Udo Schroeder, W. 2001, Isospin Physics in Heavy-Ion Collisions at Intermediate Energies (New York: Nova Science)
* () Link, B., Epstein, R.I. & Lattimer, J.M. 1999, Phys. Rev. Lett., 83, 3362
* () Machleidt, R. 1989 Adv. Nucl. Phys., 19, 189
* () Morrison, I. A., Baumgarte, T. W., Shapiro, S. L., & Pandharipande, V. R. 2004, Astrophys. J., 617, L135
* () Muller, H.,Serot, B., 1995, Phys. Rev. C52, 2072
* () Pandharipande, V.R., Garde, V.K., 1972, Phys. Lett. B39, 608
* () Pethick, C. J., Ravenhall, D. G., & Lorenz, C. P. 1995, Nucl. Phys., A584, 675
* () Piekarewicz, J., 2007, Phys. Rev. C76, 064310
* () Prakash, M., Ainsworth, T.L., Lattimer, J.M., 1988, Phys. Rev. Lett. 61, 2518
* () Prakash, M., Lattimer, J. M., Sawyer, R. F., & Volkas, R. R. 2001, Ann. Rev. Nucl. Part. Sci., 51, 295
* () Prakash, M., Prakash, M., Lattimer, J. M., & Pethick, C. J. 1992, ApJ, 390, L77
* () Shetty, D., Yennello, S.J. and Souliotis, G.A., 2007, Phys. Rev. C75, 034602
* () Shi, L. & Danielewicz, P. 2003, Phys. Rev., C68, 064604
* () Steiner, A. W. & Li, B.-A. 2005, Phys. Rev., C72, 041601
* () Steiner, A. W., Prakash, M., Lattimer, J. M., & Ellis, P. J. 2005, Phys. Rept., 411, 325
* () Stergioulas, N. 2003, Living Rev. Rel., 6, 3
* () Stergioulas, N. & Friedman, J. L. 1995, Astrophys. J., 444, 306
* () Stergioulas, N. & Friedman, J. L. 1998, Astrophys. J., 492, 301
* () Stone,J.R., Miller, J.C., Koncewicz,R., Stevenson, P.D., Strayer, M.R., 2003, Phys. Rev. C68, 034324
* () Todd-Rutel, B. G. & Piekarewicz, J. 2005, Phys. Rev. Lett., 95, 122501
* () Tsang, M.B. et al., 2001, Phys. Rev. Lett. 86, 5023
* () Tsang, M.B. et al., 2004, Phys. Rev. Lett. 92, 062701
* () Weber, F. 1999, Pulsars as Astrophysical Laboratories for Nuclear and Particle Physics (Bristol, Great Britan: IOP Publishing)
* () Wiringa, R.B., 1988, Phys. Rev. C38, 2967
* () Wiringa, R.B., Fiks, V., Fabrocini, A., 1988, Phys. Rev. C 38, 1010
* () Yakovlev, D. G. & Pethick, C. J. 2004, Ann. Rev. Astron. Astrophys., 42, 169

Properties and structure of neutron stars are determined by the equation of state (EOS) of neutron-rich stellar matter. While the collective flow and particle production in relativistic heavy-ion collisions have constrained tightly the EOS of symmetric nuclear matter up to about five times the normal nuclear matter density, the more recent experimental data on isospin-diffusion and isoscaling in heavy-ion collisions at intermediate energies have constrained considerably the density dependence of the nuclear symmetry energy at subsaturation densities. Although there are still many uncertainties and challenges to pin down completely the EOS of neutron-rich nuclear matter, the heavy-ion reaction experiments in terrestrial laboratories have limited the EOS of neutron-rich nuclear matter to a range much narrower than that spanned by the various EOSs currently used in astrophysical studies in the literature. These nuclear physics constraints could thus provide more reliable information about properties of neutron stars. Within well established formalisms using the nuclear constrained EOSs we study the momenta of inertia of neutron stars. We put special emphasis on component A of the extremely relativistic double neutron star system PSR J0737-3039. Its moment of inertia is found to be between 1.30 and 1.63 (\\(\\times 10^{45}g~{}cm^{2}\\)). Moreover, the transition density at the crust-core boundary is shown to be in the narrow range of \\(\\rho_{t}=[0.091-0.093](fm^{-3})\\).
dense matter -- equation of state -- stars: neutron -- stars: rotation
# Magnetorotational instability in protoplanetary discs: The effect of dust grains
Raquel Salmeron\\({}^{1,2}\\) & Mark Wardle\\({}^{3}\\)
\\({}^{1}\\)Planetary Science Institute, Research School of Astronomy & Astrophysics and Research School of Earth Sciences,
Australian National University, Canberra ACT 2611, Australia
\\({}^{2}\\)Department of Astronomy & Astrophysics, The University of Chicago, Chicago IL 60637, USA
\\({}^{3}\\)Physics Department, Macquarie University, Sydney NSW 2109, Australia
######
We find that when no grains are present, or they are \(\gtrsim 1\)\(\mu\)m in radius, the midplane of the disc remains magnetically coupled for field strengths up to a few gauss at both radii. In contrast, when a population of small grains (\(a=0.1\mu\)m) is mixed with the gas, the section of the disc within two tidal scaleheights from the midplane is magnetically inactive and only magnetic fields weaker than \(\sim 50\) mG can effectively couple to the fluid. At 5 AU, Ohmic diffusion dominates for \(z/H\lesssim 1\) when the field is relatively weak (\(B\lesssim\) a few milligauss), irrespective of the properties of the grain population. Conversely, at 10 AU this diffusion term is unimportant in all the scenarios studied here. High above the midplane (\(z/H\gtrsim 5\)), ambipolar diffusion is severe and prevents the field from coupling to the gas for all \(B\). Hall diffusion is dominant for a wide range of field strengths at both radii when dust grains are present.
The growth rate, wavenumber and range of magnetic field strengths for which MRI-unstable modes exist are all drastically diminished when dust grains are present, particularly when they are small (\\(a\\sim 0.1\\mu\\)m). In fact, MRI perturbations grow at 5 AU (10 AU) for \\(B\\lesssim 160\\) mG (130 mG) when 3 \\(\\mu\\)m grains are mixed with the gas. This upper limit on the field strength is reduced to only \\(\\sim 16\\) mG (10 mG) when the grain size is reduced to 0.1 \\(\\mu\\)m. In contrast, when the grains are assumed to have settled, MRI unstable modes are found for \\(B\\lesssim 800\\) mG at 5 AU and 250 mG at 10 AU (Salmeron & Wardle, 2005). Similarly, as the typical size of the dust grains diminishes, the vertical extent of the dead zone increases, as expected. For 0.1 \\(\\mu\\)m grains, the disk is magnetically inactive within two scaleheights of the midplane at both radii, but perturbations grow over the entire section of the disk for grain sizes of 1 \\(\\mu\\)m or larger. When dust grains are mixed with the gas, perturbations that incorporate Hall diffusion grow faster, and are active over a more extended cross section of the disc, than those obtained under the ambipolar diffusion approximation.
We conclude that in protoplanetary discs, the magnetic field is able to couple to the gas and shear over a wide range of fluid conditions even when small dust grains are well mixed with the gas. Despite the low magnetic coupling, MRI modes grow for an extended range of magnetic field strengths and Hall diffusion largely determines the properties of the perturbations in the inner regions of the disc.
keywords: accretion, accretion discs - instabilities - magnetohydrodynamics - stars: formation.
Introduction
Magnetic fields may regulate the 'disc accretion phase' of star formation by providing means of transporting away the excess angular momentum of the disc, enabling matter to accrete. The more generally relevant mechanisms associated with this are MHD turbulence induced by the magnetorotational instability (MRI; Balbus & Hawley 1991, 1998) and outflows driven centrifugally from the disc surfaces (Blandford & Payne 1982, Wardle & Konigl 1993; see also the review by Konigl & Pudritz 2000). These processes are, in turn, thought to play key roles in the dynamics and evolution of astrophysical accretion discs. Magnetically driven turbulence is likely to impact disc chemistry (e.g. Semenov, Wiebe & Henning 2006, Ilgner & Nelson 2006) as well as the properties - and evolution - of dust grains mixed with the gas (e.g. Turner et al. 2006). Magnetic resonances have been shown to modify the net tidal torque exerted by the disc on a forming planet and thus, alter the speed and direction of the planet's migration through the disc (Terquem 2003; Johnson, Goodman & Menou 2006). Finally, the magnetically inactive (dead) zones (Gammie 1996; Wardle 1997) are not only the regions where planets are thought to form, but they have also been invoked as a possible mechanism to stop their inward migration (e.g. Matsumura & Pudritz 2005).
In protoplanetary discs, however, the magnetic diffusivity can be high enough to limit - or even suppress - these processes. The specific role magnetic fields are able to play in these environments is, therefore, largely determined by the degree of coupling between the field and the neutral gas. A critical parameter for this analysis is the ionisation fraction of the fluid, which reflects the equilibrium between ionisation and recombination processes taking place in the disc. Ionisation processes outside the disc innermost sections (\\(R\\gtrsim 0.1\\) AU) are non-thermal, driven by interstellar cosmic rays, X-rays emitted by the magnetically active protostar and radioactive decay (Hayashi 1981; Glassgold, Najita & Igea 1997; Igea & Glassgold 1999; Fromang, Terquem & Balbus 2002). On the other hand, free electrons are lost through recombination processes which, in general, take place both in the gas phase and on grain surfaces (e.g. Nishi, Nakano & Umebayashi 1991).
Dust grains affect the level of magnetic coupling in protoplanetary discs when they are well mixed with the gas (e.g. in relatively early stages of accretion and/or when turbulence prevents them from settling towards the midplane). They do so in two ways. First, they reduce the ionisation fraction by providing additional pathways for electrons and ions to recombine on their surfaces. Second, charged dust particles can become important species in high density regions (Umebayashi & Nakano 1990; Nishi, Nakano & Umebayashi 1991). For example, at 1 AU in a disc where 0.1 \(\mu\)m grains are present, positively charged particles are the most abundant ionised species within two scaleheights from the midplane (see Fig. 6 of Wardle 2007; hereafter W07). As these particles generally have large cross sections, collisions with the neutrals are important and they become decoupled (or partially decoupled) from the magnetic field at densities for which smaller species, typically ions and electrons, would still be well tied to it.
Both of these mechanisms act to lower the conductivity of the fluid, especially near the midplane where the density is high and ionisation processes are inefficient. In the minimum-mass solar nebula disc (Hayashi 1981, Hayashi et al. 1985), for example, X-rays are completely attenuated below \(z/H\sim 1.7\) (see Fig. 1 of Salmeron & Wardle 2005; hereafter SW05). As a result, in the disc inner sections the magnetic coupling may be insufficient to provide an adequate link between the neutral particles and the field. Moreover, recent calculations for a minimum-mass solar nebula disk exposed to X-ray and cosmic ray ionising fluxes (W07) indicate that near the surface (above 3 - 4 tidal scaleheights) the magnetic diffusivity can also be severe, even though in these regions the ionising flux is strongest and the electron fraction is significantly larger than it is in the disc interior. This effect results from the strong decline in the number density of charged particles high above the midplane. These considerations suggest that magnetic activity in the inner regions of weakly ionised discs may well be confined to intermediate heights above the midplane (\(z/H\sim 2-4\)).
It is clear that the level of magnetic diffusion is strongly dependent on the presence, and size distribution, of dust particles suspended in the gas phase. In fact, once dust grains have settled, the ionisation fraction may be enough to produce adequate magnetic coupling over the entire vertical extension of the disk even at \(R\lesssim 1\) AU (W07). A realistic study of the properties of the MRI in these discs must, therefore, incorporate a consistent treatment of dust dynamics and evolution (unless they are assumed to have settled, a good approximation for modelling relatively late accretion stages, as was the case in SW03 (Salmeron & Wardle 2003) and SW05). This analysis is further complicated because dust grains have complex spatial and size distributions (e.g. Mathis, Rumpl & Nordsieck 1977, Umebayashi & Nakano 1990, D'Alessio et al. 2006), determined by the competing action of processes involving sticking, shattering, coagulation, growth (and/or sublimation) of ice mantles and settling to the midplane (e.g. Weidenschilling & Cuzzi 1993). Previous results (SW03, SW05) also highlight the importance of incorporating in these studies all three diffusion mechanisms between the magnetic field and the neutral fluid (namely, the Ohmic, Hall and ambipolar diffusivities), as Hall diffusion largely determines the growth and structure of MRI perturbations, particularly in the disc inner regions (e.g. within distances of the order of a few AU from the central protostar; SW05). This is implemented here via a diffusivity tensor formalism (Cowling 1957, Norman & Heyvaerts 1985, Nakano & Umebayashi 1986, Wardle & Ng 1999, W07).
Dust grains can affect the structure and dynamics of accretion discs via two additional mechanisms: Dust opacity can modify the radiative transfer within the disc - which, in turn, can dramatically alter its structure - and dust particles may become dynamically important if their abundance is sufficiently high. In this study both effects are small because the disc is vertically isothermal and grains constitute only a small fraction of the mass of the gas (see below).
In this paper we study the vertical structure and linear growth of the MRI in a disc where dust grains are well mixed with the gas over its entire vertical dimension. Results are presented for two representative distances (\\(R=5\\) and 10 AU) from the central protostar. For simplicity, we assume that all particles have the same radius - \\(a=0.1\\), 1 or 3 \\(\\mu\\)m - and constitute 1% of the total mass of the gas, a typical assumption in studies of molecular clouds (Umebayashi & Nakano 1990). This fraction is constant with height, which means that we have also assumed that no sedimentation has occurred. Although this is a very simplified picture, the results illustrate the importance of dust particles in the delicate ionisation equilibrium of discs, and consequently, on their magnetic activity.
The paper is organised as follows. The adopted disc model is described in section 2. Section 3 briefly summarises the formulation and methodology, which are based on SW05. We refer the reader to that study - and references therein - for further details. This section also includes a discussion of the typical dependence of the components of the diffusivity tensor and magnetic coupling with height with (and without) grains for the two radial positions of interest. Section 4 then presents the vertical structure and linear growth of unstable MRI modes at these radii and compares solutions incorporating different diffusion mechanisms and assumptions regarding the presence, and size, of dust grains. These results, and possible implications for the dynamics and evolution of low conductivity discs, are discussed in section 5. Our main conclusions are summarised in section 6.
## 2 Disc Model
Our fiducial disc, assumed to be geometrically thin and vertically isothermal, is based on the minimum-mass solar nebula model (Hayashi 1981, Hayashi et al. 1985). Our formulation incorporates the disc vertical stratification but - following common practice - it neglects radial gradients. This is appropriate, as these gradients typically occur over a much larger length scale than those in the vertical direction. Under these assumptions, the equilibrium structure of the disc is the result of the balance between the vertical component of the gravitational force exerted by the central object and the pressure gradient within the disc. The vertical profile of the density is then given by
\\[\\frac{\\rho(r,z)}{\\rho_{0}(r)}=\\exp\\left[-\\frac{z^{2}}{2H^{2}(r)}\\right]\\,, \\tag{1}\\]
where \\(\\rho_{0}\\) is the midplane density and \\(H\\equiv c_{\\rm s}/\\Omega\\) is the tidal scaleheight of the gas. The neutral gas is assumed to be composed of molecular hydrogen and helium, such that \\(n_{\\rm He}=0.2n_{\\rm H_{2}}\\), which results in \\(n_{\\rm H}(r,z)=\\rho(r,z)/1.4m_{\\rm H}\\).
As already mentioned above, a key feature of protoplanetary accretion discs is that they are weakly ionised. This is because ionisation processes are generally ineffective (except possibly in the vicinity of the star and in the surface regions), while the recombination rate is accelerated by the high density of the fluid and the removal of charges by dust grains (if present). In fact, outside the innermost 0.1 - 0.3 AU, where thermal effects are relevant (e.g. Hayashi 1981), the main ionising sources are X-rays and UV radiation emanating from the central object (e.g. Glassgold, Feigelson & Montmerle 2000) and - to a much lesser extent - the decay of radioactive materials, particularly \\({}^{40}\\)K (Consolmagno & Jokipii 1978, Sano et al. 2000). Interstellar cosmic rays may also be important as they can potentially reach deeper into the disc than X-ray and UV fluxes do. In fact, cosmic rays are the dominant ionising source at 1 AU for \\(z/H\\lesssim 2.2\\) (they even reach the midplane at this radius, albeit significantly attenuated; e.g. SW05). However, their actual contribution is unclear because the low-energy particles responsible for ionisation may be scattered by outflows launched from the protostar-disc system (e.g. Fromang et al. 2002). On the other hand, recombination processes in the disc generally occur both in the gas phase (through the dissociative recombination of electrons with molecular ions and the radiative recombination with metal ions) and on grain surfaces (e.g. Oppenheimer & Dalgarno 1974, Spitzer 1978, Umebayashi & Nakano 1980, Nishi et al. 1991, Sano et al. 2000). For a typical abundance of metal atoms in the gas phase of \\(8.4\\times 10^{-5}\\delta_{2}\\)1 (Umebayashi & Nakano 1990), the radiative recombination rate of metal ions is dominant for all vertical locations of interest, with the exception of the uppermost sections of the disc (see Fig. 2 of SW05).
Footnote 1: Here \\(\\delta_{2}\\approx 0.02\\) is the fraction of heavy metal atoms in the gas phase, estimated from interstellar absorption lines in diffuse clouds (Morton 1974).
The resulting ionisation fraction of the fluid largely determines the ability of the magnetic field to couple to the gas and shear and thus, regulates the magnetic activity in these astrophysical systems. In protoplanetary discs, in particular, the field has been envisioned to be dynamically important near the surface, whereas magnetic activity may be suppressed in their inner sections (the 'layered accretion' scenario; Gammie 1996, Wardle 1997). However, the existence and configuration of a magnetically inactive - dead - zone in the disk interior has been shown to be critically dependent on the presence and properties of dust grains mixed with the gas (W07). As this study shows, in a minimum-mass solar nebula model at 1 AU the entire cross section of the disc is magnetically coupled when dust grains are assumed to have settled to the midplane. It is, in particular, the presence of small grains that most severely affects the magnetic coupling. For example, the presence of a standard interstellar population of 0.1 \(\mu\)m grains at 1 AU reduces the total active layer of the disc from \(\sim 1700\) g cm\({}^{-2}\) to \(\sim 2\) g cm\({}^{-2}\). This column density increases to \(\sim 80\) g cm\({}^{-2}\) once the grains aggregate to 3 \(\mu\)m. At 5 AU, in contrast, the entire cross section of the disc is coupled once the grains have grown to 1 \(\mu\)m (we refer the reader to W07 for further details of these models).
## 3 Formulation and Methodology
The solutions presented in this paper are based on the formulation detailed in SW03 and SW05. Only a brief summary is given here. We write the equations of non-ideal MHD about a local Keplerian frame that corotates with the disc at the Keplerian frequency \\(\\Omega\\) and express the velocity field as a departure from exact Keplerian motion. We further assume that the abundances of charged species are sufficiently low to be able to neglect their inertia, thermal pressure and the effect of ionisation and recombination processes on the neutrals. Under these conditions, only the equations of motion for the neutral gas are required:
\\[\\frac{\\partial\\rho}{\\partial t}+\
abla\\cdot(\\rho\\mathbf{v})=0\\,, \\tag{2}\\]\\[\\frac{\\partial\\mathbf{v}}{\\partial t}+(\\mathbf{v}\\cdot\
abla)\\mathbf{v}-2\\Omega v_{\\phi}\\mathbf{ \\hat{r}}+\\frac{1}{2}\\Omega v_{r}\\mathbf{\\hat{\\phi}}-\\frac{v_{K}^{2}}{r}\\mathbf{\\hat{r}}+ \\frac{c_{s}^{2}}{\\rho}\
abla\\rho+\
abla\\Phi=\\frac{\\mathbf{J}\\times\\mathbf{B}}{c\\rho}\\,, \\tag{3}\\]
\\[\\frac{\\partial\\mathbf{B}}{\\partial t}=\
abla\\mathbf{\\times}\\left(\\mathbf{v}\\mathbf{\\times}\\bm {B}\\right)-c\
abla\\mathbf{\\times}\\mathbf{E}^{\\prime}-\\frac{3}{2}\\Omega\\mathbf{B}_{r}\\mathbf{ \\hat{\\phi}}\\,. \\tag{4}\\]
In the equation of motion (3), \\(\\Phi\\) is the gravitational potential for a non self-gravitating disc and \\(\\mathbf{v}_{K}^{2}/r\\) is the centripetal term generated by Keplerian motion. Coriolis terms \\(2\\Omega v_{\\phi}\\mathbf{\\hat{r}}\\) and \\(\\frac{1}{2}\\Omega v_{r}\\mathbf{\\hat{\\phi}}\\) are associated with the use of a local Keplerian frame and \\(c_{s}\\) is the isothermal sound speed. In the induction equation (4), \\(\\mathbf{E}^{\\prime}\\) is the electric field in the frame comoving with the neutrals and \\(\\frac{3}{2}\\Omega\\mathbf{B}_{r}\\mathbf{\\hat{\\phi}}\\) accounts for the generation of toroidal field by the disc differential rotation. Finally, the magnetic field must also satisfy the constraint \\(\
abla\\cdot\\mathbf{B}=0\\) and the current density must satisfy Ampere's law,
\\[\\mathbf{J}=\\frac{c}{4\\pi}\
abla\\mathbf{\\times}\\mathbf{B} \\tag{5}\\]
and Ohm's law
\\[\\mathbf{J}=\\mathbf{\\sigma}\\cdot\\mathbf{E}^{\\prime}\\,. \\tag{6}\\]
Following Wardle & Ng (1999) and Wardle (1999; hereafter W99), the current density is expressed as,
\\[\\mathbf{J}=\\mathbf{\\sigma}\\cdot\\mathbf{E}^{\\prime}=\\sigma_{\\rm O}\\mathbf{E}_{\\parallel}^{ \\prime}+\\sigma_{\\rm H}\\mathbf{\\hat{B}}\\mathbf{\\times}\\mathbf{E}_{\\perp}^{\\prime}+\\sigma_{ \\rm P}\\mathbf{E}_{\\perp}^{\\prime}\\,, \\tag{7}\\]
where subscripts \\(\\parallel\\) and \\(\\perp\\) denote vector components parallel and perpendicular to \\(\\mathbf{B}\\). In this expression, \\(\\sigma_{\\rm O}\\), \\(\\sigma_{\\rm H}\\) and \\(\\sigma_{\\rm P}\\) are the Ohmic, Hall and Pedersen conductivity terms given by,
\\[\\sigma_{\\rm O}=\\frac{ec}{B}\\sum_{j}n_{j}Z_{j}\\beta_{j}\\,, \\tag{8}\\]
\\[\\sigma_{\\rm H}=\\frac{ec}{B}\\sum_{j}\\frac{n_{j}Z_{j}}{1+\\beta_{j}^{2}} \\tag{9}\\]
and
\\[\\sigma_{\\rm P}=\\frac{ec}{B}\\sum_{j}\\frac{n_{j}Z_{j}\\beta_{j}}{1+\\beta_{j}^{2} }\\,. \\tag{10}\\]
Subscript \\(j\\) is used here to label the charged species. They are characterised by their number density \\(n_{j}\\), particle mass \\(m_{j}\\), charge \\(Z_{j}e\\) and Hall parameter
\\[\\beta_{j}=\\frac{Z_{j}eB}{m_{j}c}\\,\\frac{1}{\\gamma_{j}\\rho} \\tag{11}\\]
(the ratio of the gyrofrequency and the collision frequency with the neutrals), which measures the relative importance of the Lorentz and drag forces in balancing the electric force on the particle.
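To make equations (8)-(11) concrete, the sketch below assembles the three conductivity components for an arbitrary list of charged species. It is illustrative only: the species densities and Hall parameters in the example are placeholder values chosen to mimic a Hall-regime plasma, not outputs of the reaction network used in this paper.

```python
import numpy as np

# Sketch of equations (8)-(11). Each species is a tuple (n_j, Z_j, beta_j);
# beta_j carries the sign of Z_j, so sigma_O is always positive while
# sigma_H can take either sign. All values below are illustrative only.
e_esu, c_light = 4.803e-10, 2.998e10   # cgs

def hall_parameter(Z, m, gamma, B, rho):
    # Equation (11): gyrofrequency over collision frequency with neutrals.
    return Z * e_esu * B / (m * c_light) / (gamma * rho)

def conductivities(B, species):
    pref = e_esu * c_light / B
    sig_O = pref * sum(n * Z * b for n, Z, b in species)               # eq. (8)
    sig_H = pref * sum(n * Z / (1 + b**2) for n, Z, b in species)      # eq. (9)
    sig_P = pref * sum(n * Z * b / (1 + b**2) for n, Z, b in species)  # eq. (10)
    return sig_O, sig_H, sig_P

# Hall-regime example: |beta_e| >> 1 >> beta_i (electrons tied to B, ions not).
species = [(1.0e-3, -1, -200.0), (1.0e-3, +1, 0.1)]  # (n_j [cm^-3], Z_j, beta_j)
print(conductivities(0.01, species))                 # B = 10 mG
```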
Equation (7) can be inverted to find an expression for \\(\\mathbf{E}^{\\prime}\\). This leads to the following form of the induction equation (W07)
\\[\\frac{\\partial\\mathbf{B}}{\\partial t} =\
abla\\mathbf{\\times}\\left(\\mathbf{v}\\mathbf{\\times}\\mathbf{B}\\right)-\
abla \\mathbf{\\times}\\left[\\eta_{\\rm O}\
abla\\mathbf{\\times}\\mathbf{B}\\right.\\] \\[+\\left.\\eta_{\\rm H}(\
abla\\mathbf{\\times}\\mathbf{B})\\mathbf{\\times}\\hat{\\mathbf{B}} +\\eta_{\\rm A}(\
abla\\mathbf{\\times}\\mathbf{B})_{\\perp}\\right]\\,, \\tag{12}\\]
where
\\[\\eta_{\\rm O}=\\frac{c^{2}}{4\\pi\\sigma_{\\rm O}}\\,, \\tag{13}\\]
\\[\\eta_{\\rm H}=\\frac{c^{2}}{4\\pi\\sigma_{\\perp}}\\frac{\\sigma_{\\rm H}}{\\sigma_{ \\perp}} \\tag{14}\\]
and
\\[\\eta_{\\rm A}=\\frac{c^{2}}{4\\pi\\sigma_{\\perp}}\\frac{\\sigma_{\\rm P}}{\\sigma_{ \\perp}}-\\eta_{\\rm O} \\tag{15}\\]
are the Ohmic, Hall and ambipolar diffusivities; and
\\[\\sigma_{\\perp}=\\sqrt{\\sigma_{\\rm H}^{2}+\\sigma_{\\rm P}^{2}} \\tag{16}\\]
is the total conductivity perpendicular to the magnetic field. When ions and electrons are the only charged species, it can be shown that (W07)
\\[|\\eta_{\\rm H}|=|\\beta_{\\rm e}|\\eta_{\\rm O} \\tag{17}\\]
and
\\[\\eta_{\\rm A}=|\\beta_{\\rm e}|\\beta_{\\rm H}\\eta_{\\rm O}\\,. \\tag{18}\\]
Note that the Ohmic (\(\eta_{\rm O}\)) and ambipolar (\(\eta_{\rm A}\)) diffusivity terms are always positive, as the former does not depend on the magnetic field strength and the latter scales quadratically with it. As a result, they are both invariant under a reversal of the magnetic field polarity. On the contrary, the Hall term (\(\eta_{\rm H}\)) scales linearly with \(B\) and thus, can become negative. The change in sign of \(\eta_{\rm H}\) corresponds, in turn, to a change in the direction of the magnetic field at the height where particular species become decoupled from it by collisions with the neutrals. It corresponds, therefore, to changes in the contribution of different charged species to this component of the diffusivity tensor.
The relative importance of the diffusion terms in (13) to (15) differentiates three _diffusivity regimes_:
1. In the _Ambipolar diffusion_ regime, \\(|\\beta_{j}|\\gg 1\\) for most charged species and \\(\\eta_{\\rm A}\\gg|\\eta_{\\rm H}|\\gg\\eta_{\\rm O}\\). In this limit, which is typically dominant in low density regions (e.g. in molecular clouds and near the surface of protoplanetary discs), the magnetic field is effectively frozen into the ionized component of the fluid and drifts with it through the neutrals.
2. _Ohmic (resistive)_ limit. In this case \(|\beta_{j}|\ll 1\) for most charged species, resulting in \(\eta_{\rm O}\gg|\eta_{\rm H}|\gg\eta_{\rm A}\). The magnetic field can not be regarded as being frozen into any fluid component and the diffusivity is a scalar, the well-known Ohmic diffusivity. This regime dominates close to the midplane in the inner regions (\(R\lesssim 5\) AU) of protoplanetary discs when the magnetic field is relatively weak (W07).
3. _Hall diffusion_ limit, which occurs when \(|\beta_{j}|\gg 1\) for charged species of one sign (typically electrons) and \(\ll 1\) for those of the other sign (e.g. ions). In this case, \(|\eta_{\rm H}|\gg\eta_{\rm A}\) and \(\eta_{\rm O}\). This regime is important at intermediate densities (between those at which the ambipolar and Ohmic diffusion regimes dominate). It has been shown to prevail under fluid conditions satisfied over vast regions in protoplanetary discs (e.g. Sano & Stone 2002a, W07).
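The following sketch, under the same illustrative assumptions as the conductivity example above, converts the conductivity components into the diffusivities of equations (13)-(16) and reports which of the three regimes dominates; the printed ratios can be checked against equations (17) and (18).

```python
import numpy as np

# Sketch of equations (13)-(16) and of the regime classification above.
# Conductivity inputs (from eqs 8-10) are in arbitrary common units; only
# their ratios matter for deciding which diffusion term dominates. Note
# that eta_H flips sign with the field polarity while eta_O and eta_A do not.
c_light = 2.998e10

def diffusivities(sig_O, sig_H, sig_P):
    sig_perp = np.hypot(sig_H, sig_P)                                   # eq. (16)
    eta_O = c_light**2 / (4 * np.pi * sig_O)                            # eq. (13)
    eta_H = c_light**2 / (4 * np.pi * sig_perp) * sig_H / sig_perp      # eq. (14)
    eta_A = c_light**2 / (4 * np.pi * sig_perp) * sig_P / sig_perp - eta_O  # eq. (15)
    return eta_O, eta_H, eta_A

def regime(eta_O, eta_H, eta_A):
    names = ("Ohmic", "Hall", "ambipolar")
    return names[int(np.argmax([eta_O, abs(eta_H), eta_A]))]

# Ion-electron example with |beta_e| = 200, beta_i = 0.1 (placeholders):
eta_O, eta_H, eta_A = diffusivities(200.1, 0.99, 0.104)
print(regime(eta_O, eta_H, eta_A))          # -> 'Hall'
print(abs(eta_H) / eta_O, eta_A / eta_O)    # ~ |beta_e| and |beta_e|*beta_i
```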
The diffusivities used in this study were calculated using the procedure described in W07, to which we refer the reader for details. Essentially, the adopted chemical reaction scheme is based on that of Nishi, Nakano & Umebayashi (1991), but it allows for higher charge states on dust grains. This is necessary because of the higher temperature and density of protoplanetary discs in relation to those typically associated with molecular clouds.
Finally, in the following sections we will also use the magnetic coupling parameter (W99)
\\[\\chi\\equiv\\frac{\\omega_{\\rm c}}{\\Omega}=\\frac{1}{\\Omega}\\frac{B_{0}\\sigma_{ \\perp}}{\\rho c^{2}}\\,, \\tag{19}\\]
the ratio of the critical frequency (\\(\\omega_{\\rm c}\\)) _above_ which flux-freezing conditions break down and the dynamical (Keplerian) frequency of the disc. For the sake of clarity we now sketch the arguments leading to this expression (see also Wardle & Ng 1999). First, recall that non-ideal MHD effects become important when the inductive and diffusive terms in the induction equation (4) are comparable; or
\\[\
abla\\times(\\mathbf{v\\times B})\\sim c\
abla\\times\\mathbf{ E}^{\\prime}\\,, \\tag{20}\\]
where (see equation 6)
\\[\\mathbf{E}^{\\prime}=\\frac{\\mathbf{J}}{\\sigma}=\\frac{1}{ \\sigma}\\left(\\frac{c}{4\\pi}\
abla\\times\\mathbf{B}\\right)\\,. \\tag{21}\\]
In expression (21), \\(\\sigma\\) is taken to be a characteristic measure of the conductivity of the gas. Next, we adopt the following typical values for the various terms above
\\[\
abla\\sim k=\\omega/v_{\\rm A}\\qquad v\\sim v_{\\rm A}\\qquad B\\sim B_{0}\\qquad \\sigma\\sim\\sigma_{\\perp}\\,,\\]
where \\(k\\) is the wavenumber of the MRI perturbations in flux-freezing conditions and \\(v_{\\rm A}\\equiv B^{2}/4\\pi\\rho\\) is the local Alfven speed. Substituting these relations in (20) yields the desired expression for \\(\\omega_{\\rm c}\\). This parameter is useful because in ideal-MHD conditions, the instability grows at \\(\\sim 0.75\\Omega\\) (Balbus & Hawley 1991). Consequently, if \\(\\omega_{\\rm c}<\\Omega\\) (or \\(\\chi<1\\)) the field is poorly coupled to the disc at the frequencies of interest for the analysis of the MRI and non-ideal MHD effects are expected to be important.
Equations (2) to (5) and (7) are linearized about an initial (labeled by a subscript '0'), steady state where the magnetic field is vertical and the current density, fluid velocity and electric field in the frame comoving with the neutrals all vanish. As a result of the last condition, the changes in the components of the diffusivity tensor are not relevant in this linear formulation (e.g. \(\mathbf{E}^{\prime}_{0}\cdot\delta\mathbf{\sigma}\equiv 0\)) and only the initial, unperturbed values are required. Taking perturbations of the form \(\mathbf{q}=\mathbf{q}_{0}+\delta\mathbf{q}(z)e^{i\omega t}\) and assuming \(k=k_{z}\) we obtain a system of ordinary differential equations (ODE) in \(\delta\mathbf{E}\) (the perturbations of the electric field in the laboratory frame), \(\delta\mathbf{B}\), and the perturbations' growth rate \(\nu=i\omega/\Omega\). Three parameters are found to control the dynamics and evolution of the fluid: (1) \(v_{\rm A}/c_{\rm s}\), the local ratio of the Alfven speed and the isothermal sound speed of the gas, which measures the strength of the magnetic field. (2) \(\chi\), the coupling between the ionised and neutral components of the fluid. (3) \(\eta_{\rm H}/\eta_{\rm P}\), the ratio of the diffusivity terms perpendicular to \(\mathbf{B}\), which characterises the diffusivity regime of the fluid. These parameters are evaluated at different locations (\(r\), \(z\)) of the disc taking the magnetic field strength (\(B>1\) mG) as a free parameter.
u=i\\omega/\\Omega\\). Three parameters are found to control the dynamics and evolution of the fluid: (1) \\(v_{\\rm A}/c_{\\rm s}\\), the local ratio of the Alfven speed and the isothermal sound speed of the gas, which measures the strength of the magnetic field. (2) \\(\\chi\\), the coupling between the ionised and neutral components of the fluid. (3) \\(\\eta_{\\rm H}/\\eta_{\\rm P}\\), the ratio of the diffusivity terms perpendicular to \\(\\rm B\\), which characterises the diffusivity regime of the fluid. These parameters are evaluated at different locations (\\(r\\), \\(z\\)) of the disc taking the magnetic field strength (\\(B>1\\) mG) as a free parameter. The system of equations is integrated vertically as a two-point boundary value problem for coupled ODE with boundary conditions \\(\\delta B_{r}=\\delta B_{\\phi}=0\\) and \\(\\delta E_{r}=1\\) at \\(z=0\\) and \\(\\delta B_{r}=\\delta B_{\\phi}=0\\) at \\(z/H=6\\).
Given that magnetic diffusion can have such a dramatic effect on the properties of magnetically-driven turbulence in protoplanetary discs, we now present calculations of the magnetic diffusivity at 5 and 10 AU. We explore which disc regions are expected to be magnetically coupled and which diffusion mechanism is dominant at different positions (\\(r\\), \\(z\\)) as a function of \\(B\\). This discussion is relevant for the analysis of our MRI results at these locations (section 4).
### Magnetic diffusivity
Fig. 1 shows the components of the diffusivity tensor (\\(\\eta_{\\rm O}\\), \\(|\\eta_{\\rm H}|\\) and \\(\\eta_{\\rm A}\\)) as a function of height for \\(R=10\\) AU and a representative field strength (\\(B=10\\) mG). The solutions in the top panel have been obtained assuming that dust grains have settled into a thin layer about the midplane, so the charges are carried by ions and electrons only. The results in the middle and bottom panels incorporate a population of 1 and 0.1 \\(\\mu\\)m-sized grains, respectively. Note that when dust grains are present, all diffusivity terms increase drastically in relation to their values in the no-grains scenario. This effect becomes more accentuated as the grain size diminishes, given the efficiency of small grains in removing free electrons from the gas. For example, in the case that incorporates 0.1 \\(\\mu\\)m-sized particles (bottom panel), the diffusivity components at the midplane are larger, by 4 - 7 orders of magnitude, than their corresponding values when the grains have settled.
When dust grains have either settled or aggregated to at least 1 \(\mu\)m in size (see top and middle panels), \(|\eta_{\rm H}|\) is dominant for \(z/H\lesssim 2.5\) and the fluid in this region is in the Hall diffusivity regime. Also, \(\eta_{\rm A}>\eta_{\rm O}\) there, which implies that, if ions and electrons are the sole charge carriers, \(|\beta_{\rm e}|\beta_{\rm i}>1\) (see equation 18). For higher \(z\), \(\eta_{\rm A}>|\eta_{\rm H}|>\eta_{\rm O}\) and ambipolar diffusion dominates. Note that the Hall diffusivity term increases less sharply than the ambipolar diffusivity in response to the fall in fluid density. This is a general feature, and an expected one, given that the former scales with \(\rho^{-1}\) and the latter with \(\rho^{-2}\). As a result, \(\eta_{\rm A}\) is typically several orders of magnitude larger than \(|\eta_{\rm H}|\) near the surface regions of the disc. Finally, for the 0.1 \(\mu\)m-sized grain population (bottom panel), \(\eta_{\rm A}>|\eta_{\rm H}|\) for all \(z\). This can be traced back to the fact that Hall diffusion is suppressed here because of the nearly equal abundances of (negatively charged) grains and ions in this region.
Note also that the Hall diffusivity component shows characteristic 'spikes' at the heights where it changes sign.
This effect is also particularly evident for the 0.1 \\(\\mu\\)m-sized grains. In this scenario, in particular, \\(\\eta_{\\rm H}\\) is positive when \\(0\\lesssim z/H\\lesssim 1.6\\) and \\(2.5\\lesssim z/H\\lesssim 4.1\\). It is negative at all other vertical locations. As mentioned above, this corresponds to different charged species contributing to this diffusion term. In order to explore the contributions to \\(\\eta_{\\rm H}\\) in each of the vertical sections in which it has a different sign, we now describe the behaviour of the charged species at four representative heights, namely \\(z/H=1\\) and 3 (for which \\(\\eta_{\\rm H}>0\\)), and \\(z/H=2\\) and 5 (where \\(\\eta_{\\rm H}<0\\)).
At \\(z/H=1\\), the density is sufficiently high for electrons to be able to stick to the grains. As a result, they reside mainly on dust particles, as do about a third of the ions. The contribution of (negatively charged) grains and ions to the Hall diffusivity term are very similar, with a small positive excess, which determines the sign of \\(\\eta_{\\rm H}\\) in this region. On the contrary, at \\(z/H=2\\), ions are the dominant positively charged species, while the negative charges are still contained in dust grains and drift together with the neutrals. Consequently, \\(\\eta_{\\rm H}\\) is negative in this section of the disc. At \\(z/H=3\\), ions and electrons are the main charge carriers and ions dominate the contribution to the Hall term, which makes \\(\\eta_{\\rm H}\\) positive. Finally, for \\(z/H=5\\), the dominant contribution to the Hall diffusivity term comes from the small percentage of remaining negatively charged grains. As a result, \\(\\eta_{\\rm H}\\) is negative (and very small).
We now turn our attention to \\(R=5\\) AU (Fig. 2). At this radius, when the dust particles are at least 1 \\(\\mu\\)m in size (or are absent), Hall diffusion dominates for \\(z/H\\lesssim 3\\). In contrast with the solutions at 10 AU, however, Ohmic diffusion dominates over ambipolar diffusion (\\(\\eta_{\\rm O}>\\eta_{\\rm A}\\)) for most of this region (which implies that \\(|\\beta_{\\rm e}|\\beta_{\\rm i}<1\\) in this section of the disc when no grains are present). This is expected, given the larger fluid density at this radius in comparison with the 10 AU case discussed above. When the grains are small (\\(a=0.1\\)\\(\\mu\\)m, see bottom panel), ambipolar diffusion is dominant (\\(\\eta_{\\rm A}>|\\eta_{\\rm H}|>\\eta_{\\rm O}\\)) for \\(1\\lesssim z/H\\lesssim 2.8\\) and \\(z/H\\gtrsim 3.5\\). At all other heights, \\(\\eta_{\\rm A}\\approx|\\eta_{\\rm H}|>\\eta_{\\rm O}\\).
Figs. 3 and 4 generalise the analysis of the previous paragraphs to other field strengths for \(R=10\) and 5 AU, respectively. As before, the top panels refer to discs where no grains are present. The middle and bottom panels incorporate 1 and 0.1 \(\mu\)m-sized grains, respectively. In these plots, the contours show the values of \(\tilde{\eta}\equiv(\eta_{\rm O}^{2}+\eta_{\rm H}^{2}+\eta_{\rm A}^{2})^{1/2}\) and the background shading (from dark to light) denotes the dominant diffusion mechanism as Ohmic, Hall or Ambipolar. The solid line is the critical value of the diffusivity \(\tilde{\eta}_{\rm crit}\equiv Hc_{\rm s}\) (W07), above which the diffusion term in the induction equation \(|\nabla\times(\tilde{\eta}\nabla\times\mathbf{B})|\) is larger than the inductive term \(|\nabla\times(\mathbf{v}\times\mathbf{B})|\) (see equation 4), a situation that effectively limits the ability of the field to couple to the Keplerian shear. The dashed lines correspond to increases (decreases) in \(\tilde{\eta}\) by factors of 10 in the direction of a stronger (weaker) magnetic field.
Note that at 10 AU, in the no-grains case (top panel of Fig. 3), field strengths \\(\\lesssim\\) a few Gauss are able to couple to the gas at the midplane. Hall diffusion dominates at this location for \\(B\\lesssim 0.1\\) G but the range of field strengths over which this occurs decreases gradually with height, so that when \\(z/H\\gtrsim 3.3\\) ambipolar diffusion is dominant for all \\(B\\). The coupled region potentially extends to \\(z/H\\sim 4.5\\), depending on the field strength. When the particles have aggregated to 1 \\(\\mu\\)m in size (middle panel of Fig. 3), the situation is qualitatively similar to the one just discussed, the only significant difference being that in this case Hall diffusion is dominant at the midplane for all \\(B\\) that can couple to the gas.
The previous results, however, are significantly modified when small dust grains are present (\\(a=0.1\\)\\(\\mu\\)m, bottom panel). In this case, the section of the disc below two scaleheights is magnetically inactive and the magnetic diffusivity is severe enough to prevent coupling over the entire disc thickness when \\(B\\gtrsim 25\\) mG. Ambipolar diffusion dominates for all field strengths above \\(\\sim\\) four scaleheights and also for \\(\\sim 1\\) mG \\(\\lesssim B\\lesssim 10-100\\) mG below this height (the actual upper limit of this range varies with \\(z\\)). Hall diffusion is inhibited close to the midplane for this range of field strengths because the number density of positively and negatively charged grains are very similar. Note, finally, that Ohmic diffusion is not dominant in any of the depicted scenarios at this radius, as expected, given the relatively low density of the fluid.
The results at 5 AU are qualitatively similar (Fig. 4) to the ones just discussed. One key difference, however, is that Ohmic diffusion dominates in all cases for \\(z/H\\lesssim 1.2\\) provided that the field is sufficiently weak (\\(B\\lesssim\\) a few milligauss). Ambipolar diffusion, on the other hand, is dominant for relatively strong fields (e.g. \\(B\\gtrsim 1\\) G at \\(z=0\\) in the no-grains case) as well as for all \\(B\\) near the disk surface (\\(z/H\\gtrsim 4\\)). Hall diffusivity is the most important diffusion mechanism for fluid conditions in-between those specified above. Note that the coupled region extends up to \\(z/H\\sim 5\\) in all cases. When no grains are present (top panel) the midplane is coupled for fields up to a few Gauss. This upper limit drops to \\(\\sim 20\\) mG when 1 \\(\\mu\\)m grains are mixed with the gas. When the grains are 0.1 \\(\\mu\\)m in size, however, only the region above \\(z/H\\sim 2.5\\) is coupled, and only for \\(B\\lesssim 50\\) mG. This 'dead' zone is slightly more extended at this radius than at 10 AU, but stronger fields are able to couple to the gas in this case. The steep contours close to the midplane in this scenario result from the pronounced increase in the diffusivity in response to the increase in fluid density and the decline in the ionisation fraction of the gas as \\(z\\) diminishes.
### Magnetic coupling and MRI unstable modes
In this section we address the following question: Which magnetic diffusion mechanism determines the properties of the MRI in different regions in protoplanetary discs? In this connection, it is useful to recall (see W99, SW03 and references therein for details) that in the disc regions where the magnetic coupling \\(\\chi>10\\), ideal-MHD conditions hold and the particular configuration of the diffusivity tensor has little effect on the behaviour of the MRI. When \\(\\chi\\) is weaker than this but \\(\\gtrsim|\\sigma_{\\rm H}|/\\sigma_{\\perp}\\), ambipolar diffusion dominates. Finally, in the regions where \\(\\chi<|\\sigma_{\\rm H}|/\\sigma_{\\perp}\\), Hall diffusion modifies the structure and growth of MRI unstable modes, provided that this degree of coupling is sufficient for unstable modes to grow.
Figs. 5 and 6 compare \\(\\chi\\) with the ratio \\(|\\sigma_{\\rm H}|/\\sigma_{\\perp}\\) at \\(R=10\\) and 5 AU, respectively, as a function of the magnetic field strength and for different assumptions regarding the presence, and radius, of dust grains mixed with the gas. In each figure, the top panels depict the results obtainedassuming that no grains are present whereas the effect of 1 \\(\\mu\\)m (0.1 \\(\\mu\\)m)-sized grains is shown in the middle (bottom) panels. Note the dramatic impact dust grains have in the level of magnetic coupling at both radii. For example, introducing a population of 0.1 \\(\\mu\\)m grains causes \\(\\chi\\) to drop by 6 - 8 orders of magnitude at the midplane. Bottom (solid) and leftmost (dashed) lines correspond to \\(B=1\\) mG in all cases. The field strength increases by a factor of ten towards larger \\(\\chi\\), except that at both radii the top (and rightmost)
Figure 1: Components of the diffusivity tensor (\(\eta_{\rm O}\), \(|\eta_{\rm H}|\) and \(\eta_{\rm A}\)) as a function of height for \(R=10\) AU and \(B=10\) mG. In the top panel, dust grains have settled to the midplane. The middle and bottom panels show the solutions when dust grains of radius \(a=1\) and 0.1 \(\mu\)m, respectively, are well mixed with the gas over the entire disc thickness. When the grains are \(\gtrsim 1\)\(\mu\)m in size (or are absent), Hall diffusion dominates for \(z/H\lesssim 2.5\). In contrast, for the 0.1 \(\mu\)m-sized grains, \(\eta_{\rm A}>|\eta_{\rm H}|\) for all \(z\) and \(|\eta_{\rm H}|>\eta_{\rm O}\) above \(z/H\sim 1\). Note that \(|\eta_{\rm H}|\) shows 'spikes' at the \(z\) where it changes sign in response to different charged species becoming decoupled from the magnetic field by collisions with the neutrals.
Figure 4: As per Fig. 3 for \\(R=5\\) AU. Note that in all cases, Ohmic diffusion is dominant close to the midplane (\\(z/H\\la 1.2\\)) when the field is weak (\\(B<\\) a few mG). The midplane can be magnetically coupled when the grains are absent or have aggregated to \\(a\\ga 1\\)\\(\\mu\\)m. However, a dead zone develops below \\(\\sim 2.5\\) scaleheights when they are small (\\(a=0.1\\)\\(\\mu\\)m). In this scenario, only \\(B\\la 50\\) mG can couple to the gas (and only for \\(z/H\\ga 2.5\\)). Note that ambipolar diffusion dominates for all \\(B\\) when \\(z/H\\ga 4\\) (all panels) and for 1 mG \\(\\la B\\la 10-100\\) mG below this height (bottom panel only).
Figure 3: Contours of \(\tilde{\eta}\equiv(\eta_{\rm O}^{2}+\eta_{\rm H}^{2}+\eta_{\rm A}^{2})^{1/2}\) in a log(B) - \(z/H\) plane for \(R=10\) AU. In the top panel, dust particles are assumed to have settled to the midplane. The effect of dust grains is included in the middle (\(a=1\)\(\mu\)m) and bottom (\(a=0.1\)\(\mu\)m) panels. The solid line is the critical value of the diffusivity \(\tilde{\eta}_{\rm crit}\equiv Hc_{\rm s}\) (W07) above which the magnetic field does not couple to the gas and shear. The background shading (from dark to light) denotes the dominant diffusion mechanism as Ohmic, Hall or Ambipolar. Note that when the grain size is 1 \(\mu\)m or larger (or they have settled), the disc midplane is magnetically coupled for a range of \(B\). In contrast, when the grains are small (bottom panel), magnetic diffusion prevents the field from coupling to the gas for \(z/H\la 2\).
curves for \\(a=1\\)\\(\\mu\\)m show the maximum field strength for which the MRI grows. This is also the case for the solutions with no grains at 5 AU (the maximum values of \\(B\\) for these cases are noted in the captions of Figs. 5 and 6).
Note that at 10 AU (Fig. 5), Hall diffusion is not expected to play an important role in the local properties of the MRI once the grains have settled (\\(\\chi>|\\sigma_{\\rm H}|/\\sigma_{\\perp}\\) for all \\(z\\) and \\(B\\)). In this scenario, ambipolar diffusion dominates in the inner sections of the disc when the field is weak (\\(z/H\\lesssim 2\\) and \\(B\\lesssim 10\\) mG) while ideal-MHD holds at all heights for stronger \\(B\\). On the contrary, when either 1 or 0.1 \\(\\mu\\)m grains are mixed with the gas (middle and bottom panels), Hall diffusion has an impact on the MRI within \\(\\sim\\) three scaleheights of the midplane. At higher \\(z\\), the ionisation fraction is such that \\(|\\sigma_{\\rm H}|/\\sigma_{\\perp}\\lesssim\\chi\\lesssim 10\\) and ambipolar diffusion determines the local properties of MRI unstable modes.
The corresponding solutions at 5 AU are shown in Fig. 6. In this case, Hall diffusion dominates within two scaleheights from the midplane if the magnetic field is weak (\\(B\\lesssim 10\\) mG) and the grains have settled (top panel). This is consistent with the higher column density at this radius in comparison to the 10 AU case discussed above (for which Hall diffusion was unimportant). Ambipolar diffusion is dominant in this scenario for 10 mG \\(\\lesssim B\\lesssim 100\\) mG but for stronger fields, \\(\\chi>10\\) for all \\(z\\) and the fluid is in ideal-MHD conditions over the entire section of the disc. On the other hand, when dust grains are present (middle and bottom panels), Hall diffusion determines the properties of MRI-unstable modes in the inner sections of the disc (\\(z/H\\lesssim 3\\)) for all magnetic field strengths for which they grow. Ambipolar diffusion is locally dominant at higher \\(z\\).
Finally, note that when dust grains are present, the extent of the disc section where Hall diffusion has an impact on the MRI at both radii is quite insensitive to the strength of the field. Evidently, dust particles can efficiently reduce the degree of magnetic coupling in the disc inner sections for a wide range of magnetic field strengths. In light of the concepts presented so far, we now analyse the properties of the MRI at the two radii of interest.
## 4 Magnetorotational Instability
Figs. 7 and 8 compare the vertical structure and growth rate of the most unstable MRI modes at \(R=10\) and 5 AU, respectively, for different choices of the magnetic field strength. The left column of each figure displays solutions obtained assuming that grains have settled out of the gas phase. The remaining columns - from left to right - show results that incorporate the effect of a different population of single-sized dust particles of radius \(a=3\), 1 and 0.1 \(\mu\)m, respectively. Note how the growth rate, wavenumber and range of magnetic field strengths for which unstable modes exist are all drastically diminished when dust grains are present. This is expected, given the reduction in the coupling between the neutral and ionised components of the fluid when dust grains (particularly if they are small) are well mixed with the gas. Note also that the range of field strengths for which unstable modes are found matches quite well with the range for which the magnetic field is expected to couple to the fluid, as discussed in section 3.1 (see Figs. 3 and 4 and compare the maximum magnetic field strength for which the MRI grows and the maximum \(B\) that can couple to the gas at the \(z\) where the modes peak).
For the discussion that follows, it is useful to keep in mind that both the growth rate and the envelope of these perturbations are shaped by the interplay of different diffusion mechanisms, whose relative importance varies strongly
Figure 5: Comparison of the local magnetic coupling \\(\\chi\\) (solid lines) and critical coupling \\(\\chi_{\\rm crit}\\equiv|\\sigma_{\\rm H}|/\\sigma_{\\perp}\\) (dashed lines) for \\(R=10\\) AU as a function of the magnetic field strength and for different assumptions regarding the presence (and size) of dust grains. Hall diffusion modifies the structure and growth rate of MRI unstable modes in the regions where \\(\\chi<\\chi_{\\rm crit}\\) provided that the coupling is sufficient for the instability to operate (SW03). Ambipolar diffusion is important if \\(\\chi_{\\rm crit}<\\chi\\lesssim 10\\). For stronger \\(\\chi\\), ideal-MHD describes the fluid adequately (W99). The bottom (and leftmost) lines correspond to \\(B=1\\) mG in all panels. \\(B\\) increases by a factor of 10 between curves (towards larger \\(\\chi\\)), except that the top curve for \\(a=1\\)\\(\\mu\\)m grains corresponds to \\(B=77\\) mG, the strongest field for which perturbations grow.
with height. In particular, the ambipolar diffusion component of the diffusivity tensor drives the local growth rate (and thus the amplitude of global perturbations) to increase with \\(z\\) (SW03). This is because, in this diffusion regime, the maximum growth rate increases with the local \\(\\chi\\) (W99) and is, therefore, a strong function of the vertical location. Accordingly, the envelopes of the perturbations driven by this term typically peak at an intermediate height above the midplane. On the contrary, the maximum growth rate of Hall perturbations is insensitive to \\(\\chi\\) (W99). Because they are not driven from any particular vertical location, their envelope is fairly flat (SW03). We now analyse the structure and growth of the perturbations at the two radii of interest. The properties of these modes without dust grains were discussed in detail in SW05. For the sake of clarity, these results are briefly summarised as part of this discussion.
At 10 AU, ambipolar diffusion drives the MRI when the field is weak (\\(B\\lesssim 10\\) mG) and ions and electrons are the sole charge carriers (see top panel of Fig. 5). As a result, unstable modes peak above the midplane in this scenario (left column of Fig. 7; e.g. the mode computed with \\(B=1\\) mG peaks at \\(z/H\\sim 0.5\\)). For stronger fields, ideal-MHD holds at all \\(z\\). This explains the flat envelope, and fast growth, of the perturbations obtained with \\(B=10\\) and 100 mG. Unstable modes are found in this case for \\(B\\lesssim 250\\) mG and they grow at about the ideal-MHD rate for 2 mG \\(\\lesssim B\\lesssim 50\\) mG. When \\(B\\) is even stronger than 250 mG, the wavelength of the most unstable mode is \\(\\sim H\\), the disc tidal scaleheight, and the perturbations are strongly damped (Balbus & Hawley 1991). Finally, note that no dead zone develops in this scenario, given that the magnetic field is coupled to the gas even at the midplane (see top panel of Fig. 3).
We now turn our attention to the solutions obtained under the assumption that dust grains are present at this radius. Note that when the dust particles are relatively large (\\(a=3\\) and \\(1\\)\\(\\mu\\)m, central two columns of Fig. 7), the perturbations exhibit the flat envelope that is typical when Hall diffusion controls their behaviour (for these fluid conditions Hall diffusion is expected to modify MRI modes for \\(z/H\\lesssim 2.5\\), Fig. 5). These modes grow even at the midplane, a result consistent with the level of magnetic coupling associated with this disc model (e.g. see middle panel of Fig. 3). On the other hand, when the dust grains are small (rightmost column of Fig. 7) the low magnetic coupling, especially within two scaleheights of the midplane (Fig. 3, bottom panel), causes the amplitude of all perturbations in this section of the disc to be severely reduced. Unstable modes were found here only for \\(B\\lesssim 10\\) mG, a much reduced range compared with the \\(B\\approx 250\\) mG for which they exist when ions and electrons are the only charge carriers.
Fig. 8 shows the solutions obtained for \\(R=5\\) AU. At this radius, MRI unstable modes grow - without grains - for \\(B\\lesssim 795\\) mG. Moreover, they grow at essentially the ideal-MHD rate for a significant subset of this range (200 mG \\(\\lesssim B\\lesssim 500\\) mG). Hall diffusion modifies the structure and growth of these modes for \\(B\\lesssim 10\\) mG. When the field is within this limit, a small magnetically dead region develops but it disappears for stronger \\(B\\). Note that all the solutions that incorporate dust grains peak at a height above the midplane where ambipolar diffusion is locally dominant (e.g. \\(z/H\\sim 4\\) for \\(B=10\\) mG and \\(a=1\\)\\(\\mu\\)m; see also the middle panel of Fig. 6). This signals that this diffusivity term shapes the structure of these perturbations and explains why their amplitude increase with height. The solutions computed with the small grain population (\\(a=0.1\\)\\(\\mu\\)m) exhibit, as in the \\(R=10\\) AU case, an extended dead zone encompassing the region where the magnetic coupling is insufficient to sustain the MRI (\\(z/H\\lesssim 2.5\\); see bottom panel of Fig. 4). Solutions are found in this case for \\(B\\lesssim 16\\) mG.
The solutions described so far in this section incorporate all diffusion mechanisms (represented by \(\eta_{\rm A}\), \(\eta_{\rm H}\) and \(\eta_{\rm O}\)). For comparison, Fig. 9 displays how the full \(\eta\) modes at 10 AU, including 0.1 \(\mu\)m-sized grains (left column), are modified if only ambipolar diffusion (middle column) or Hall diffusion (right column) is considered. Note that when the field is weak (e.g. the solutions for \(B=1\) and 4
Figure 6: As per Fig. 5 for \\(R=5\\) AU. Note, however, that the top (and rightmost) lines correspond to \\(B=795\\) mG (top panel) and \\(B=83\\) mG (middle panel), the maximum field strength for which unstable modes grow in each scenario.
mG), full \\(\\eta\\) perturbations grow faster, and are active closer to the midplane, than modes obtained in the ambipolar diffusion limit. This is the result of the contribution of the Hall diffusivity term (e.g. note that the solution in the Hall limit for \\(B=1\\) mG grows at the ideal-MHD rate). For \\(B=10\\) mG, the structure of the modes computed with and without the Hall term are fairly similar, as this diffusion mechanism is no longer important at the \\(z\\) where they peak (see bottom panel of Fig. 3). Note also that modes in the Hall limit do not grow for \\(B>4\\) mG, a result consistent with ambipolar diffusion being dominant at these field strengths (see bottom panel of Fig. 3).
A similar comparison is shown in Fig. 10 for \\(R=5\\) AU and incorporating the effect of 3 \\(\\mu\\)m-sized dust particles. The Hall term is important here also, as evidenced by the faster growth and more extended unstable zone of the perturbations that include this diffusion mechanism. Ambipolar diffusion strongly influences the structure of the modes for intermediate field strengths (e.g. compare the solutions with \\(B=10\\) mG and different configurations of the diffusivity tensor) and the perturbations grow in this limit for a reduced range of \\(B\\) in relation to that associated with full \\(\\eta\\) (or Hall limit) modes.
The dependence of the growth rate of the most unstable mode (\(\nu_{\rm max}\)) on the strength of the magnetic field is summarised in Fig. 11 for different assumptions regarding the presence, and radius, of dust grains mixed with the gas. Results are shown for \(R=10\) AU (top panel) and 5 AU (bottom panel). Note that \(\nu_{\rm max}\) drops sharply at a characteristic field strength (\(B_{\rm max}\)), which is a function of both the properties of the grain population and the radial position. The maximum field strength for which perturbations grow in a particular scenario is weaker at larger radii, an effect that is particularly noticeable when no grains are present. This behaviour results from the instability being (generally) damped when the midplane ratio of the Alfven to sound speed approaches unity (e.g. \(v_{\rm A0}/c_{\rm s}\sim 1\); Balbus & Hawley 1991)2 and the wavelength of the most unstable mode becomes \(\sim H\). As the midplane density and temperature decrease with radius, the ratio \(v_{\rm A0}/c_{\rm s}\) associated with a particular field strength increases at larger radii and the perturbations are damped at a weaker field as \(r\) increases. Note also that for each radius, the range of field strengths over which unstable modes exist is smaller as the grain size diminishes. This is consistent with the drop in magnetic coupling for a particular field strength as the dust particles become smaller (see Figs. 3 and 4).
Footnote 2: Note, however, that perturbations computed in the Hall limit have been found to grow for \\(v_{\\rm A0}/c_{\\rm s}\\lesssim 3\\) (SW03) when the magnetic field is counteraligned with the disc angular velocity vector (\\(\\Omega\\)).
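Since damping sets in once \(v_{\rm A0}/c_{\rm s}\sim 1\), the midplane equipartition field \(B_{\rm eq}=c_{\rm s}\sqrt{4\pi\rho_{0}}\) provides a rough upper envelope for \(B_{\rm max}\). The sketch below evaluates it using the Hayashi-type scalings assumed in the Section 2 sketch (again an assumption, not the paper's tabulated model).

```python
import numpy as np

# Rough check of the damping criterion above: B_max should track the
# midplane equipartition field B_eq = c_s * sqrt(4 pi rho0). Hayashi-type
# Sigma(r), T(r) scalings and mu = 2.34 are assumed, as in Section 2.
G, M_sun, m_H, k_B, AU = 6.674e-8, 1.989e33, 1.673e-24, 1.381e-16, 1.496e13

for r_AU in (5.0, 10.0):
    c_s = np.sqrt(k_B * 280.0 * r_AU**-0.5 / (2.34 * m_H))
    Omega = np.sqrt(G * M_sun / (r_AU * AU)**3)
    rho0 = 1700.0 * r_AU**-1.5 * Omega / (np.sqrt(2 * np.pi) * c_s)
    print(f"{r_AU:4.1f} AU: B_eq ~ {1e3 * c_s * np.sqrt(4 * np.pi * rho0):.0f} mG")
# -> ~950 mG (5 AU) and ~310 mG (10 AU), bracketing the 795 and 250 mG
#    no-grain limits quoted in the text.
```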
Finally, note that at 5 AU and for weak fields (\(B\lesssim 5\) mG, bottom panel of Fig. 11) the MRI grows faster when the small grains are considered than it does in the other scenarios. This is because in this case the modes grow high above the midplane, where the magnetic coupling is more favourable, and are completely suppressed at lower \(z\). In the other cases, on the contrary, the perturbations grow over a more extended section of the disc and the comparatively low magnetic coupling closer to the midplane reduces their global growth rate. This effect is not so noticeable at 10 AU because of the relatively high magnetic coupling even at \(z=0\) at this radius.
## 5 Discussion
In this paper we have examined illustrative examples of the impact of dust grains on the magnetic activity of protoplanetary discs and, in particular, on the linear growth and vertical structure of MRI perturbations. Solutions were computed
Figure 11: Growth rate of the most unstable MRI modes as a function of the strength of the magnetic field for different assumptions regarding the presence, and size, of dust grains mixed with the gas. Solutions are presented for \\(R=10\\) AU (top panel) and \\(R=5\\) AU (bottom panel).
for \\(R=10\\) and 5 AU assuming a single size grain population of radius \\(a=0.1\\), 1 or 3 \\(\\mu\\)m, and constituting 1 % of the total mass of the gas, is well mixed with the gas phase over the entire vertical extent of the disc. This fraction is independent of height, so we have also assumed that the grains have not sedimented towards the midplane. Our results indicate that the perturbations' wavenumber and growth rate are significantly reduced when grains are present. Furthermore, the magnetically inactive - dead - zone, which was practically non-existent when grains were settled, extends to \\(z/H\\sim 3\\) at either radii when 0.1 \\(\\mu\\)m-sized grains are considered. At 10 AU (5 AU), unstable perturbations were found in this case for \\(B\\lesssim 10\\) mG (16 mG), a much reduced range compared with the strengths for which they exist when no grains are involved (250 and 795 mG, respectively). This maximum field strength corresponds well to the equipartition field at the height at which the perturbations peak, as expected (e.g. \\(z/H\\approx 3.7\\) at 10 AU; see lower right panel of Fig. 7).
These results illustrate the impact of dust particles on the dynamics - and evolution - of low conductivity discs. They can also be used to estimate the maximum magnetic field strength to support magnetic activity at 1 AU. The magnetic coupling at this radius, in a disc including 0.1 \\(\\mu\\)m grains, is too weak below \\(z/H\\sim 3.5\\) to allow the field to sufficiently couple with the gas (W07). Assuming that the maximum field strength for the MRI to grow is also of the order of the equipartition field at about this height, we can roughly estimate that the MRI should be active for \\(B\\lesssim 400\\) mG at 1 AU. This is a smaller range than the several gauss for which unstable modes exist when dust grains are not present (SW05). However, MRI modes still grow in this case for a wide range of field strengths.
The results just presented were obtained assuming that all grains have the same size and are well mixed with the
Figure 7: Structure and growth rate of the fastest growing MRI modes for \\(R=10\\) AU and different choices of the magnetic field strength. The leftmost column shows the perturbations obtained if dust grains have settled, while the remaining ones, from left to right, display results assuming a population of single-sized grains of radius \\(a=3\\), 1 and 0.1 \\(\\mu\\)m, respectively, are well mixed with the gas. The growth rate is indicated in the lower right corner of each panel. The strength of the field appears in the top right corner. Results are displayed for \\(B\\) spanning from 1 mG, the weakest magnetic field for which unstable modes could be computed, to the maximum strength for which unstable modes were found in each case. Note the reduced wavenumber, growth rate and range of \\(B\\) for which perturbations exist – as well as the extended dead zone – when dust grains are present. When the grains are relatively large (central two columns) Hall diffusion controls the modes, which grow even at the midplane. In contrast, when they are small (right column), the magnetic coupling is too low to sustain the instability for \\(z/H\\lesssim 3\\) and ambipolar diffusion controls the perturbations that grow above this height.
gas at all \\(z\\). More realistic spatial, and size, distributions must incorporate the effects of dust dynamics and evolution within the disc. Observations of the mid- and far-infrared spectra of discs have provided credible indications that dust properties in discs are indeed different from those of particles in diffuse clouds (e.g. Van Dishoeck 2004, D'Alessio et al. 2006 and references therein). Two aspects of this dust evolution have been clearly identified. First, dust grains coagulate from 0.1 \\(\\mu\\)m to \\(\\sim\\) 1 mm particles. Second, (silicate) material becomes crystallised. It is believed that this crystallisation occurs in the disc, given that crystalline silicates are absent from the interstellar medium. Furthermore, the presence of this material at radial locations where the temperature is too low to produce them suggests that significant radial mixing takes place as well (e.g. Van Dishoeck 2004 and references therein). Finally, simulations of dust dynamics and evolution also suggest that in quiescent environments, the grains tend to settle and agglomerate into bigger particles (e.g. Weidenschilling & Cuzzi 1993) and efficiently coagulate and grow icy mantles (Ossenkopf 1993). All these processes modify the surface area of dust grains and impact the recombination rate on their surfaces and the way they drift in response to magnetic stresses. It is also expected that a residual population of small grains would remain (e.g. Nomura et al. 2006) even when the mean grain size may be relatively large (\\(a\\gtrsim 1\\mu\\)m). This is an important consideration because these small grains tend to carry a significant fraction of the grain charge (S07). The effect of this settling
Figure 8: As per Fig. 7 for \\(R=5\\) AU. At this radius, the solutions that incorporate 3 and 1 \\(\\mu\\)m-sized grains (central two columns) are shaped by ambipolar diffusion, as this mechanism is dominant at the height where they peak (see middle panel of Fig. 6). Note the extended dead zone when the grains are small (right column). In this scenario, severe magnetic diffusivity prevents the magnetic field from coupling to the gas for \\(z/H\\lesssim 2.5\\) and ambipolar diffusion dominates at higher \\(z\\) where the MRI grows.
on the spectral energy distribution (and optical appearance) of protostellar discs has been investigated in recent studies (e.g. Dullemond & Dominik 2004).
How quickly, and to what height, dust particles are able to settle is an important, and largely unanswered, question. According to Nakagawa, Nakazawa & Hayashi (1981), the mass fraction of \\(\\sim\\) 1 - 10 \\(\\mu\\)m grains well mixed with the gas diminishes from \\(\\sim\\) 10\\({}^{-1}\\) to 10\\({}^{-4}\\) in a timescale of about \\(2\\times 10^{3}\\) to 10\\({}^{5}\\) years. Moreover, although the timescale for dust grains to sediment all the way to the midplane may exceed the lifetime of the disc, they may be able to settle within a few scaleheights from the midplane in a shorter timescale (Dullemond & Dominik 2004). This is complicated even more by the expectation that the transition between sections where dust grains are well mixed with the gas, and those completely depleted of them, occurs gradually (Dullemond & Dominik 2004).
MHD turbulence may itself be an important factor for the settling of dust particles. It may, in particular, produce sufficient vertical stirring to prevent settling below a certain height (Dullemond & Dominik 2004, Carballido et al. 2005, Turner et al. 2006). However, this is contingent on the disc being able to generate and sustain MHD turbulence in the vertical sections where the dust is present. This is not guaranteed, even if turbulence exists in other regions, as dust grains efficiently reduce the ionisation fraction (and magnetic coupling) of the gas. As a result, the efficiency - and even the viability - of MHD turbulence in the presence of dust grains, is an important topic that merits careful investigation.
## 6 Summary
We have explored in this paper the linear growth and vertical structure of MRI unstable modes at two representative radii (\\(R=5\\) and 10 AU) in protoplanetary discs. Dust grains are assumed to be well mixed with the fluid over the entire section of the disc and, for simplicity, are taken to have
Figure 9: Structure and growth rate of the fastest growing MRI modes as a function of height for \\(R=10\\) AU and assuming 0.1 \\(\\mu\\)m grains are present. Different configurations of the diffusivity tensor are shown. The left column displays solutions incorporating all \\(\\eta\\) components (\\(\\eta_{\\rm A}\\), \\(\\eta_{\\rm H}\\) and \\(\\eta_{\\rm O}\\)). The middle and right columns correspond to the ambipolar (\\(\\eta_{\\rm H}=0\\)) and Hall diffusion (\\(\\eta_{\\rm A}=0\\)) limits, respectively. We find that for relatively weak fields (\\(B=1\\) and 4 mG), Hall diffusion causes the perturbations to grow faster and closer to the midplane than solutions in the ambipolar diffusion limit. For \\(B=10\\) mG, the structure of the modes computed with and without the Hall term are fairly similar, as this diffusion mechanism is no longer important at the \\(z\\) where they peak (see bottom panel of Fig. 3).
the same radius (\\(a=3\\), 1 or 0.1 \\(\\mu\\)m). They constitute a constant fraction (1%) of the total mass of the gas. These solutions are compared with those arrived at assuming that the grains have settled to the midplane of the disc (SW05). We have also explored which disc sections are expected to be magnetically coupled and the dominant diffusion mechanism as a function of height and the strength of the magnetic field, which is initially vertical.
Our models use a minimum-mass solar nebula disc (Hayashi 1981, Hayashi et al. 1985) and incorporate all three diffusion mechanisms between the magnetic field and the neutral gas: Ohmic, Hall and ambipolar. The diffusivity components are a function of height (as is the density) and are obtained using the method described in W07, to which we refer the reader for details. Essentially, this formalism uses a chemical reaction scheme similar to that of Nishi, Nakano & Umebayashi (1991), but it incorporates higher dust-grain charge states that are likely to occur in discs on account of their larger gas density and temperature in relation to those of molecular clouds. Our calculations also include a realistic ionisation profile, with the main ionising sources being cosmic rays, X-rays and (to a lesser extent) radioactive decay.
Solutions were obtained at the two radii of interest for different grain sizes and configurations of the diffusivity tensor as a function of the magnetic field strength. We refer the reader to SW03 and SW05 for further details of the integration
Figure 10: Structure and growth of the fastest growing MRI modes as a function of height at \\(R=5\\) AU and assuming 3 \\(\\mu\\)m grains are present. The left column shows solutions incorporating all \\(\\eta\\) components (\\(\\eta_{\\rm A}\\), \\(\\eta_{\\rm H}\\) and \\(\\eta_{\\rm O}\\)). The middle and right columns display the Hall (\\(\\eta_{\\rm A}=0\\)) and ambipolar diffusion (\\(\\eta_{\\rm H}=0\\)) limits, respectively. Hall diffusion strongly modifies the structure and growth of unstable modes.
procedure. The main findings of this study are detailed below.
### Magnetic diffusivity
1. When no grains are present, or they are \\(\\gtrsim 1\\)\\(\\mu\\)m in radius, the midplane of the disc remains magnetically coupled for field strengths up to a few gauss at both radii.
2. In contrast, when a population of small grains (\\(a=0.1\\mu\\)m) is mixed with the gas, the section of the disc below \\(z/H\\sim 2\\) (\\(z/H\\sim 2.5\\)) is magnetically inactive at \\(R=10\\) AU (5 AU). Only magnetic fields weaker than 25 mG (50 mG) can couple to the gas.
3. At 5 AU, Ohmic diffusion dominates for \\(z/H\\lesssim 1.2\\) when the field is relatively weak (\\(B\\lesssim\\) a few milligauss), irrespective of the properties of the grain population. Conversely, at 10 AU this diffusion term is unimportant in all the scenarios studied here.
4. High above the midplane (\\(z/H\\gtrsim 4.5-5\\), depending on the specific model), ambipolar diffusion is severe and prevents the field from coupling to the gas for all \\(B\\). This is consistent with previous results by W07.
5. Hall diffusion is dominant for a wide range of field strengths and grain sizes at both radii (see Figs. 3 and 4).
### Magnetorotational instability
1. The growth rate, wavenumber and range of magnetic field strengths for which unstable modes exist are all drastically diminished when dust grains are present, particularly when they are small (\\(a\\sim 0.1\\)\\(\\mu\\)m; see Figs. 7 and 8).
2. In all cases that involve dust grains, perturbations that incorporate Hall diffusion grow faster than those obtained under the ambipolar diffusion approximation.
3. At 10 AU, unstable MRI modes grow for \\(B\\lesssim 80\\) mG (10 mG) when the grain size is 1 \\(\\mu\\)m (0.1 \\(\\mu\\)m), a much reduced range compared with the \\(\\sim 250\\) mG for which they exist when ions and electrons are the only charge carriers (SW05). When the grains are relatively large (\\(a=1\\) and 3 \\(\\mu\\)m), Hall diffusion controls the structure of the modes, which grow even at the midplane. In contrast, when the grains are small (\\(a=0.1\\)\\(\\mu\\)m), the perturbations grow only for \\(z/H\\gtrsim 3\\) and are shaped mainly by ambipolar diffusion.
4. At 5 AU, MRI perturbations exist for \\(B\\lesssim 80\\) mG (16 mG) when the grains are 1 \\(\\mu\\)m (0.1 \\(\\mu\\)m) in size. For comparison, the upper limit when no grains are present is \\(\\sim 800\\) mG (SW05). These modes are shaped largely by ambipolar diffusion, the dominant mechanism at the height where they peak.
We conclude that in protoplanetary discs, the magnetic field is able to couple to the gas and shear over a wide range of fluid conditions even when small dust grains are well mixed with the gas. Despite the low magnetic coupling, MRI modes grow for an extended range of magnetic field strengths and Hall diffusion largely determines the properties of the perturbations in the inner regions of the disc.
## Acknowledgments
This research has been supported by the Australian Research Council. RS acknowledges partial support from NASA Theoretical Astrophysics Program Grant NNG04G178G.
## References
* [1] Balbus S. A., Hawley J. F., 1991, ApJ, 376, 214
* [2] Balbus S. A., Hawley J. F., 1998, Rev. Mod. Phys., 70, 1
* [3] Blandford R.D., Payne D.G., 1982, MNRAS, 199, 883 (BP82)
* [4] Carballido A., Stone J. M., Pringle J. E., 2005, MNRAS, 358, 1055
* [5] Consolmagno G. J., Jokipii J. R., 1978, Moon Planets, 19, 253
* [6] Cowling, T. G. 1957, Magnetohydrodynamics (New York: Interscience)
* [7] D'Alessio P., Calvet N., Hartmann L., Franco-Hernandez R., Servin H., 2006, ApJ, 638, 314
* [8] Dullemond C. P., Dominik C., 2004, A&A, 421, 1075
* [9] Fromang S., Terquen C., Balbus S. A., 2002, MNRAS, 329, 18
* [10] Gammie C. F., 1996, ApJ, 457, 355
* [11] Glassgold A. E., Feigelson E. D., Montmerle T., 2000, in Protostars & Planets IV, ed. V. G. Mannings, A. P. Boss, S. Russell (Tucson: Univ. Arizona Press), p. 429
* [12] Glassgold A. E., Najita J., Igea J., 1997, ApJ, 480, 344
* [13] Hayashi C., 1981, Prog Theor Phys Supp, 70, 35
* [14] Hayashi C., Nakasawa K., Nakagawa Y., 1985, in Protostars & Planets II, ed. D.C. Black, M. S. Mathews (Tucson: Univ. Arizona Press), p. 1100
* [15] Igea J., Glassgold A. E., 1999, ApJ, 518, 848
* [16] Ilgner M., Nelson R. P., 2006, A&A, 445, 223
* [17] Johnson E. T., Goodman J., Menou K., 2006, ApJ, 647, 1413
* [18] Konigl A., Pudritz R. E., 2000, in Mannings, V. G., Boss, A. P., Russell, S. eds, Protostars & Planets IV. Univ. Arizona Press, Tucson, p. 759
* [19] Mathis J. S., Rumpl W., Nordsieck K. H., 1977, ApJ, 217, 425
* [20] Matsumura S., Pudritz R. E., 2005, ApJ, 618, L137
* [21] Morton D. C., 1974, ApJ, 193, L35
* [22] Nakagawa Y., Nakazawa K., Hayashi C., 1981, Icarus, 45, 517
* [23] Nakano T., Umebayashi T. 1986, MNRAS, 218, 663
* [24] Nishi R., Nakano T., Umebayashi T., 1991, ApJ, 368, 181
* [25] Nomura H., Nakagawa Y., 2006, ApJ, 640, 1099
* [26] Norman C., Heyvaerts J., 1985, A&A, 147, 247
* [27] Oppenheimer M., Dalgarno A., 1974, ApJ, 192, 29
* [28] Ossenkopf V., 1993, A&A, 280, 617
* [29] Salmeron R., Wardle M., 2003, MNRAS, 345, 992 (SW03)
* [30] Salmeron R., Wardle M., 2005, MNRAS, 361, 45 (SW05)
* [31] Sano T., Stone J. M., 2002a, ApJ, 570, 314
* [32] Sano T., Miyama S., Umebayashi J., Nakano T., 2000, ApJ, 543, 486
* [33] Semenov D., Wiebe D., Henning Th., 2006, ApJ, 647, L57
* [34] Spitzer, L. 1978, Physical Processes in the Interstellar Medium (New York: Wiley)
* [35] Terquen C., 2003, MNRAS, 341, 1157
* [36] Turner N. J., Willacy K., Bryden G., Yorke H. W., 2006, ApJ, 639, 1218
* [37] Umebayashi T., Nakano T., 1980, PASJ, 32, 405
* [38] Umebayashi T., Nakano T., 1990, MNRAS, 243, 103
* [39] Van Dishoeck E. F., 2004, ARA&A, 42, 119
* [40] Wardle, M., Konigl, A. 1993, ApJ, 410, 218
* [41] Wardle M., 1997, in Proc. IAU Colloq. 163, Accretion Phenomena and Related Outflows, ed. D. Wickramasinghe, L. Ferrario, G. Bicknell (San Francisco: ASP), p. 561
* [42] Wardle M., 1999, MNRAS, 307, 849 (W99)
* [43] Wardle M., 2007, arXiv:0704.0970 (W07)
* [44] Wardle M., Ng C., 1999, MNRAS, 303, 239
* [45] Weidenschilling S. J., Cuzzi J. N., 1993, in Protostars & Planets III, ed. E. H. Levy, J. I. Lunine (Tucson: Univ. Arizona Press), p. 1031
# A Geometrical Study of Matching Pursuit Parametrization
## 1 Introduction
There has been a large effort in the last decade to develop analysis techniques that decompose non-stationary signals into elementary components, called _atoms_, that characterize their salient features [1, 2, 3, 4, 5]. In particular, the matching pursuit (MP) algorithm has been extensively studied [6, 7, 8, 9, 10, 11, 2] to expand a signal over a redundant dictionary of elementary atoms, based on a greedy process that selects the elementary function that best matches the residual signal at each iteration. Hence, MP progressively isolates the structures of the signal that are coherent with respect to the chosen dictionary, and provides an adaptive signal representation in which the more significant coefficients are first extracted. The progressive nature of MP is a key issue for adaptive and scalable communication applications [12, 13].
A majority of works that have considered MP for practical signal approximation and compression define the dictionary based on the discretization of a parametrized prototype function, typically a scaled/modulated Gaussian function or its second derivative [6, 14, 15]. An orthogonal 1-D or 2-D wavelet basis is also a trivial example of such a discretization even if in that case MP is not required to find signal coefficients; a simple wavelet decomposition is computationally more efficient. Works that do not directly rely on a prototype function either approximate such a parametrized dictionary based on computationally efficient cascades of filters [16, 17, 18], or attempt to adapt a set of parametrized dictionary elements to a set of training signal samples based on vector quantization techniques [19, 20]. Thus, most earlier works define their dictionary by discretizing, directly or indirectly, the parameters of a prototype function.
The key question is then: _how should the continuous parameter space be discretized?_ A fine discretization results in a large dictionary which approximates signals efficiently with few atoms, but costs both in terms of computational complexity and atom index entropy coding. Previous works have studied this trade-off empirically [6, 15]. In contrast, our paper focuses on this question in a formal way. It provides a first attempt to quantify analytically how the MP convergence is affected by the discretization of the continuous space of dictionary function parameters.
Our compass to reach this objective is the natural geometry of the continuous dictionary. This dictionary can be seen as a parametric (Riemannian) manifold on which the tools of differential geometry can be applied. This geometrical approach, of increasing interest in the signal processing literature, is inspired by the works [21, 22] on _Image Appearance Manifolds_, and is also closely linked to manifolds of parametric probability density function associated to the Fisher information metric [23]. Some preliminary hints were also provided in a Riemannian study of generalized correlation of signals with probing functions [24].
The outcome of our study is twofold. On the one hand, we analyze how the rate of convergence of the continuous MP (cMP) is affected by the discretization of the prototype function parameters. We demonstrate that the MP using that discretized dictionary (dMP) converges like a weak continuous MP, i.e. a MP algorithm where the coefficient of the selected atom at each iteration overtakes only a percentage (the _weakness factor_) of the largest atom magnitude. We then describe how this weakness factor decreases as the so-called _density radius1_ of the discretization increases. This observation is demonstrated experimentally on images and randomly generated 1-D signals.
Footnote 1: This density radius represents the maximal distance between any atom of the continuous dictionary and its closest atom in the discretization.
On the other hand, to improve the rate of convergence of discrete MP without resorting to a finer but computationally heavier discretization, we propose to exploit a geometric gradient ascent method. This allows convergence to a set of locally optimal continuous parameters, starting from the best set of parameters identified by a coarse but computationally light discrete MP. Each atom of the MP expansion is then defined in two steps. The first step selects the discrete set of parameters that maximizes the inner product between the corresponding dictionary function and the residual signal. The second step implements a (manifold2) gradient ascent method to compute the prototype function parameters that maximize the inner product function over the continuous parameter space. As a main analytical result, we demonstrate that this geometrically optimized discrete MP (gMP) is again equivalent to a continuous MP, but with a weakness factor that is two times closer to unity than for the non-optimized dMP. Our experiments confirm that the proposed gradient ascent procedure significantly increases the rate of convergence of MP, compared to the non-optimized discrete MP. At an equivalent convergence rate, the optimization allows reduction of the discretization density by an order of magnitude, resulting in significant computational gains.
Footnote 2: In the sense that this gradient ascent evolves on the manifold induced by the intrinsic dictionary geometry.
The paper is organized as follows. In Section 2, we introduce the notion of a parametric dictionary in the context of signal decomposition in an abstract Hilbert space. This dictionary is then envisioned as a Hilbert manifold, and we describe how its geometrical structure influences its parametrization using the tools of differential geometry. Section 3 surveys the definition of (weak) continuous MP, providing a theoretical optimal rate of convergence for further comparisons with other greedy decompositions. A "discretization autopsy" of this algorithm is performed in Section 4, and a theorem explaining how the dMP convergence depends on this sampling is proved. A simple but illustrative example of a 1-D dictionary, the wavelet (affine) dictionary, is then given. The optimization scheme announced above is developed in Section 5. After a review of gradient ascent optimization evolving on manifolds, the geometrically optimized MP is introduced and its theoretical rate of convergence analyzed in a second theorem. Finally, in Section 6, experiments are performed for 1-D and 2-D signal decompositions using dMP and gMP on various regular discretizations of dictionary parametrizations. We provide links to previous related works in Section 7 and conclude with possible extensions in Section 8.
## 2 Dictionary, Parametrization and Differential Geometry
Our object of interest throughout this paper is a general real \"signal\", i.e. a real function taking value on a measure space \\(X\\). More precisely, we assume \\(f\\) in the set of finite energy signals, i.e. \\(f\\in L^{2}(X,\\mathrm{d}\\mu)=\\{u:X\\to\\mathbb{R}\\,:\\,\\,\\|u\\|^{2}=\\int_{X}\\,|u(x)| ^{2}\\,\\,\\mathrm{d}\\mu(x)\\,\\,<\\,\\,\\infty\\}\\), for a certain integral measure \\(\\mathrm{d}\\mu(x)\\). Of course, the natural comparison of two functions \\(u\\) and \\(v\\) in \\(L^{2}(X,\\mathrm{d}\\mu)\\) is realized through the scalar product \\(\\langle u,v\\rangle_{L^{2}(X)}=\\langle u,v\\rangle\\triangleq\\int_{X}\\,u(x)\\,v( x)\\mathrm{d}\\mu(x)\\) making \\(L^{2}(X,\\mathrm{d}\\mu)\\) a Hilbert3 space where \\(\\|u\\|^{2}=\\langle u,u\\rangle\\).
Footnote 3: Assuming it _complete_, i.e. every Cauchy sequence converges in this space relatively to the norm \\(\\|\\cdot\\|^{2}=\\langle\\cdot,\\cdot\\rangle\\).
This very general framework can be specialized to 1-D signal or image decomposition where \\(X\\) is given respectively by \\(\\mathbb{R}\\) or \\(\\mathbb{R}^{2}\\), but also to more special spaces like the two dimensional sphere \\(S^{2}\\)[25] or the hyperboloid [26]. In the sequel, we will write simply \\(L^{2}(X)=L^{2}(X,\\mathrm{d}\\mu)\\).
In the following sections, we will _decompose_\\(f\\) over a highly redundant parametric _dictionary_ of real _atoms_. These are obtained from smooth transformations of a real mother function \\(g\\in L^{2}(X)\\) of unit norm. Formally, each atom is a function \\(g_{\\lambda}(x)=[U(\\lambda)g](x)\\in L^{2}(X)\\), for a certain isometric operator \\(U\\) parametrized by elements \\(\\lambda\\in\\Lambda\\) and such that \\(\\|g_{\\lambda}\\|=\\|g\\|=1\\). The _parametrization_ set \\(\\Lambda\\) is a continuous space where each \\(\\lambda\\in\\Lambda\\) corresponds to \\(P\\) continuous components \\(\\lambda=\\{\\lambda^{i}\\}_{0\\leq i\\leq P-1}\\) of different nature. For instance, in the case of 1-D signal or image analysis, \\(g\\) may be transformed by translation, modulation, rotation, or (anisotropic) dilation operations, each associated to one component \\(\\lambda^{i}\\) of \\(\\lambda\\). Our dictionary is then the set \\(\\mathrm{dict}(g,U,\\Theta)\\triangleq\\big{\\{}\\,g_{\\lambda}(x)=[U(\\lambda)g](x) :\\lambda\\in\\Theta\\,\\big{\\}}\\), for a certain subset \\(\\Theta\\subseteq\\Lambda\\). In the rest of the paper, we write \\(\\mathrm{dict}(\\Theta)=\\mathrm{dict}(g,U,\\Theta)\\), assuming \\(g\\) and \\(U\\) implicitly given by the context. For the case \\(\\Theta=\\Lambda\\), we write \\(\\mathcal{D}=\\mathrm{dict}(\\Lambda)\\).
We assume that \\(g\\) is twice differentiable over \\(X\\) and that the functions \\(g_{\\lambda}(x)\\) are twice differentiable on each of the \\(P\\) components of \\(\\lambda\\). In the following, we write \\(\\partial_{i}\\) for the partial derivative with respect to \\(\\lambda^{i}\\), i.e. \\(\\frac{\\partial}{\\partial\\lambda^{i}}\\), of any element (e.g. \\(g_{\\lambda}(x)\\), \\(\\langle g_{\\lambda},u\\rangle\\), ) depending on \\(\\lambda\\), and \\(\\partial_{ij}=\\partial_{i}\\partial_{j}\\). From the smoothness of \\(U\\) and \\(g\\), we have \\(\\partial_{ij}=\\partial_{ji}\\) on quantities built from these two ingredients.
Let us now analyze the geometrical structure of \\(\\Lambda\\). Rather than an artificial _Euclidean distance_\\(d_{\\mathcal{E}}(\\lambda_{a},\\lambda_{b})^{2}\\ \\triangleq\\ \\sum_{i}(\\lambda_{a}^{i}- \\lambda_{b}^{i})^{2}\\) between \\(\\lambda_{a},\\lambda_{b}\\in\\Lambda\\), we use a distance introduced by the dictionary \\(\\mathcal{D}\\) itself seen as a \\(P\\)-dimensional parametric submanifold of \\(L^{2}(X)\\) (or a _Hilbert manifold_4[27]). The _dictionary distance_\\(d_{\\mathcal{D}}\\) is thus the distance in the embedding space \\(L^{2}(X)\\), i.e. \\(d_{\\mathcal{D}}(\\lambda_{a},\\lambda_{b})\\ \\triangleq\\ \\|g_{\\lambda_{a}}-g_{ \\lambda_{b}}\\|\\).
Footnote 4: This is a special case of Image Appearance Manifold (IAM) defined for instance in [21, 22]. It is also closely linked to manifolds of parametric probability density function associated to the Fisher information metric [23].
From this embedding, we can define an intrinsic distance in \\(\\mathcal{D}\\), namely the _geodesic distance_. The latter has been used in a similar context in the work of Grimes and Donoho [22], and we follow their approach here. For our two points \\(\\lambda_{a},\\lambda_{b}\\), assume that we have a smooth curve \\(\\gamma:[0,1]\\to\\Lambda\\) with \\(\\gamma(t)=\\left(\\gamma^{0}(t),\\cdots,\\gamma^{P-1}(t)\\right)\\), such that \\(\\gamma(0)=\\lambda_{a}\\) and \\(\\gamma(1)=\\lambda_{b}\\). The length \\(\\mathcal{L}(\\gamma)\\) of this curve in \\(\\mathcal{D}\\) is thus given by \\(\\mathcal{L}(\\gamma)\\triangleq\\int_{0}^{1}\\|\\frac{\\mathrm{d}}{\\mathrm{d}t}\\,g_{\\gamma(t)}\\|\\,\\mathrm{d}t\\), assuming that \\(g_{\\gamma(t)}\\) is differentiable5 with respect to \\(t\\).
Footnote 5: Another definition of \\(\\mathcal{L}\\) exists for non-differentiable curves. See for instance [22].
The _geodesic distance_ between \\(\\lambda_{a}\\) and \\(\\lambda_{b}\\) in \\(\\Lambda\\) is the length of shortest path between these two points, i.e.
\\[d_{\\mathcal{G}}(\\lambda_{a},\\lambda_{b})\\ \\triangleq\\ \\inf_{\\gamma(\\lambda_{a} \\to\\lambda_{b})}\\,\\int_{0}^{1}\\|\\frac{\\mathrm{d}}{\\mathrm{d}t}\\,g_{\\gamma(t)} \\|\\,\\mathrm{d}t, \\tag{1}\\]
where \\(\\gamma(\\lambda_{a}\\to\\lambda_{b})\\) is any differentiable curve \\(\\gamma(t)\\) linking \\(\\lambda_{a}\\) to \\(\\lambda_{b}\\) for \\(t\\) equals to \\(0\\) and \\(1\\) respectively.
We denote by \\(\\gamma_{\\lambda_{a}\\lambda_{b}}\\) the optimal _geodesic_ curve joining \\(\\lambda_{a}\\) and \\(\\lambda_{b}\\) on the manifold \\(\\mathcal{D}\\), i.e. such that \\(\\mathcal{L}(\\gamma_{\\lambda_{a}\\lambda_{b}})=d_{\\mathcal{G}}(\\lambda_{a}, \\lambda_{b})\\), and we assume henceforth that it is always possible to define this curve between two points of \\(\\Lambda\\). Note that by construction, \\(d_{\\mathcal{G}}(\\lambda_{a},\\lambda_{b})=d_{\\mathcal{G}}(\\lambda_{a},\\lambda^ {\\prime})+d_{\\mathcal{G}}(\\lambda^{\\prime},\\lambda_{b})\\), for all \\(\\lambda^{\\prime}\\) on the curve \\(\\gamma_{\\lambda_{a}\\lambda_{b}}(t)\\).
In the language of differential geometry, the parameter space \\(\\Lambda\\) is a Riemannian manifold \\(\\mathcal{M}=(\\Lambda,\\mathcal{G}_{ij})\\) with metric \\(\\mathcal{G}_{ij}(\\lambda)=\\langle\\partial_{i}g_{\\lambda},\\partial_{j}g_{ \\lambda}\\rangle\\). Indeed, for any differentiable curve \\(\\gamma:t\\in[-\\delta,\\delta]\\to\\gamma(t)\\in\\Lambda\\) with \\(\\delta>0\\) and \\(\\gamma(0)=\\lambda\\), we have
\\[\\|\\tfrac{\\mathrm{d}}{\\mathrm{d}t}\\,g_{\\gamma(t)}\\big{|}_{t=0}\\|^{2}\\ =\\ \\dot{\\gamma}^{i}(0)\\,\\dot{\\gamma}^{j}(0)\\,\\mathcal{G}_{ij}(\\lambda), \\tag{2}\\]
with \\(\\dot{u}(t)=\\frac{\\mathrm{d}}{\\mathrm{d}t}u(t)\\), and where Einstein's summation convention is used for simplicity6.
Footnote 6: Namely, a summation in an expression is defined implicitly each time the same index is repeated once as a subscript and once as a superscript, the range of summation being always \\([0,P-1]\\), so that for instance the expression \\(a^{i}b_{i}\\) reads \\(\\sum_{i=0}^{P-1}a^{i}b_{i}\\).
The vector \\(\\xi^{i}=\\dot{\\gamma}^{i}(0)\\) is by definition a vector in the tangent space \\(T_{\\lambda}\\Lambda\\) of \\(\\Lambda\\) in \\(\\lambda\\). The meaning of relation (2) is that the metric \\(\\mathcal{G}_{ij}(\\lambda)\\) allows the definitions of a scalar product and a norm in each \\(T_{\\lambda}\\Lambda\\). The norm of a vector \\(\\xi\\in T_{\\lambda}\\Lambda\\) is therefore noted \\(|\\xi|^{2}=|\\xi|_{\\lambda}^{2}\\triangleq\\xi^{i}\\xi^{j}\\mathcal{G}_{ij}(\\lambda)\\), with the correspondence \\(\\|\\frac{\\mathrm{d}}{\\mathrm{d}t}\\,g_{\\gamma(t)}|_{t=0}\\|=|\\dot{\\gamma}|\\). For the consistency of further Riemannian geometry developments, we assume that our dictionary \\(\\mathcal{D}\\) is _non-degenerate_, i.e. that it induces a positive definite metric \\(\\mathcal{G}_{ij}\\). Appendix A provides additional details.
We conclude this section with the _arc length_ (or _curvilinear_) parametrization \"\\(s\\)\" [28] of a curve \\(\\gamma(s)\\). It is such that \\(|\\gamma^{\\prime}|^{2}\\triangleq\\gamma^{\\prime i}(s)\\,\\gamma^{\\prime j}(s)\\, \\mathcal{G}_{ij}(\\gamma(s))=1\\), where \\(u^{\\prime}(s)=\\frac{\\mathrm{d}}{\\mathrm{d}s}u(s)\\). From its definition, the curvilinear parameter \\(s\\) is the one which measures at each point \\(\\gamma(s)\\) the length of the segment of curve already travelled on \\(\\gamma\\) from \\(\\gamma(0)\\). Therefore, in this parametrization, \\(\\lambda_{a}=\\gamma_{\\lambda_{a}\\lambda_{b}}(0)\\) and \\(\\lambda_{b}=\\gamma_{\\lambda_{a}\\lambda_{b}}(d_{\\mathcal{G}}(\\lambda_{a},\\lambda_{ b}))\\).
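To make these notions concrete, the short Python sketch below (our own illustration, not part of the paper's formal development) instantiates a two-parameter translation/dilation family of Gaussian atoms, estimates the metric \\(\\mathcal{G}_{ij}(\\lambda)=\\langle\\partial_{i}g_{\\lambda},\\partial_{j}g_{\\lambda}\\rangle\\) by central finite differences, and checks relation (2) by comparing the \\(L^{2}\\) norm of the derivative of \\(g_{\\gamma}\\) along a straight parameter curve with the quadratic form \\(\\dot{\\gamma}^{i}\\dot{\\gamma}^{j}\\mathcal{G}_{ij}\\). All function names and numerical settings are our own choices.

```python
import numpy as np

# Discretized time axis standing in for L^2(R, dt); purely illustrative.
t = np.linspace(-30.0, 30.0, 6001)
dt = t[1] - t[0]

def atom(b, a):
    """Unit-norm Gaussian atom g_{(b,a)}(t); our stand-in mother function."""
    g = np.exp(-0.5 * ((t - b) / a) ** 2)
    return g / np.sqrt(np.sum(g ** 2) * dt)

def metric(lam, h=1e-5):
    """G_ij(lam) = <d_i g_lam, d_j g_lam>, via central finite differences."""
    lam = np.asarray(lam, dtype=float)
    d = []
    for i in range(2):
        e = np.zeros(2); e[i] = h
        d.append((atom(*(lam + e)) - atom(*(lam - e))) / (2 * h))
    return np.array([[np.sum(x * y) * dt for y in d] for x in d])

# Straight parameter curve gamma(s) = lam + s * v; both sides of relation (2).
lam, v, h = np.array([0.5, 1.5]), np.array([1.0, 1.0]), 1e-5
dgds = (atom(*(lam + h * v)) - atom(*(lam - h * v))) / (2 * h)
lhs = np.sqrt(np.sum(dgds ** 2) * dt)   # || d/ds g_gamma(s) || in L^2
rhs = np.sqrt(v @ metric(lam) @ v)      # sqrt( v^i v^j G_ij(lam) )
print(lhs, rhs)                         # agree up to discretization error
```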
## 3 Matching Pursuit in Continuous Dictionary
Let us assume that we want to decompose a function \\(f\\in L^{2}(X)\\) into simpler elements (atoms) coming from a dictionary \\(\\mathrm{dict}(\\Theta)\\), given a possibly uncountable and infinite subset \\(\\Theta\\subseteq\\Lambda\\). Our general aim is thus to find a set of _coefficients_ \\(\\{c_{m}\\}\\) such that \\(f(x)\\) is equal to or well approximated by \\(f_{\\mathrm{app}}(x)=\\sum_{m}c_{m}\\,g_{\\lambda_{m}}(x)\\) with a finite set of atoms \\(\\{g_{\\lambda_{m}}\\}\\subset\\mathrm{dict}(\\Theta)\\).
Formally, for a given _weakness_ factor \\(\\alpha\\in(0,1]\\), a _General Weak\\((\\alpha)\\) Matching Pursuit_ decomposition of \\(f\\)[2, 29], written \\(\\mathrm{MP}(\\Theta,\\alpha)\\), in the dictionary \\(\\mathrm{dict}(\\Theta)\\) is performed through the following _greedy7_ algorithm :
Footnote 7: Greedy in the sense that it does not solve a global \\(\\ell_{0}\\) or \\(\\ell_{1}\\) minimization [1] to find the coefficients \\(c_{m}\\) of \\(f_{\\mathrm{app}}\\) above, but works iteratively by solving at each iteration step a local and smaller minimization problem.
\\[R^{0}f=f,\\ A^{0}f=0,\\ \\mbox{(initialization)},\\] \\[R^{m+1}f\\ =\\ R^{m}f\\ -\\ \\left\\langle g_{\\lambda_{m+1}},R^{m}f \\right\\rangle g_{\\lambda_{m+1}}, \\tag{3a}\\] \\[A^{m+1}f\\ =\\ A^{m}f\\ +\\ \\left\\langle g_{\\lambda_{m+1}},R^{m}f \\right\\rangle g_{\\lambda_{m+1}},\\] (3b) \\[\\mbox{with}:\\ \\langle g_{\\lambda_{m+1}},R^{m}f\\rangle^{2}\\ \\geq\\ \\alpha^{2}\\,\\sup_{\\lambda\\in\\Theta}\\,\\langle g_{\\lambda},R^{m}f\\rangle^{2}. \\tag{3c}\\]
The quantity \\(R^{m+1}f\\) is the _residual_ of \\(f\\) at iteration \\(m+1\\). Since it is orthogonal to atom \\(g_{\\lambda_{m+1}}\\), \\(\\|R^{m+1}f\\|^{2}=\\|R^{m}f\\|^{2}-\\langle g_{\\lambda_{m+1}},R^{m}f\\rangle^{2} \\leq\\|R^{m}f\\|^{2}\\), so that the energy \\(\\|R^{m}f\\|^{2}\\) is non-increasing. The function \\(A^{m}f\\) is the \\(m\\)-term _approximation_ of \\(f\\) with \\(A^{m}f\\ =\\ \\sum_{k=0}^{m-1}\\ \\langle g_{\\lambda_{k+1}},R^{k}f\\rangle\\ g_{ \\lambda_{k+1}}\\).
Notice that the _selection rule_ (3c) concerns the square of the real scalar product \\(\\langle g_{\\lambda},R^{m}f\\rangle\\). Matching Pursuit atom selection is typically defined over the absolute value \\(|\\langle g_{\\lambda},R^{m}f\\rangle|\\). However, we prefer this equivalent quadratic formulation first to avoid the abrupt behavior of the absolute value when the scalar product crosses zero, and second for consistency with the quadratic optimization framework to be explained in Section 5. Finally, to allow the non-weak case where \\(\\alpha=1\\), we assume that a maximizer \\(g_{u}\\in\\mathrm{dict}(\\Theta)\\) of \\(\\langle g,u\\rangle^{2}\\) always exists for any \\(u\\in L^{2}(X)\\).
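As a minimal illustration of the recursion (3a)-(3c), the following Python sketch (ours) runs a Weak\\((\\alpha)\\) MP over a finite dictionary stored as the unit-norm columns of a matrix; the weak rule (3c) is implemented by accepting any atom whose squared correlation reaches \\(\\alpha^{2}\\) times the current maximum. All names and the toy data are our own choices, not the paper's.

```python
import numpy as np

def weak_mp(f, D, alpha=1.0, n_iter=100):
    """Weak(alpha) Matching Pursuit, a sketch of (3a)-(3c) for a finite D."""
    R = f.astype(float).copy()      # residual, R^0 f = f
    A = np.zeros_like(R)            # approximation, A^0 f = 0
    for _ in range(n_iter):
        scores = (D.T @ R) ** 2     # S_u(k) = <g_k, R^m f>^2 for every atom
        k = np.flatnonzero(scores >= alpha ** 2 * scores.max())[0]  # rule (3c)
        c = D[:, k] @ R             # coefficient <g_k, R^m f>
        R = R - c * D[:, k]         # update (3a)
        A = A + c * D[:, k]         # update (3b)
    return A, R

# Toy usage: a random unit-norm dictionary in R^64.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 512))
D /= np.linalg.norm(D, axis=0)
f = rng.standard_normal(64)
A, R = weak_mp(f, D, alpha=0.9)
print(np.linalg.norm(R) / np.linalg.norm(f))   # residual energy decays with m
```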
If \\(\\Theta\\) is uncountable, our general Matching Pursuit algorithm is named _continuous Matching Pursuit_. In particular, for \\(\\Theta=\\Lambda\\), we write \\(\\mathrm{cMP}(\\alpha)=\\mathrm{MP}(\\Lambda,\\alpha)\\). The _rate of convergence_ (or convergence) of the \\(\\mathrm{cMP}(\\alpha)\\), characterized by the rate of decay of \\(\\|R^{m}f\\|\\) with \\(m\\), can be assessed in certain particular cases. For instance, if there exists a Hilbert space \\(\\mathcal{S}\\subseteq L^{2}(X)\\) containing \\(\\mathcal{D}=\\mathrm{dict}(\\Lambda)\\) such that
\\[\\beta^{2}\\ =\\ \\inf_{u\\in S,\\ \\|u\\|=1}\\ \\sup_{\\lambda\\in\\Lambda}\\ \\langle g_{\\lambda},u\\rangle^{2}\\ >\\ 0, \\tag{4}\\]
then the \\(\\mathrm{cMP}(\\alpha)\\) converges inside \\(\\mathcal{S}\\). In fact, the convergence is exponential [30] since \\(\\langle g_{\\lambda_{m}},R^{m-1}f\\rangle^{2}\\geq\\alpha^{2}\\beta^{2}\\,\\|R^{m-1}f\\|^{2}\\) and \\(\\|R^{m}f\\|^{2}\\leq\\|R^{m-1}f\\|^{2}-\\alpha^{2}\\beta^{2}\\|R^{m-1}f\\|^{2}\\leq(1-\\alpha^{2}\\beta^{2})^{m}\\|f\\|^{2}\\). We name \\(\\beta=\\beta(\\mathcal{S},\\mathcal{D})\\) the _greedy factor_ since it characterizes the MP convergence (greediness).
The existence of the greedy factor \\(\\beta\\) is obvious, for instance, for a finite-dimensional space [30], i.e. \\(f\\in\\mathbb{C}^{N}\\), with a finite dictionary (finite number of atoms).
For a finite dictionary in an infinite-dimensional space, such as \\(L^{2}(X)\\), the existence of \\(\\beta\\) is not guaranteed over the whole space. However, it does exist on the space of functions given by linear combinations of dictionary elements, the number of terms being restricted by the dictionary _(cumulative) coherence_ [29].
In the case of an infinite dictionary in an infinite-dimensional space where the greedy factor vanishes, \\(\\mathrm{cMP}(\\alpha)\\) convergence is characterized differently on the subspace of linear combinations of countable subsets of dictionary elements. This question is addressed separately in a companion Technical Report [31] to this article. We now consider only the case where a non-zero greedy factor exists to characterize the rate of convergence of MP using continuous and discrete dictionaries.
## 4 Discretization effects of Continuous Dictionary
The greedy algorithm \\(\\mathrm{cMP}(\\alpha)\\) using the dictionary \\(\\mathcal{D}\\) is obviously numerically unachievable because of the intrinsic continuity of its main ingredient, namely the parameter space \\(\\Lambda\\). Any computer implementation needs at least to discretize the parametrization of the dictionary, more or less densely, leading to a countable set \\(\\Lambda_{\\mathrm{d}}\\subset\\Lambda\\). This new parameter space leads naturally to the definition of a countable subdictionary \\(\\mathcal{D}_{\\mathrm{d}}=\\mathrm{dict}(\\Lambda_{\\mathrm{d}})\\). Henceforth, elements of \\(\\Lambda_{\\mathrm{d}}\\) are labelled with Roman letters, e.g. \\(k\\), to distinguish them from the Greek-labelled elements of the continuous set \\(\\Lambda\\), e.g. \\(\\lambda\\).
For a weakness factor \\(\\alpha\\in(0,1]\\), the _discrete Weak\\((\\alpha)\\) Matching Pursuit_ algorithm, or \\(\\mathrm{dMP}(\\alpha)\\), of a function \\(f\\in L^{2}(X)\\) over \\(\\mathcal{D}_{\\mathrm{d}}\\) is naturally defined as \\(\\mathrm{dMP}(\\alpha)=\\mathrm{MP}(\\Lambda_{\\mathrm{d}},\\alpha)\\). The replacement of \\(\\Lambda\\) by \\(\\Lambda_{\\mathrm{d}}\\) in the MP algorithm (3) leads obviously to the following question that we address in the next section.
**Question 1**.: _How does the MP rate of convergence evolve when the parametrization of a dictionary is discretized and what are the quantities that control (or bound) this evolution?_
### Discretization Autopsy
By working with \\(\\mathcal{D}_{\\mathrm{d}}\\) instead of \\(\\mathcal{D}\\), the atoms selected at each iteration of \\(\\mathrm{dMP}(\\alpha)\\) are of course less optimal than those available in the continuous framework. Answering Question 1 requires a quantitative measure of the induced loss in the MP coefficients. More concretely, defining the _score function_ \\(S_{u}(\\lambda)=\\langle g_{\\lambda},u\\rangle^{2}\\) for some \\(u\\in L^{2}(X)\\), we must analyze the difference between a maximum of \\(S_{u}\\) computed over \\(\\Lambda\\) and that obtained from \\(\\Lambda_{\\mathrm{d}}\\). This function \\(u\\) will next be identified with the residual of \\(\\mathrm{dMP}(\\alpha)\\) at any iteration to characterize the global change in convergence.
We propose to base our analysis on the geometric tools described in Section 2.
**Definition 1**.: _The value \\(S_{u}(\\lambda_{a})\\) is critical in the direction of \\(\\lambda_{b}\\) if, given the geodesic \\(\\gamma=\\gamma_{\\lambda_{a}\\lambda_{b}}\\) in the manifold \\(\\mathcal{M}=(\\Lambda,\\mathcal{G}_{ij})\\), \\(\\frac{\\mathrm{d}}{\\mathrm{d}s}S_{u}(\\gamma(s))|_{s=0}=0\\), where \\(\\gamma(0)=\\lambda_{a}\\)._
Notice that if \\(S_{u}(\\lambda_{a})\\) is critical in the direction of \\(\\lambda_{b}\\), \\(\\gamma^{\\prime i}(0)\\,\\partial_{i}S_{u}(\\lambda_{a})=0\\). An _umbilical_ point, for which \\(\\partial_{i}S_{u}(\\lambda_{a})=0\\) for all \\(i\\), is obviously critical in any direction. An umbilical point corresponds geometrically to a maximum, a minimum or a saddle point of \\(S_{u}\\) relative to \\(\\Lambda\\).
**Proposition 1**.: _Given \\(u\\in L^{2}(X)\\), if \\(S_{u}(\\lambda_{a})\\) is critical in the direction of \\(\\lambda_{b}\\) for \\(\\lambda_{a},\\lambda_{b}\\in\\Lambda\\), then for some \\(r\\in(0,d_{\\mathcal{G}}(\\lambda_{a},\\lambda_{b}))\\),_
\\[|S_{u}(\\lambda_{a})-S_{u}(\\lambda_{b})|\\ \\leq\\ \\|u\\|^{2}\\,d_{\\mathcal{G}}(\\lambda_{a},\\lambda_{b})^{2}\\,\\Big{(}1+\\Big{\\|}\\frac{\\mathrm{d}^{2}g_{\\gamma}}{\\mathrm{d}s^{2}}\\Big{|}_{s=r}\\Big{\\|}\\Big{)}, \\tag{5}\\]
_where \\(\\gamma(s)=\\gamma_{\\lambda_{a}\\lambda_{b}}(s)\\) is the geodesic in \\(\\mathcal{M}\\) linking \\(\\lambda_{a}\\) to \\(\\lambda_{b}\\)._

Proof.: Let us define the twice differentiable function \\(\\psi(s)\\triangleq S_{u}(\\gamma(s))\\) on \\(s\\in[0,\\eta]\\), with \\(\\eta\\triangleq d_{\\mathcal{G}}(\\lambda_{a},\\lambda_{b})\\). A second order Taylor development of \\(\\psi\\) gives, for a certain \\(r\\in(0,s)\\), \\(\\psi(s)=\\psi(0)+s\\,\\psi^{\\prime}(0)+\\frac{1}{2}s^{2}\\,\\psi^{\\prime\\prime}(r)\\). Since \\(\\psi^{\\prime}(0)=\\gamma^{\\prime i}(0)\\,\\partial_{i}S_{u}(\\lambda_{a})=0\\) by hypothesis, we have in \\(s=\\eta\\), \\(|\\psi(0)-\\psi(\\eta)|=|S_{u}(\\lambda_{a})-S_{u}(\\lambda_{b})|\\leq\\frac{1}{2}\\,\\eta^{2}\\,|\\psi^{\\prime\\prime}(r)|\\). However, for any \\(s\\), \\(|\\psi^{\\prime\\prime}(s)|=2\\,|\\big{\\langle}\\frac{\\mathrm{d}}{\\mathrm{d}s}g_{\\gamma(s)},u\\big{\\rangle}^{2}+\\langle g_{\\gamma(s)},u\\rangle\\,\\big{\\langle}\\frac{\\mathrm{d}^{2}}{\\mathrm{d}s^{2}}g_{\\gamma(s)},u\\big{\\rangle}|\\leq 2\\,(\\|\\frac{\\mathrm{d}}{\\mathrm{d}s}g_{\\gamma(s)}\\|^{2}+\\|\\frac{\\mathrm{d}^{2}}{\\mathrm{d}s^{2}}g_{\\gamma(s)}\\|)\\,\\|u\\|^{2}\\), using the Cauchy-Schwarz (CS) inequality in \\(L^{2}(X)\\) in the last equation. The result follows from the fact that \\(\\|\\frac{\\mathrm{d}}{\\mathrm{d}s}g_{\\gamma(s)}\\|=1\\).
The previous proposition is particularly important since it bounds the loss in coefficient value when we decide to choose \\(S_{u}(\\lambda_{b})\\) instead of the optimal \\(S_{u}(\\lambda_{a})\\), as a function of the geodesic distance \\(d_{\\mathcal{G}}(\\lambda_{a},\\lambda_{b})\\) between the two parameters. To obtain a more satisfactory control of this difference, we need however a new property of the dictionary.
We start by defining the _principal curvature_ in the point \\(\\lambda\\in\\Lambda\\) as
\\[\\mathcal{K}_{\\lambda}\\ \\triangleq\\ \\sup_{\\xi\\colon\\ |\\xi|=1}\\,\\|\\frac{ \\mathrm{d}^{2}}{\\mathrm{d}s^{2}}\\,g_{\\gamma_{\\xi}(s)}\\big{|}_{s=0}\\|, \\tag{6}\\]
where \\(\\gamma_{\\xi}\\) is the unique geodesic in \\(\\mathcal{M}\\) starting from \\(\\lambda=\\gamma_{\\xi}(0)\\) and with \\(\\gamma^{\\prime}_{\\xi}(0)=\\xi\\), for a direction \\(\\xi\\) of unit norm in \\(T_{\\lambda}\\Lambda\\).
**Definition 2**.: _The condition number of a dictionary \\(\\mathcal{D}\\) is the number \\(\\mathcal{K}^{-1}\\) obtained from_
\\[\\mathcal{K}\\ \\triangleq\\ \\sup_{\\lambda\\in\\Lambda}\\,\\mathcal{K}_{\\lambda}. \\tag{7}\\]
_If \\(\\mathcal{K}\\) does not exist (unbounded \\(\\mathcal{K}_{\\lambda}\\)), by extension, \\(\\mathcal{D}\\) is said to be of zero condition number._
The notion of condition number has been introduced by Niyogi et al. [32] to bound the local curvature of an embedded manifold8 in its ambient space, and to characterize its self-avoidance. Essentially, it is the inverse of the maximum radius of a sphere that, when placed tangent to the manifold at any point, intersects the manifold only at that point [33, 34]. Our quantity \\(\\mathcal{K}^{-1}\\) is then by construction a similar notion for the dictionary \\(\\mathcal{D}\\) seen as a manifold in \\(L^{2}(X)\\). However, it does not actually prevent manifold self-crossing at large distances, owing to the locality of our differential analysis9.
Footnote 8: In their work, the condition number, named there \\(\\tau^{-1}\\), of a manifold \\(\\mathcal{M}^{\\prime}\\) measures the maximal “thickness” \\(\\tau\\) of the _normal bundle_, the union of all the orthogonal complement of every tangent plane at every point of the manifold.
Footnote 9: A careful study of local self-avoidance of well-conditioned dictionary would have to be considered but this is beyond the scope of this paper.
**Proposition 2**.: _For a dictionary \\(\\mathcal{D}=\\mathrm{dict}(\\Lambda)\\),_
\\[1\\ \\leq\\ \\mathcal{K}\\ \\leq\\ \\sup_{\\lambda\\in\\Lambda}\\ \\Big{[}\\,\\big{\\langle}\\partial_{ij}\\,g_{\\lambda},\\partial_{kl}\\,g_{ \\lambda}\\big{\\rangle}\\,\\mathcal{G}^{ik}\\,\\mathcal{G}^{jl}\\,\\Big{]}^{\\frac{1}{ 2}}\\,, \\tag{8}\\]
_where \\(\\mathcal{G}^{ij}=\\mathcal{G}^{ij}(\\lambda)\\) is the inverse10 of \\(\\mathcal{G}_{ij}\\)._

The proof is given in Appendix B since it uses some elements of differential geometry not essential to the core of this paper. The interested reader will also find there a slightly lower (i.e. tighter) bound than the one presented in (8), exploiting covariant derivatives, the Laplace-Beltrami operator and the scalar curvature of \\(\\mathcal{M}\\) [28]. We can now state the following corollary of Proposition 1.
**Corollary 1**.: _In the conditions of Proposition 1, if \\(\\mathcal{D}\\) has a non-zero condition number \\(\\mathcal{K}^{-1}\\), then_
\\[|S_{u}(\\lambda_{a})-S_{u}(\\lambda_{b})|\\quad\\leq\\quad\\|u\\|^{2}\\,d_{\\mathcal{G} }(\\lambda_{a},\\lambda_{b})^{2}\\,\\big{(}1+\\mathcal{K}\\big{)}. \\tag{9}\\]
Therefore, in the \\(\\mathrm{dMP}(\\alpha)\\) decomposition of \\(f\\) based on \\(\\mathcal{D}_{\\mathrm{d}}\\), even if at each iteration the exact position of the continuous optimal atom of \\(\\mathcal{D}\\) is not known, we are now able to estimate the convergence rate of this MP provided we introduce a new quantity characterizing the set \\(\\Lambda_{\\mathrm{d}}\\).
**Definition 3**.: _The density radius \\(\\rho_{\\mathrm{d}}\\) of a countable parameter space \\(\\Lambda_{\\mathrm{d}}\\subset\\Lambda\\) is the value_

\\[\\rho_{\\mathrm{d}}\\ =\\ \\sup_{\\lambda\\in\\Lambda}\\,\\inf_{k\\in\\Lambda_{\\mathrm{d}}}\\ d_{\\mathcal{G}}(\\lambda,\\,k). \\tag{10}\\]
_We say that \\(\\Lambda_{\\mathrm{d}}\\) covers \\(\\Lambda\\) with a radius \\(\\rho_{\\mathrm{d}}\\)._
This radius characterizes the density of \\(\\Lambda_{\\mathrm{d}}\\) inside \\(\\Lambda\\). Given any \\(\\lambda\\) in \\(\\Lambda\\), one is guaranteed that there exists an element \\(k\\) of \\(\\Lambda_{\\mathrm{d}}\\) close to \\(\\lambda\\), i.e. within a geodesic distance \\(\\rho_{\\mathrm{d}}\\).
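Definition 3 can also be probed numerically. Since the length of any curve between two atoms is at least their straight-line distance in \\(L^{2}(X)\\), the dictionary distance \\(d_{\\mathcal{D}}(\\lambda,k)=\\|g_{\\lambda}-g_{k}\\|\\) of Section 2 never exceeds \\(d_{\\mathcal{G}}(\\lambda,k)\\), so the brute-force sup-inf below (our sketch, over an arbitrary dyadic grid and probe region for Gaussian affine atoms) yields a computable lower bound on the density radius \\(\\rho_{\\mathrm{d}}\\).

```python
import numpy as np

# Rough numerical probe (ours): chordal distances lower-bound d_G, hence the
# sup-inf below lower-bounds rho_d over the probed parameter region.
t = np.linspace(-40.0, 40.0, 8001)
dt = t[1] - t[0]

def atom(b, a):
    g = np.exp(-0.5 * ((t - b) / a) ** 2)
    return g / np.sqrt(np.sum(g ** 2) * dt)

# Dyadic grid k_{jn} = (n * b0 * 2^j, a0 * 2^j) with b0 = 0.5, a0 = 1.
grid = [atom(n * 0.5 * 2.0 ** j, 2.0 ** j)
        for j in range(-1, 3) for n in range(-8, 9)]
probe = [(b, a) for b in np.linspace(-2.0, 2.0, 21)
                for a in np.linspace(1.0, 3.0, 11)]

rho_lb = 0.0
for (b, a) in probe:
    g = atom(b, a)
    d = min(np.sqrt(np.sum((g - gk) ** 2) * dt) for gk in grid)
    rho_lb = max(rho_lb, d)         # sup over probes of inf over grid atoms
print(rho_lb)                       # lower bound on rho_d in this region
```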
**Theorem 1**.: _Given a Hilbert space \\(\\mathcal{S}\\subseteq L^{2}(X)\\) with a non zero greedy factor \\(\\beta\\), and a dictionary \\(\\mathcal{D}=\\mathrm{dict}(\\Lambda)\\subset S\\) of non-zero condition number \\(\\mathcal{K}^{-1}\\), if \\(\\Lambda_{\\mathrm{d}}\\) covers \\(\\Lambda\\) with radius \\(\\rho_{\\mathrm{d}}\\), and if \\(\\rho_{\\mathrm{d}}<\\beta/\\sqrt{1+\\mathcal{K}}\\), then, for functions belonging to \\(\\mathcal{S}\\), a \\(\\text{dMP}(\\alpha)\\) algorithm using \\(\\mathcal{D}_{\\mathrm{d}}=\\mathrm{dict}(\\Lambda_{\\mathrm{d}})\\) is bounded by the exponential convergence rate of a cMP\\((\\alpha^{\\prime})\\) using \\(\\mathcal{D}\\) with a weakness parameter given by \\(\\alpha^{\\prime}=\\alpha\\big{(}1-\\beta^{-2}\\,\\rho_{\\mathrm{d}}^{2}(1+\\mathcal{K}) \\big{)}^{1/2}<\\alpha\\)._
Proof.: Notice first that since \\(f\\in\\mathcal{S}\\) and \\(\\mathcal{D}_{\\mathrm{d}}\\subset\\mathcal{D}\\subset\\mathcal{S}\\), \\(R^{m}f\\in\\mathcal{S}\\) for all iteration \\(m\\) of dMP. Let us take the \\((m+1)^{\\mathrm{th}}\\) step of \\(\\mathrm{dMP}(\\alpha)\\) and write \\(u=R^{m}f\\). We have of course \\(\\|R^{m+1}f\\|^{2}=\\|u\\|^{2}-S_{u}(k_{m+1})\\), where \\(k_{m+1}\\) is the atom obtained from the selection rule (3c), i.e. \\(S_{u}(k_{m+1})\\geq\\alpha^{2}\\,\\sup_{k\\in\\Lambda_{\\mathrm{d}}}\\,S_{u}(k)\\).
Denote by \\(g_{\\tilde{\\lambda}}\\) the atom of \\(\\mathcal{D}\\) that best represents \\(R^{m}f\\), i.e. \\(S_{u}(\\tilde{\\lambda})=\\sup_{\\lambda\\in\\Lambda}S_{u}(\\lambda)\\). If \\(\\tilde{k}\\) is the closest element of \\(\\tilde{\\lambda}\\) in \\(\\Lambda_{\\mathrm{d}}\\), we have \\(d_{\\mathcal{G}}(\\tilde{\\lambda},\\tilde{k})\\leq\\rho_{\\mathrm{d}}\\) from the covering property of \\(\\Lambda_{\\mathrm{d}}\\), and Corollary 1 tells us that, with \\(u=R^{m}f\\), \\(|S_{u}(\\tilde{k})-S_{u}(\\tilde{\\lambda})|\\leq\\rho_{\\mathrm{d}}^{2}\\,(1+\\mathcal{K})\\,\\|u\\|^{2}\\), since \\(\\partial_{i}S_{u}(\\tilde{\\lambda})=0\\) for all \\(i\\).
Therefore, \\(S_{u}(\\tilde{k})\\geq S_{u}(\\tilde{\\lambda})-\\rho_{\\mathrm{d}}^{2}\\,(1+ \\mathcal{K})\\,\\|u\\|^{2}\\geq\\beta^{2}\\,\\|u\\|^{2}-\\rho_{\\mathrm{d}}^{2}\\,(1+ \\mathcal{K})\\,\\|u\\|^{2}\\), and \\(S_{u}(\\tilde{k})\\geq\\beta^{2}\\,\\big{(}1-\\beta^{-2}\\,\\rho_{\\mathrm{d}}^{2}(1+ \\mathcal{K})\\big{)}\\,\\|R^{m}f\\|^{2}\\), this last quantity being positive from the density requirement, i.e. \\(\\rho_{\\mathrm{d}}<\\beta/\\sqrt{1+\\mathcal{K}}\\).
In consequence, \\(S_{u}(k_{m+1})\\geq\\alpha^{2}\\,\\sup_{k\\in\\Lambda_{\\mathrm{d}}}\\,S_{u}(k)\\geq \\alpha^{2}\\,S_{u}(\\tilde{k})\\), implying \\(\\|R^{m+1}f\\|^{2}\\ =\\ \\|u\\|^{2}\\ -\\ S_{u}(k_{m+1})\\leq\\|u\\|^{2}\\ -\\ \\alpha^{2}\\,S_{u}( \\tilde{k})\\leq\\|u\\|^{2}\\,(1-\\alpha^{\\prime 2}\\beta^{2})\\), for \\(\\alpha^{\\prime}\\triangleq\\alpha\\big{(}1-\\beta^{-2}\\,\\rho_{\\mathrm{d}}^{2}(1+ \\mathcal{K})\\big{)}^{1/2}\\). So, \\(\\|R^{m+1}f\\|\\leq(1-\\alpha^{\\prime 2}\\beta^{2})^{(m+1)/2}\\|f\\|\\), which is the exponential convergence rate of the Weak\\((\\alpha)\\) Matching Pursuit in \\(\\mathcal{D}\\) when \\(\\beta\\) exists [29, 30].
The previous theorem has an interesting interpretation: a weak Matching Pursuit decomposition in a discrete dictionary corresponds, in terms of rate of convergence, to a weaker Matching Pursuit in the continuous dictionary from which the discrete one is extracted. For instance, with \\(\\beta=0.5\\), \\(\\mathcal{K}=3\\) and \\(\\rho_{\\mathrm{d}}=0.1\\) (so that the density requirement \\(\\rho_{\\mathrm{d}}<\\beta/\\sqrt{1+\\mathcal{K}}=0.25\\) holds), one gets \\(\\alpha^{\\prime}=\\alpha\\,(1-0.16)^{1/2}\\approx 0.92\\,\\alpha\\).
Concerning the hypotheses of the theorem, notice first that the existence of a greedy factor inside \\(\\mathcal{S}\\) concerns the continuous dictionary \\(\\mathcal{D}\\) and not the discrete one \\(\\mathcal{D}_{\\mathrm{d}}\\). Consequently, this condition is certainly easier to fulfil given the high redundancy of \\(\\mathcal{D}\\). Second, the _density requirement_, \\(\\rho_{\\mathrm{d}}<\\beta/\\sqrt{1+\\mathcal{K}}\\), is just sufficient since Proposition 1 does not state that it achieves the best bound for the control of \\(|S_{u}(\\lambda_{a})-S_{u}(\\lambda_{b})|\\) when \\(\\lambda_{a}\\) is critical. It is interesting to note that this inequality relates \\(\\rho_{\\mathrm{d}}\\), a quantity that characterizes the discretization \\(\\Lambda_{\\mathrm{d}}\\) in \\(\\Lambda\\), to \\(\\beta\\) and \\(\\mathcal{K}\\), which depend only on the dictionary. In particular, \\(\\beta\\) represents the density of \\(\\mathcal{D}\\) inside \\(\\mathcal{S}\\subset L^{2}(X)\\), and \\(\\mathcal{K}\\) depends on the shape of the atoms through the curvature of the dictionary.
Finally note that as \\(\\beta<1\\) (from definition (4)) and \\(\\mathcal{K}>1\\) (Prop. 2), the density radius must at least satisfy \\(\\rho_{\\mathrm{d}}<\\frac{1}{\\sqrt{2}}\\) to guarantee that our analysis is valid.
### A Simple Example of Discretization
Let us work on the _line_ with \\(L^{2}(X)=L^{2}(\\mathbb{R},\\mathrm{d}t)\\), and check whether the hypotheses of the previous theorem can be assessed in the simple case of an _affine_ (wavelet-like) dictionary.
We select a symmetric and real mother function \\(g\\in L^{2}(\\mathbb{R})\\) well localized around the origin, e.g. a Gaussian or a Mexican Hat, normalized such that \\(\\|g\\|=1\\). The parameter set \\(\\Lambda\\) is related to the _affine group_, the group of translations and dilations \\(G_{\\mathrm{aff}}\\). We identify \\(\\lambda=(\\lambda^{0}=b,\\lambda^{1}=a)\\), where \\(b\\in\\mathbb{R}\\) and \\(a>0\\) are the translation and the dilation parameters respectively. The dictionary \\(\\mathcal{D}\\) is defined from the atoms \\(g_{\\lambda}(t)=[U(\\lambda)g](t)=a^{-1/2}\\,g\\big{(}(t-b)/a\\big{)}\\), with \\(\\|g_{\\lambda}\\|=1\\) for all \\(\\lambda\\in\\Lambda\\). Our atoms are nothing but the wavelets of a Continuous Wavelet Transform if \\(g\\) is admissible [35], and \\(U\\) is actually the representation of the affine group on \\(L^{2}(\\mathbb{R})\\)[36].
In the technical report [31], we prove that the associated metric is given by \\(\\mathcal{G}_{ij}(\\lambda)=a^{-2}\\,W\\), where \\(W\\) is a constant \\(2\\times 2\\) diagonal matrix depending only of the mother function \\(g\\) and its first and second derivatives. Since \\(\\mathcal{G}^{ij}(\\lambda)=a^{2}\\,W^{-1}\\), \\(\\mathcal{K}\\) can be bounded by a constant also associated to \\(g\\) and its first and second order time derivatives.
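This scaling is easy to probe numerically. The sketch below (ours; a Gaussian stands in for the mother function \\(g\\), and the derivatives are taken by finite differences) prints \\(a^{2}\\,\\mathcal{G}_{ij}(b,a)\\) for several dilations; up to discretization error, the same constant diagonal matrix \\(W\\) should appear each time.

```python
import numpy as np

# Check numerically that G_ij(b, a) = a^{-2} W for affine Gaussian atoms.
t = np.linspace(-80.0, 80.0, 16001)
dt = t[1] - t[0]

def atom(b, a):
    g = np.exp(-0.5 * ((t - b) / a) ** 2)   # g_{(b,a)}(t), numerically normalized
    return g / np.sqrt(np.sum(g ** 2) * dt)

def metric(b, a, h=1e-5):
    d = [(atom(b + h, a) - atom(b - h, a)) / (2 * h),   # d/db g
         (atom(b, a + h) - atom(b, a - h)) / (2 * h)]   # d/da g
    return np.array([[np.sum(x * y) * dt for y in d] for x in d])

for a in (1.0, 2.0, 4.0):
    print(np.round(a ** 2 * metric(0.0, a), 5))   # a^2 G_ij: same W each time
```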
Finally, given the \\(\\tau\\)-adic parameter discretization
\\[\\Lambda_{\\mathrm{d}}=\\{k_{jn}=(b_{jn},a_{j})=(n\\,b_{0}\\,\\tau^{j},a_{0}\\tau^{ j}):\\;j,n\\in\\mathbb{Z}\\},\\]
with \\(\\tau>1\\) and \\(a_{0},b_{0}>0\\), the density radius \\(\\rho_{\\mathrm{d}}\\) of \\(\\Lambda_{\\mathrm{d}}\\) is shown to be bounded by \\(\\rho_{\\mathrm{d}}\\leq Ca_{0}^{-1}b_{0}+D\\ln\\tau\\), with \\(C\\) and \\(D\\) depending only of the norms of \\(g\\) and its first derivative.
This bound has two interesting properties. First, as for the grid \\(\\Lambda_{\\mathrm{d}}\\), it is invariant under the change \\((b_{0},a_{0})\\to(2b_{0},2a_{0})\\). Second, it is multiplied by \\(2^{n}\\) if we realize a \"zoom\" of factor \\(2^{n}\\) in our \\(\\tau\\)-adic grid, in other words, if \\((b_{0},\\tau)\\to(2^{n}\\,b_{0},\\tau^{2^{n}})\\). By the same argument, the true density radius has also to respect these rules. Therefore, we conjecture that \\(\\rho_{\\mathrm{d}}=C^{\\prime}a_{0}^{-1}b_{0}+D^{\\prime}\\,\\ln\\tau\\), for two particular (non computed) positive constants \\(C^{\\prime}\\) and \\(D^{\\prime}\\).
Unfortunately, even for this simple affine dictionary, the existence of \\(\\beta=\\beta(\\mathcal{S},\\mathcal{D})\\) is nontrivial to prove. However, if the greedy factor exists, the control of \\(\\tau\\), \\(a_{0}\\) and \\(b_{0}\\) over \\(\\rho_{\\mathrm{d}}\\) tells us that it is possible to satisfy the density requirement for convenient values of these parameters.
## 5 Optimization of Discrete Matching Pursuits
The previous section has shown that under a few assumptions a dMP is equivalent, in terms of rate of convergence, to a weaker cMP in the continuous dictionary from which the discrete one has been sampled.
**Question 2**.: _Can we improve the rate of convergence of a dMP, not by an obvious increase of the dictionary sampling, but by taking advantage of the dictionary geometry?_
Our approach is to introduce an optimization into the dMP scheme. In short, at each iteration, we propose to use the atoms of \\(\\mathcal{D}_{\\mathrm{d}}\\) as the seeds of an iterative optimization, such as the basic _gradient descent/ascent_, respecting the geometry of the manifold \\(\\mathcal{M}=(\\Lambda,\\mathcal{G}_{ij})\\).
Under the same density hypothesis as Theorem 1, we show that, in the worst case and if the number of optimization steps is large enough, an optimized discrete MP is again equivalent to a continuous MP, but with a weakness factor two times closer to unity than for the non-optimized discrete MP.
In this section, we first introduce the basic gradient descent/ascent on a manifold. Next, we show how this optimization can be introduced in the Matching Pursuit scheme to define the geometrically optimized MP (gMP). Finally, the rate of convergence of this method is analyzed.
### Gradient Ascent on Riemannian Manifolds
Given a function \\(u\\in L^{2}(X)\\) and \\(S_{u}(\\lambda)=\\langle g_{\\lambda},u\\rangle^{2}\\), we wish to find the parameter that maximizes \\(S_{u}\\), i.e.
\\[\\lambda_{*}\\ =\\ \\operatorname*{arg\\,max}_{\\lambda\\in\\Lambda}\\ S_{u}(\\lambda)\\] ( **P.1** )
Equivalently, by introducing \\(h_{u,\\lambda}=\\langle g_{\\lambda},u\\rangle\\,g_{\\lambda}\\), we can decide to find \\(\\lambda_{*}\\) by the minimization
\\[\\lambda_{*}\\ =\\ \\operatorname*{arg\\,min}_{\\lambda\\in\\Lambda}\\ \\|u-h_{u, \\lambda}\\|^{2}.\\] ( **P.2** )
If we are not afraid of getting stuck in local maxima (P.1) or minima (P.2) of these two non-convex problems, we can solve them by using well-known optimization techniques such as gradient descent/ascent, or Newton or Newton-Gauss optimizations.
We present here a basic gradient ascent for Problem (P.1) that respects the geometry of \\(\\mathcal{M}=(\\Lambda,\\mathcal{G}_{ij})\\) [37]. This method iteratively increases the value of \\(S_{u}\\) by following a path in \\(\\Lambda\\), composed of geodesic segments, driven by the gradient of \\(S_{u}\\).
Given a sequence of step sizes \\(t_{r}>0\\), the gradient ascent of \\(S_{u}\\) starting from \\(\\lambda_{0}\\in\\Lambda\\) is defined by the following induction [38]:
\\[\\phi_{0}(\\lambda_{0})\\ =\\ \\lambda_{0},\\quad\\phi_{r+1}(\\lambda_{0})\\ =\\ \\gamma\\big{(}t_{r},\\ \\phi_{r}(\\lambda_{0}),\\ \\xi_{r}(\\lambda_{0})\\,\\big{)},\\]
where \\(\\xi_{r}(\\lambda_{0})=|\
abla S_{u}(\\phi_{r}(\\lambda_{0}))|^{-1}\\,\
abla S_{u} (\\phi_{r}(\\lambda_{0}))\\) is the _gradient direction_ obtained from the gradient \\(\
abla^{i}S_{u}=\\mathcal{G}^{ij}\\,\\partial_{j}S_{u}\\), and \\(\\gamma(s,\\lambda_{0},\\xi_{0})\\) is the geodesic starting at \\(\\lambda_{0}=\\gamma(0,\\lambda_{0},\\xi_{0})\\) with the unit velocity \\(\\xi_{0}=\\frac{\\partial}{\\partial s}\\gamma(0,\\lambda_{0},\\xi_{0})\\). Notice that \\(\
abla^{i}\\) is the natural notion of gradient on a Riemannian manifold. Indeed, as for the Euclidean case, with \\(\
abla^{i}h\\triangleq\\mathcal{G}^{ij}\\,\\partial_{j}h\\) for \\(h\\in L^{2}(X)\\), given \\(w\\in T_{\\lambda}\\Lambda\\), the directional derivative \\(D_{w}h\\) is equivalent to \\(D_{w}h(\\lambda)\\triangleq w^{i}\\partial_{i}h(\\lambda)=\\langle\
abla h,w \\rangle_{\\lambda}\\triangleq w^{i}\\,\
abla^{j}h(\\lambda)\\,\\mathcal{G}_{ij}(\\lambda)\\), since \\(\\mathcal{G}^{ik}\\,\\mathcal{G}_{kj}=\\delta^{i}_{j}\\).
Practically, in our gradient ascent, we use the linear first order approximation of \\(\\gamma\\), i.e.
\\[\\phi_{r+1}(\\lambda)\\ =\\ \\phi_{r}(\\lambda)\\ +\\ t_{r}\\,\\xi_{r}(\\lambda), \\tag{11}\\]
valid for small values of \\(t_{r}\\) (error in \\(O(t_{r}^{2})\\)). This is actually an optimization method since \\(\\partial_{i}S_{u}(\\phi_{r}(\\lambda))\\,\\xi_{r}^{i}=|\\nabla S_{u}(\\phi_{r}(\\lambda))|>0\\) and \\(S_{u}(\\phi_{r+1}(\\lambda))=S_{u}(\\phi_{r}(\\lambda))+t_{r}|\\nabla S_{u}(\\phi_{r}(\\lambda))|+O(t_{r}^{2})\\geq S_{u}(\\phi_{r}(\\lambda))\\), for a convenient step size \\(t_{r}>0\\). At each step of this gradient ascent, the value \\(t_{r}\\) is chosen so that \\(S_{u}\\) is increased. This can be done for instance by a _line search_ algorithm [39]. From the positive definiteness of \\(\\mathcal{G}_{ij}\\) and \\(\\mathcal{G}^{ij}\\), a fixed point \\(\\phi_{r+1}(\\lambda)=\\phi_{r}(\\lambda)\\) is reached if \\(\\nabla^{i}S_{u}(\\phi_{r}(\\lambda))=\\partial_{i}S_{u}(\\phi_{r}(\\lambda))=0\\) for all \\(i\\).
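To make the procedure concrete, here is a minimal Python sketch of the first-order ascent (11) (a sketch only: our actual implementation is in MATLAB, see Section 6). The callables `S`, `grad_S` and `metric_inv` are hypothetical placeholders for \\(S_{u}\\), its Euclidean gradient \\(\\partial_{i}S_{u}\\) and the inverse metric \\(\\mathcal{G}^{ij}\\); the step-halving loop mirrors the line-search heuristic described in Section 6.

```python
import numpy as np

def riemannian_gradient_ascent(lam0, S, grad_S, metric_inv,
                               t0=0.1, max_steps=10, tol=1e-8):
    """First-order geodesic gradient ascent of S, sketch of Eq. (11)."""
    lam = np.asarray(lam0, dtype=float)
    for _ in range(max_steps):
        g = grad_S(lam)                           # Euclidean gradient d_i S(lam)
        nat = metric_inv(lam) @ g                 # natural gradient G^{ij} d_j S
        norm = np.sqrt(max(float(nat @ g), 0.0))  # |grad S| in the metric G
        if norm < tol:                            # fixed point: grad S = 0
            return lam
        xi = nat / norm                           # unit-speed ascent direction
        t, cur = t0, S(lam)
        for _ in range(10):                       # halve t until S increases
            cand = lam + t * xi                   # first-order geodesic step (11)
            if S(cand) > cur:
                lam = cand
                break
            t *= 0.5
        else:
            return lam                            # ascent condition never met
    return lam
```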
More sophisticated algorithms such as Newton or Gauss-Newton can be developed to solve Problem (P.2) on Riemannian manifolds [38, 40], even if, unlike in the flat case, a direct definition of the Hessian does not exist on differentiable manifolds. However, we will not use them here, as our aim is to prove that a dMP driven by the very basic optimization above already provides a better rate of convergence than the non-optimized dMP.
### Optimized Discrete Matching Pursuit Algorithm
Let us optimize each step of a discrete MP using the gradient ascent of the previous section.
**Definition** Given a sequence of positive integers \\(\\kappa_{m}\\) and a weakness factor \\(0<\\alpha\\leq 1\\), the geometrically optimized discrete matching pursuit (gMP(\\(\\alpha\\))) is defined by
\\[R^{0}f\\ =\\ f\\quad\\mbox{(initialization)}, \\tag{12a}\\]
\\[R^{m+1}f\\ =\\ R^{m}f\\ -\\ \\langle g_{\\nu_{m+1}},R^{m}f\\rangle\\,g_{\\nu_{m+1}}, \\tag{12b}\\]
\\[\\langle g_{\\nu_{m+1}},R^{m}f\\rangle^{2}\\ \\geq\\ \\alpha^{2}\\,\\sup_{k\\in\\Lambda_{\\rm d}}\\,\\langle g_{\\phi_{\\kappa_{m}}(k)},R^{m}f\\rangle^{2}. \\tag{12c}\\]
Notice that the best atom \\(g_{\\nu_{m+1}}\\) is selected in the set \\(\\Phi_{m}\\triangleq\\{g_{\\phi_{\\kappa_{m}}(k)}:k\\in\\Lambda_{\\rm d}\\}\\subset\\mathcal{D}\\). Elements of \\(\\Phi_{m}\\) are determined by applying the _optimization function_ \\(\\phi_{r}:\\Lambda_{\\rm d}\\to\\Lambda\\) of the gradient ascent defined in (11) to the elements of \\(\\Lambda_{\\rm d}\\). In consequence, \\(\\Phi_{m}\\) depends on \\(R^{m}f\\) and is thus different at each iteration \\(m\\).
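As an illustration, a minimal sketch of one possible gMP(1) loop follows; `grid_params`, `atom` and `optimize` are hypothetical callables standing for the grid \\(\\Lambda_{\\rm d}\\), the atom synthesis \\(\\lambda\\mapsto g_{\\lambda}\\), and the gradient ascent \\(\\phi_{\\kappa_{m}}\\). Note that this sketch follows the definition and optimizes every seed, whereas our experiments (Section 6) optimize only the best discrete atom for speed.

```python
import numpy as np

def gmp(f, grid_params, atom, optimize, n_iter=300):
    """Sketch of gMP(alpha = 1), Eqs. (12a)-(12c)."""
    residual = np.asarray(f, dtype=float).copy()       # R^0 f = f        (12a)
    expansion = []
    for _ in range(n_iter):
        # refine every grid seed against the current residual ...
        cands = [optimize(k, residual) for k in grid_params]
        # ... and keep the parameter with the largest squared correlation (12c)
        best = max(cands, key=lambda lam: float(atom(lam) @ residual) ** 2)
        g = atom(best)
        c = float(g @ residual)
        residual = residual - c * g                    # residual update  (12b)
        expansion.append((best, c))
    return expansion, residual
```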
**Rate of convergence** The following theorem characterizes the rate of convergence of the optimized Matching Pursuit defined in (12).
**Theorem 2**.: _Given the notations and the conditions of Theorem 1, there exists a sequence of positive integers \\(\\kappa_{m}\\) such that the gMP(\\(\\alpha\\)) decomposition of functions in \\(\\mathcal{S}\\subset L^{2}(X)\\), optimized over \\(\\kappa_{m}\\) steps at each iteration \\(m\\), is bounded by the same rate of convergence as a cMP(\\(\\alpha^{\\prime\\prime}\\)) using the corresponding continuous dictionary \\(\\mathcal{D}\\), with \\(\\alpha^{\\prime\\prime}=\\alpha(1-\\frac{1}{2}\\,\\beta^{-2}\\,\\rho_{\\rm d}^{2}\\,(1+\\mathcal{K}))^{1/2}\\leq\\alpha\\)._
In other words, for \\(\\alpha=1\\), a gMP is equivalent to a cMP with a weakness factor two times closer to unity than the one reached by a dMP in the same conditions. Before proving this result, let us introduce some new lemmata.
**Lemma 1**.: _Given a function \\(u\\in L^{2}(X)\\) and a dictionary \\(\\mathcal{D}\\) of non-zero condition number \\(\\mathcal{K}^{-1}\\), if \\(\\lambda_{a}\\) is critical in the direction of \\(\\lambda_{b}\\), and if \\(\\lambda_{b}\\) is critical in the direction of \\(\\lambda_{a}\\), i.e. \\(\\gamma^{\\prime i}(0)\\,\\partial_{i}S_{u}(\\lambda_{a})=\\gamma^{\\prime i}(d)\\, \\partial_{i}S_{u}(\\lambda_{b})=0\\) for \\(\\gamma=\\gamma_{\\lambda_{a}\\lambda_{b}}\\) the geodesic joining \\(\\lambda_{a}\\) and \\(\\lambda_{b}\\) and \\(d=d_{\\mathcal{G}}(\\lambda_{a},\\lambda_{b})\\), then_
\\[|S_{u}(\\lambda_{a})-S_{u}(\\lambda_{b})|\\ \\leq\\ \\tfrac{1}{2}\\,\\|u\\|^{2}\\,d_{\\mathcal{G}}(\\lambda_{a},\\lambda_{b})^{2}\\,(1+\\mathcal{K}). \\tag{13}\\]

Proof.: Without loss of generality, assume that \\(S_{u}(\\lambda_{a})\\geq S_{u}(\\lambda_{b})\\). If this is not the case, we can switch the labels \\(a\\) and \\(b\\). Let us define \\(\\lambda(\\theta)=\\gamma(\\theta d)\\) with \\(\\theta\\in[0,1]\\) on the geodesic \\(\\gamma=\\gamma_{\\lambda_{a}\\lambda_{b}}\\). We have \\(\\lambda_{a}=\\lambda(0)\\) and \\(\\lambda_{b}=\\lambda(1)\\). Using Corollary 1, the two following inequalities hold: \\(S_{u}(\\lambda(\\theta))\\geq S_{u}(\\lambda_{a})-\\|u\\|^{2}\\,d_{\\mathcal{G}}(\\lambda(\\theta),\\lambda_{a})^{2}\\,(1+\\mathcal{K})\\) and \\(S_{u}(\\lambda(\\theta))\\leq S_{u}(\\lambda_{b})+\\|u\\|^{2}\\,d_{\\mathcal{G}}(\\lambda(\\theta),\\lambda_{b})^{2}\\,(1+\\mathcal{K})\\).
Therefore, since by definition of \\(\\lambda(\\theta)\\), \\(d_{\\mathcal{G}}(\\lambda(\\theta),\\lambda_{a})=\\theta d\\) and \\(d_{\\mathcal{G}}(\\lambda(\\theta),\\lambda_{b})=(1-\\theta)d\\), we find \\(S_{u}(\\lambda_{a})-S_{u}(\\lambda_{b})\\leq\\|u\\|^{2}\\,\\big{(}\\theta^{2}+(\\theta-1)^{2}\\big{)}\\,d^{2}\\,(1+\\mathcal{K})\\) for all \\(\\theta\\in[0,1]\\). Taking the minimum over all \\(\\theta\\), reached at \\(\\theta=\\frac{1}{2}\\) where \\(\\theta^{2}+(\\theta-1)^{2}=\\frac{1}{2}\\), we finally obtain \\(S_{u}(\\lambda_{a})-S_{u}(\\lambda_{b})\\ \\leq\\ \\frac{1}{2}\\,\\|u\\|^{2}\\,d_{\\mathcal{G}}(\\lambda_{a},\\lambda_{b})^{2}\\,(1+\\mathcal{K})\\).
In other words, the critical nature of both \\(\\lambda_{a}\\) and \\(\\lambda_{b}\\) halves the bound on the decrease of \\(S_{u}\\) between them, compared to the situation where only one of these points is critical.
**Lemma 2**.: _Given a function \\(u\\in L^{2}(X)\\), assume that \\(S_{u}(\\lambda)\\) has a global maximum at \\(\\lambda_{M}\\), i.e. \\(\\partial_{i}S_{u}(\\lambda_{M})=0\\) for all \\(i\\), and write \\(\\mathcal{T}_{k}=\\{\\phi_{r}(k):r\\in\\mathbb{N}\\}\\) for the trajectory of the gradient ascent described in (11) starting from a point \\(k\\in\\Lambda_{\\mathrm{d}}\\). There exists a \\(\\lambda^{\\prime}\\in\\mathcal{T}_{k}\\), reachable in a finite number of optimization steps, such that_
\\[S_{u}(\\lambda_{M})-S_{u}(\\lambda^{\\prime})\\ \\leq\\ \\frac{1}{2}\\,\\|u\\|^{2}\\ d_{ \\mathcal{G}}(\\lambda_{M},k)^{2}\\,(1+\\mathcal{K}). \\tag{14}\\]
For the sake of clarity, the proof of this technical lemma is placed in Appendix C. The main idea is to find a point in the trajectory \\(\\mathcal{T}_{k}\\) that is closer to \\(\\lambda_{M}\\) than \\(k\\), and that is also critical in the direction of \\(\\lambda_{M}\\), so that Lemma 1 can be applied. Let us now proceed to the proof of Theorem 2.
Proof of Theorem 2.: In our \\(\\mathrm{gMP}(\\alpha)\\) decomposition of a function \\(f\\in\\mathcal{S}\\subset L^{2}(X)\\) defined before, given the iteration \\(m+1\\) where \\(u=R^{m}f\\) is analyzed, denote by \\(\\tilde{\\lambda}\\) the parameter of the atom in \\(\\mathcal{D}\\) maximizing \\(S_{u}\\), i.e. \\(S_{u}(\\tilde{\\lambda})=\\sup_{\\lambda\\in\\Lambda}S_{u}(\\lambda)\\).
If \\(\\tilde{k}\\) is the closest element of \\(\\Lambda_{\\mathrm{d}}\\) to \\(\\tilde{\\lambda}\\), from the covering property of \\(\\Lambda_{\\mathrm{d}}\\) we have \\(d_{\\mathcal{G}}(\\tilde{\\lambda},\\tilde{k})\\leq\\rho_{\\mathrm{d}}\\), and Lemma 2 tells us that there exists a finite number of optimization steps \\(\\kappa_{m}\\) such that \\(S_{u}(\\phi_{\\kappa_{m}}(\\tilde{k}))\\geq S_{u}(\\tilde{\\lambda})-\\frac{1}{2}\\,\\rho_{\\mathrm{d}}^{2}\\,\\|u\\|^{2}\\,(1+\\mathcal{K})\\geq\\beta^{2}\\,\\big{(}1-\\frac{1}{2}\\beta^{-2}\\,\\rho_{\\mathrm{d}}^{2}\\,(1+\\mathcal{K})\\big{)}\\,\\|u\\|^{2}\\), where the last term is positive from the density requirement \\(\\rho_{\\mathrm{d}}<\\beta/\\sqrt{1+\\mathcal{K}}\\).
Therefore, from the selection rule (12c), \\(S_{u}(\\nu_{m+1})\\geq\\alpha^{2}\\,S_{u}(\\phi_{\\kappa_{m}}(\\tilde{k}))\\). We have thus \\(\\|R^{m+1}f\\|^{2}=\\|u\\|^{2}-S_{u}(\\nu_{m+1})\\leq\\|u\\|^{2}-\\alpha^{2}\\,S_{u}(\\phi_{\\kappa_{m}}(\\tilde{k}))\\leq\\|u\\|^{2}\\,(1-\\alpha^{\\prime\\prime 2}\\beta^{2})\\), with \\(\\alpha^{\\prime\\prime}\\triangleq\\alpha\\big{(}1-\\frac{1}{2}\\beta^{-2}\\,\\rho_{\\mathrm{d}}^{2}\\,(1+\\mathcal{K})\\big{)}^{1/2}\\). So, \\(\\|R^{m+1}f\\|\\leq(1-\\alpha^{\\prime\\prime 2}\\beta^{2})^{(m+1)/2}\\|f\\|\\), which is also the exponential convergence rate of the \\(\\mathrm{cMP}(\\alpha^{\\prime\\prime})\\) in \\(\\mathcal{D}\\) when \\(\\beta\\) exists.
In Theorem 2, even if the sequence of optimization steps \\(\\kappa_{m}\\) is proved to exist, it is actually unknown. One practical way to overcome this problem is to observe how the ratio \\(\\frac{|\\nabla S_{u}|}{S_{u}}\\) decreases at each optimization step, and to stop the procedure once this value falls below a predefined threshold. This follows from the idea that the closer \\(S_{u}(\\phi_{r}(k))\\) is to a local maximum, the smaller the optimization step must be. As is often the case in optimization problems, an upper bound on the number of optimization steps can be fixed jointly with this threshold test.
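A possible encoding of this heuristic stopping rule follows; the threshold value is an assumption of the sketch, not a prescription.

```python
def should_stop(grad_norm, s_val, ratio_tol=1e-3, step=0, max_steps=10):
    """Stop the per-atom ascent once |grad S_u| / S_u drops below a
    predefined threshold, or when the step budget is exhausted."""
    if step >= max_steps:
        return True
    return s_val > 0 and grad_norm / s_val < ratio_tol
```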
## 6 Experiments
In this section, dMP and gMP decompositions of 1-D and 2-D signals are studied experimentally in different situations. These involve different classes of signals and different discretizations of the parametrization, with various densities.
Prior to these experiments, some remarks have to be made about the dMP and gMP implementations. First, for both algorithms, as described in Equations (3) and (12), a _full-search_ has to be performed in \\(\\mathcal{D}_{\\mathrm{d}}=\\mathrm{dict}(\\Lambda_{\\mathrm{d}})\\) to compute all the squared scalar products \\(S_{u}\\) of the current residue \\(u=R^{m}f\\) with the atoms \\(g_{k}\\). We thus reduce the computational complexity of this full-search with the help of the Fast Fourier Transform (FFT). One component (for 1-D signals) or two components (for 2-D signals) of the parametrization indeed correspond to a regular grid of atom positions, which makes \\(S_{u}\\) a discrete correlation with respect to these parameters. Moreover, as described in detail in [41, 42], we apply the fast _boundary renormalization_ of atoms, where atoms of \\(\\mathcal{D}\\) truncated by the limit of the signal remain valid atoms, i.e. of unit norm, and features that suddenly terminate at the signal boundary are correctly caught in the procedure. Notice that all our dMP and gMP experiments are performed in the non-weak case, i.e. \\(\\alpha=1\\).
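For instance, in 1-D, the inner products of the residue with all translates of a fixed-scale atom reduce to a circular cross-correlation, computable in \\(O(N\\log N)\\) via the FFT. The sketch below assumes circular boundary conditions for simplicity, whereas our implementation uses the boundary renormalization of [41, 42].

```python
import numpy as np

def translate_correlations(u, g):
    """Inner products <T_b g, u> for all circular translations b, via FFT.

    u, g : real length-N arrays (g a sampled, unit-norm atom at a fixed scale).
    Returns c with c[b] = sum_t g[t - b] * u[t], indices taken modulo N.
    """
    return np.real(np.fft.ifft(np.fft.fft(u) * np.conj(np.fft.fft(g))))
```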
Second, for the gradient-ascent optimization, we make some simplifications to the initial formulation: only the best discrete atom is optimized at each MP iteration, and \\(\\kappa_{m}=\\kappa>0\\) for all \\(m\\), with \\(\\kappa\\) typically equal to 5 or 10. Even if these two restrictions are not optimal compared to the method described in the theoretical results, the gain in reconstruction quality provided by the optimization is already impressive. We also set all the step sizes to \\(t_{r}=\\chi>0\\), with \\(\\chi=0.1\\) in all our experiments. Then, at each optimization step \\(r\\), we adaptively decrease the step parameter \\(t_{r}\\) by dividing it by 2 if the ascent condition is not met, i.e. if \\(S_{u}(\\phi_{r+1}(k))<S_{u}(\\phi_{r}(k))\\). If after 10 divisions the ascent condition still does not hold, the optimization process is simply stopped.
Finally, let us mention that our algorithms are written in MATLAB(c) and are consequently not truly optimized. The different computation times that we provide throughout this section only allow us to compare the various schemes, e.g. the dMP and gMP decompositions of the same signal. All of our experiments were realized on a Pentium 1.73 GHz with 1 GB of memory.
### One Dimensional Analysis
This section analyzes the benefit obtained from gMP, and from an increased density of the discrete dictionary, when decomposing some specific classes of randomly generated 1-D signals. In our experiments, each 1-D signal is of unit norm and has \\(N=2^{13}\\) samples. Each signal consists of the sum of 100 random bursts, each burst being a rectangular or Gaussian window, depending on the class of the signal. The position and magnitude of each burst are selected at random, according to a uniform distribution. The duration of the rectangular window and the standard deviation of the Gaussian function are selected uniformly within the range \\([\\frac{1}{2}L,\\frac{3}{4}L]\\), for \\(L=2^{8}\\). The mother function of the dictionary is the _Mexican Hat_ function \\(g(t)\\propto(1-t^{2})\\,e^{-t^{2}/2}\\). Its scale and translation parameters are sampled as defined in Section 4.2, following the \\(\\tau\\)-adic discretization \\(\\Lambda_{\\mathrm{d}}=\\{(nb_{0}\\tau^{j},a_{0}\\tau^{j}):j,n\\in\\mathbb{Z}\\}\\), with \\(a_{0}=1\\). We work in the non-weak case, i.e. \\(\\alpha=1\\), for both dMP and gMP, and we set \\(\\kappa=10\\) for gMP.
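For reference, a possible construction of this \\(\\tau\\)-adic grid and of the (unnormalized) mother function, truncated to an \\(N\\)-sample signal, is sketched below; the truncation bounds are assumptions of the sketch.

```python
import numpy as np

def mexican_hat(t):
    """Mother function g(t) ~ (1 - t^2) exp(-t^2 / 2), up to normalization."""
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def tau_adic_grid(N, b0=1.0, a0=1.0, log2_tau=0.25):
    """tau-adic sampling {(n b0 tau^j, a0 tau^j)} truncated to an N-sample
    signal: positions in [0, N), scales in [a0, N]; assumes tau > 1."""
    tau = 2.0 ** log2_tau
    grid, j = [], 0
    while a0 * tau ** j <= N:
        a, step = a0 * tau ** j, b0 * tau ** j
        grid += [(n * step, a) for n in range(int(np.ceil(N / step)))]
        j += 1
    return grid
```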
Figures 1(a) and 1(b) analyze how the energy \\(\\|R^{m}f\\|^{2}\\) of the residual decreases with the number \\(m\\) of MP iterations for the random Gaussian and rectangular signals, respectively. Notice that only a small number of iterations (twelve) are studied, since we aim at analyzing the behaviour of dMP and gMP on one class of signals; the current residual \\(R^{m}f\\) only approximately belongs to the considered class, and only for small \\(m\\), when few atoms have been subtracted from \\(f=R^{0}f\\). Results presented are averaged over 20 trials. In each graph, two
distinct discretizations of the Mexican Hat parameters are considered to provide two discrete dictionaries, with one (\\(b_{0}=1,\\log_{2}\\tau=0.25\\)) being two times denser than the other (\\(b_{0}=2,\\log_{2}\\tau=0.5\\)), according to the behavior11 of the density radius \\(\\rho_{\\rm d}\\) analyzed in Section 4.2. Both discrete and geometrically optimized MP are studied for each dictionary. We observe that gMP significantly outperforms dMP, and that an increased density of the dictionary also speeds up the MP convergence. By comparing Figure 1(a) and 1(b), we also observe that the residual energy decreases much faster for Gaussian signals than for rectangular ones, which unsurprisingly reveals that the Mexican Hat dictionary is better suited to represent Gaussian structures.
Footnote 11: Obviously equivalent for \\(\\log_{2}\\tau\\) or \\(\\ln\\tau\\) variations.
Figures 1(c)-1(f) further analyze the impact of the discretization of the dictionary parameters on MP convergence. In these figures, we introduce the notion of _normalized atom energy_ (NAE) to measure the convergence rate of a particular dictionary dealing with a specific class of signals at a specific MP iteration step. Formally, the NAE denotes the expected value of the best squared atom coefficient computed on a normalized signal when this one is randomly generated within a specific class of signals. Mathematically, \\(\\text{NAE}=\\mathbb{E}\\big{[}\\langle g_{\\lambda_{*}},\\frac{u}{\\|u\\|}\\rangle^{2}\\big{]}\\), where \\(u\\) is a sample signal of the class and \\(g_{\\lambda_{*}}\\) is the associated best atom for a fixed greedy algorithm (dMP or gMP). We show the dependence of NAE on the discretization for the \\(1^{st}\\) and \\(30^{th}\\) iteration12 for both rectangular and Gaussian signals. Results are averaged over 500 trials.
Footnote 12: Note that the NAE at the \\(30^{th}\\) iteration refers to the NAE computed on the residual signals obtained after 29 iterations of the gMP with the densest dictionary, independently of the actual discrete dictionary considered at iteration 30. Hence, the reference class of signals to compute the NAE at iteration 30 is the same for all investigated dictionaries, i.e. for all \\(\\log_{2}\\tau\\) values.
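In practice, the NAE is estimated by a Monte-Carlo average, as in the following sketch, where `signal_sampler` and `best_sqcorr` are hypothetical callables wrapping the signal class and the greedy atom selection (dMP or gMP).

```python
import numpy as np

def estimate_nae(signal_sampler, best_sqcorr, n_trials=500):
    """Monte-Carlo estimate of NAE = E[ <g_{lam*}, u / ||u||>^2 ]."""
    vals = [best_sqcorr(u / np.linalg.norm(u))
            for u in (signal_sampler() for _ in range(n_trials))]
    return float(np.mean(vals))
```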
By considering the dMP and gMP curves in Figures 1(c)-1(f), we first observe that the NAE is significantly higher for gMP than for dMP, which confirms the advantage of using gradient ascent optimization to refine the parameters of the atoms extracted by dMP. Note that the NAE for a Gaussian random signal (Fig. 1(c)-1(d)) is nearly one order of magnitude higher than for a rectangular one (Fig.1(e)-1(f)). This confirms that the Mexican Hat dictionary better matches the Gaussian structures than the rectangular ones. In all cases, the NAE sharply decreases with the iteration index, which is not a surprise as the coherence between the signal and the dictionary decreases as MP expansion progresses.
To better understand the penalty induced by the discretization of the continuous dictionary, we now analyze how the rate of convergence for a particular class of signals behaves compared to the reference provided by a signal composed of a single Mexican Hat function. For that purpose, an additional curve, denoted \\(\\text{dMP}_{a}\\), has been plotted in each graph. This curve is expected to provide an upper bound to the penalty induced by a sparser dictionary. Specifically, \\(\\text{dMP}_{a}\\) plots the energy captured during the \\(1^{st}\\) step of the dMP expansion of a random (scale and position) Mexican Hat function, as a function of the discretization parameter \\(\\log_{2}\\tau\\). As the Mexican Hat is the generative function of the dictionary, the \\(1^{st}\\) step of the MP expansion would capture the entire function energy if the entire continuous dictionary were used, but is particularly penalized by a discretization of the dictionary. In each graph of Figures 1(c)-1(f), to compare \\(\\text{dMP}_{a}\\) with dMP, the \\(\\text{dMP}_{a}\\) curve obtained with pure atoms (i.e. unit coefficients) is scaled to correspond to atoms whose energy is set to the NAE expected from the expansion of the corresponding class of signals with a continuous dictionary. In practice, the NAE expected with a continuous dictionary is estimated based on the NAE computed with gMP and the densest dictionary (\\(\\log_{2}\\tau=0.25\\)).
Figure 1: (a)-(b) Residual energy as a function of the MP iteration. \\(\\mathrm{dMP}\\left(b_{0},\\log_{2}\\tau\\right)\\) and \\(\\mathrm{gMP}\\left(b_{0},\\log_{2}\\tau\\right)\\) refer to discrete and optimized MP, computed on a discretization \\(\\Lambda_{\\mathrm{d}}=\\{(nb_{0}\\tau^{j},a_{0}\\tau^{j}):j,n\\in\\mathbb{Z}\\}\\) of the continuous Mexican Hat dictionary. (c)-(f) Normalized atom energy (NAE) as a function of the \\(\\log_{2}\\tau\\) discretization parameter. \\(b_{0}\\) is set to one in all cases. dMP and gMP respectively refer to discrete and optimized MP. \\(\\mathrm{dMP}_{a}\\) provides a lower bound to the decrease of NAE with \\(\\log_{2}\\tau\\), and is formally described in the text.
The approximation is reasonable, as we observe that gMP saturates for small \\(\\log_{2}\\tau\\) values, i.e. for large densities. We first observe that the dMP and the dMP\\({}_{a}\\) curves nearly coincide in Figure 1(c). Hence, the MP expansion of a Gaussian signal is penalized as much as that of a Mexican Hat function by a reduction of the discrete dictionary density. We then observe that the penalty induced by a reduction of density decreases as the coherence between signal and dictionary structures drops. This is for example the case when the signal to represent is intrinsically sharper than the dictionary structures (Fig. 1(e)-1(f)), or because the coherent structures have been extracted during the initial MP steps (Fig. 1(d)). This last observation is of practical importance because it reveals that using a coarsely discretized dictionary incurs a greater penalty during the first few iterations of the MP expansion than during the subsequent ones. For compression applications, it might thus be advantageous to progressively decrease the density of the dictionary along the expansion process, the cost associated with the definition of the atom indices decreasing with the density of the dictionary13. Hence, it might be more efficient - in a rate-distortion sense - to use a dense but expensive dictionary during the first MP iterations, so as to avoid penalizing the MP convergence rate, but a sparser and cheaper one during subsequent steps, so as to save bits. We plan to investigate this question in detail in a future publication.
Footnote 13: Fewer distinct atom indices need to be described by the codewords.
### Two Dimensional Analysis
This section analyzes experimentally the effect of discretizing a dictionary on the Matching Pursuit decomposition of images, i.e. with the Hilbert space \\(L^{2}(\\mathbb{R}^{2})\\).
**Parametrization and Dictionary** We use the same dictionary as in [41]. Its mother function \\(g\\) is defined by a separable product of two 1-D behaviors: a Mexican Hat wavelet in the \\(x\\)-direction, and a Gaussian in the \\(y\\)-direction, i.e. \\(g(\\mathbf{x})=(\\frac{4}{3\\pi})^{1/2}\\,(1-x^{2})\\,\\exp(-\\frac{1}{2}\\,|\\mathbf{x}|^{2})\\), where \\(\\mathbf{x}=(x,y)\\in\\mathbb{R}^{2}\\) and \\(\\|g\\|=1\\) [30]. Notice that \\(g\\) is infinitely differentiable.
The dictionary is defined by the translations, rotations, and anisotropic dilations of \\(g\\). Mathematically, these transformations are represented by operators \\(T_{\\mathbf{b}}\\), \\(R_{\\theta}\\), and \\(D_{\\mathbf{a}}\\), respectively. These are given by \\([T_{\\mathbf{b}}\\,g]\\big{(}\\mathbf{x}\\big{)}=g\\big{(}\\mathbf{x}-\\mathbf{b} \\big{)}\\), \\([R_{\\theta}\\,g]\\big{(}\\mathbf{x}\\big{)}=g\\big{(}r_{\\theta}^{-1}\\,\\mathbf{x} \\big{)}\\), and \\([D_{\\mathbf{a}}\\,g]\\big{(}\\mathbf{x}\\big{)}=(a_{1}a_{2})^{-1/2}\\,g\\big{(}d_{ \\mathbf{a}}^{-1}\\mathbf{x}\\big{)}\\), for \\(\\theta\\in S^{1}\\simeq[0,2\\pi)\\), \\(\\mathbf{b}\\in\\mathbb{R}^{2}\\), \\(\\mathbf{a}=(a_{1},a_{2})\\), \\(a_{1},a_{2}\\in\\mathbb{R}_{+}^{*}\\), while \\(r_{\\theta}\\) is the usual \\(2\\times 2\\) rotation matrix \\(r_{\\theta}\\) and \\(d_{\\mathbf{a}}=\\text{diag}(a_{1},a_{2})\\).
In other words, we have a parametrization of \\(P=5\\) dimensions and \\(\\Lambda=\\{\\lambda=(\\lambda^{0},\\ldots,\\lambda^{4})=(b_{1},b_{2},\\theta,a_{1},a _{2})\\in\\mathbb{R}^{2}\\times S^{1}\\times(\\mathbb{R}_{+}^{*})^{2}\\}\\). At the end, each atom of the dictionary \\(\\mathcal{D}=\\{g_{\\lambda}:\\lambda\\in\\Lambda\\}\\) is generated by \\(g_{\\lambda}(\\mathbf{x})=[U(\\lambda)\\,g](\\mathbf{x})\\triangleq[T_{\\mathbf{b}} \\,R_{\\theta}\\,D_{\\mathbf{a}}\\,g](\\mathbf{x})\\), with \\(\\|g_{\\lambda}\\|=\\|g\\|=1\\).
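A possible discretized synthesis of such an atom on a pixel grid is sketched below; note that, instead of the analytic prefactor \\((a_{1}a_{2})^{-1/2}\\), the sketch renormalizes the sampled atom to unit norm, in the spirit of the boundary renormalization mentioned in Section 6.

```python
import numpy as np

def atom_2d(Nx, Ny, b, theta, a1, a2):
    """Sampled atom g_lam = T_b R_theta D_a g on an (Nx, Ny) pixel grid,
    with g(x, y) ~ (1 - x^2) exp(-|x|^2 / 2)."""
    X, Y = np.meshgrid(np.arange(Nx), np.arange(Ny), indexing="ij")
    xs, ys = X - b[0], Y - b[1]                 # invert the translation T_b
    c, s = np.cos(theta), np.sin(theta)
    xr = (c * xs + s * ys) / a1                 # apply r_theta^{-1} ...
    yr = (-s * xs + c * ys) / a2                # ... then d_a^{-1}
    g = (1.0 - xr ** 2) * np.exp(-(xr ** 2 + yr ** 2) / 2.0)
    return g / np.linalg.norm(g)                # discrete unit-norm rescaling
```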
Obviously, the dictionary \\(\\mathcal{D}\\) is complete in \\(L^{2}(\\mathbb{R}^{2})\\). Indeed, translations, rotations and isotropic dilations alone are already enough to constitute a wavelet basis of \\(L^{2}(X)\\), since \\(g\\) is an admissible wavelet [35, 43]. Finally, as required in the previous section, from the smoothness of \\(g\\) and of the transformations \\(U\\) above, the atoms \\(g_{\\lambda}\\) of our dictionary \\(\\mathcal{D}\\) are twice differentiable in each component \\(\\lambda^{i}\\).
**Spatial Sampling** For all our experiments, images are discretized on a Cartesian regular grid of pixels, i.e. an image \\(f\\) takes its values on the grid \\(\\mathcal{X}=\\big{(}[0,N_{x})\\times[0,N_{y})\\big{)}\\cap\\mathbb{Z}^{2}\\), with \\(N_{x}\\) and \\(N_{y}\\) the "\\(x\\)" and "\\(y\\)" sizes of the grid. We work in the _continuous approximation_, that is, we assume that the grid \\(\\mathcal{X}\\) is fine enough to guarantee that the scalar products \\(\\langle\\cdot,\\cdot\\rangle\\) and norms \\(\\|\\cdot\\|\\) are well estimated from their discrete counterparts. This holds of course for band-limited functions of \\(L^{2}(\\mathbb{R}^{2})\\).
In consequence, in order to respect this continuous approximation and to have dictionary atoms smaller than the image size, the mother function \\(g\\) of our dictionary \\(\\mathcal{D}\\) must be dilated within a particular range of scales so that \\(g_{\\lambda}\\) is essentially band-limited, i.e. \\(a_{1},a_{2}\\in[a_{\\mathrm{m}},a_{\\mathrm{M}}]\\). According to the definition of \\(g\\) above, we set experimentally \\(a_{\\mathrm{m}}=0.7\\) and \\(a_{\\mathrm{M}}=\\min(N_{x},N_{y})\\).
**Discrete Parameter Space** We sample \\(\\Lambda\\) regularly so as to have \\(N_{\\mathrm{pix}}=N_{x}N_{y}\\) positions \\(\\mathbf{b}\\), \\(J^{2}\\) scales \\(a_{1}\\) and \\(a_{2}\\) selected logarithmically in the range \\([a_{\\mathrm{m}},a_{\\mathrm{M}}]\\), and \\(K\\) orientations evenly spaced in \\([0,\\pi)\\), with \\(J,K\\in\\mathbb{N}\\). We obtain the discretized parameter set \\(\\Lambda_{\\mathrm{d}}=\\Lambda_{\\mathrm{d}}(N_{\\mathrm{pix}},J,K)=\\left\\{\\,(\\mathbf{b},\\theta_{n},a_{1j},a_{2j^{\\prime}}),\\ \\mathbf{b}\\in\\mathcal{X},\\ n\\in[0,K-1],\\ j,j^{\\prime}\\in[0,J-1]\\,\\right\\}\\), and the corresponding dictionary \\(\\mathcal{D}_{\\mathrm{d}}(N_{\\mathrm{pix}},J,K)=\\mathrm{dict}(\\Lambda_{\\mathrm{d}}(N_{\\mathrm{pix}},J,K))\\). The number of atoms in the dictionary is simply \\(|\\mathcal{D}_{\\mathrm{d}}|=J^{2}K\\,N_{\\mathrm{pix}}\\).
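A possible enumeration of \\(\\Lambda_{\\mathrm{d}}(N_{\\mathrm{pix}},J,K)\\), with the atom count checked explicitly, is sketched below (a memory-hungry sketch; a real implementation would iterate lazily).

```python
import numpy as np

def lambda_d(Nx, Ny, J, K, a_min=0.7):
    """Enumerate Lambda_d(N_pix, J, K): all pixel positions, J log-spaced
    scales per axis in [a_min, min(Nx, Ny)], K orientations in [0, pi)."""
    scales = np.logspace(np.log10(a_min), np.log10(min(Nx, Ny)), J)
    thetas = np.arange(K) * np.pi / K
    params = [((bx, by), th, a1, a2)
              for bx in range(Nx) for by in range(Ny)
              for th in thetas for a1 in scales for a2 in scales]
    assert len(params) == J * J * K * Nx * Ny    # |D_d| = J^2 K N_pix
    return params
```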
\\begin{table}
\\begin{tabular}{|l|c|c|} \\hline & \\(J=3\\) & \\(J=5\\) \\\\ \\hline \\(K=4\\), dMP & 24.30 dB (834s) & 25.88 dB (2327s) \\\\ \\(K=4\\), gMP (\\(\\kappa=5\\)) & 26.08 dB (889s) & 27.09 dB (2381s) \\\\ \\(K=4\\), gMP (\\(\\kappa=10\\)) & 26.68 dB (950s) & 27.37 dB (2447s) \\\\ \\hline \\(K=8\\), dMP & 25.21 dB (1660s) & 26.63 dB (4634s) \\\\ \\(K=8\\), gMP (\\(\\kappa=5\\)) & 27.05 dB (1715s) & 27.92 dB (4703s) \\\\ \\(K=8\\), gMP (\\(\\kappa=10\\)) & 27.44 dB (1772s) & 28.09 dB (5131s) \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: dMP and gMP applied to the Barbara image. Quality (in PSNR) of the reconstruction after 300 iterations for various \\(J\\), \\(K\\) and \\(\\kappa\\); computation times are given in parentheses.
**Results** We start our experiments by decomposing the venerable Barbara image. 300 atoms were selected by dMP and gMP for various \\(J\\) and \\(K\\). Results are presented in Table 1. In these tests, the best quality obtained for dMP corresponds, obviously, to the finest grid, i.e. \\(J=5\\) and \\(K=8\\) (\\(26.63\\,\\)dB, Fig. 2(a)), with a computational time (CT) of 4634s. With 10 optimization steps (\\(\\kappa=10\\)), the gMP on the coarsest parametrization (\\(J=3\\) and \\(K=4\\)) is equivalent to the best dMP result, with a PSNR of \\(26.68\\,\\)dB and a CT of only 950s, i.e. almost five times faster. This is also far better than the dMP on the same grid (\\(24.30\\,\\)dB). The visual inspection of the dMP image (\\(J=5\\), \\(K=8\\), Fig. 2(a)) and the gMP image (\\(J=3\\), \\(K=4\\), \\(\\kappa=10\\), Fig. 2(b)) is also instructive. Most of the features of the gMP result are well represented (e.g. Barbara's mouth, eyes, nose, hair, ...). However, the regular pattern of the chair in the background of the picture, which needs a lot of similar atoms, is poorly drawn. This can be explained by the fact that this highly directional structure has to be represented by many similarly oriented and scaled atoms with similar amplitudes. The fine grid of dMP therefore has more chance to correctly fit these atoms, while the gMP on its coarse grid is deflected in its optimization process toward more prominent structures with higher amplitudes. Notice finally that the best optimized result (PSNR \\(28.09\\,\\)dB) is obtained for \\(\\kappa=10\\) on the grid associated with \\(J=5\\) and \\(K=8\\) orientations.
For our second experiment, we compare the dMP and gMP (\\(\\kappa=10\\)) 300-atom approximations of well-known 128\\(\\times\\)128 pixel pictures, namely Lena, Baboon, Cameraman, GoldHill, and Peppers, on the same parametrization grid (\\(J=4\\), \\(K=8\\)); see Table 2. For a computational time only slightly higher (5%) than the dMP decomposition, gMP reaches in all cases a significantly higher PSNR than dMP, with a dB gain within the range \\([0.87,2.03]\\).
## 7 Related Works
A similar approach to our geometric analysis of the MP atom selection rule has been proposed in [24]. In that paper, a dictionary of (\\(L^{2}\\)-normalized) wavelets is seen as a manifold associated with a Riemannian metric. However, the authors restrict their work to wavelet parametrizations inherited from Lie groups (such as the affine group). They also work only with the \\(L^{2}\\) (dictionary) distance between dictionary atoms and do not introduce the intrinsic geodesic distance. They define a discretization of the parametrization \\(\\Lambda\\) such that, in our notations, \\(\\mathcal{G}_{ij}\\Delta\\lambda^{i}\\Delta\\lambda^{j}<\\epsilon\\), with \\(\\Delta\\lambda(k)\\) the local width of the cell localized on \\(k\\in\\Lambda_{\\mathrm{d}}\\). There is however no analysis of the effect of this discretization on the MP rate of convergence.
\\begin{table}
\\begin{tabular}{|r|c|c|} \\hline Image name & dMP & gMP (\\(\\kappa=10\\)) \\\\ \\hline Barbara & \\(25.94\\,\\)dB (2707s) & \\(27.86\\,\\)dB (2820s) \\\\ Lena & \\(26.50\\,\\)dB (2709s) & \\(28.53\\,\\)dB (2857s) \\\\ Baboon & \\(24.06\\,\\)dB (2770s) & \\(24.93\\,\\)dB (2900s) \\\\ Cameraman & \\(25.80\\,\\)dB (2807s) & \\(27.62\\,\\)dB (2918s) \\\\ GoldHill & \\(26.54\\,\\)dB (2810s) & \\(28.12\\,\\)dB (2961s) \\\\ Peppers & \\(24.51\\,\\)dB (2853s) & \\(26.69\\,\\)dB (3013s) \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: Comparison of dMP and gMP on different usual images of size 128\\(\\times\\)128. Computations have been performed with \\(J=4\\), \\(K=8\\), and 300 atoms. Computation times are given indicatively in parentheses.
In [14], the author uses a 4-dimensional Gaussian chirp dictionary to analyze 1-D signals with the MP algorithm. He develops a fast procedure to find the best atom of this dictionary in the representation of the current MP residual by applying a two-step search. First, by setting the chirp rate parameter to zero, the best common Gabor atom is found with a full-search procedure taking advantage of the FFT algorithm. Next, a ridge theorem proves that, starting from this Gabor atom, the best Gaussian chirp atom can be approximated with a controlled error. The whole method is similar to the development of our optimized matching pursuit, since we also start from a discrete parametrization to find a better atom in the continuous one. However, our approach is more general since we are not restricted to a specific dictionary. We use the intrinsic geometry of any smooth dictionary manifold to perform an optimization driven by a geometric gradient ascent.
## 8 Conclusions
In this paper, we have adopted a geometrical framework to study the effect of dictionary discretization on the rate of convergence associated with MP. In a first step, we have derived an upper bound for this rate using geometrical quantities inherited from the dictionary seen as a manifold, such as the geodesic distance, the condition number of the dictionary, and the covering property of the discrete set of atoms in the continuous dictionary. We have also shown in a second step how a simple optimization of the parameters selected by the discrete dictionary can lead, theoretically and experimentally, to important gains in the approximation of (general) signals.
In a future study, it could be interesting to see how our methods extend to other greedy algorithms, like the Orthogonal Matching Pursuit (OMP) [44]. However, this extension has to be performed carefully, since we need to characterize the convergence of continuous OMP, as is done here for MP through the existence of a greedy factor.
Our work paves the way for future extensions and advances in two practical fields. As explained in our 1-D experiments, a first idea could be to analyze carefully the benefit - in a rate-distortion sense - of using a dense but expensive dictionary during the first MP iterations, so as to avoid penalizing the MP convergence rate, but a sparser and cheaper dictionary during subsequent steps, so as to save bits. We plan to investigate this question in detail in a future publication.
Another idea is to analyze the behavior of gMP in the Compressive Sensing (CS) formalism, that is, after random projection of the signal and atoms. Matching Pursuit is already currently used as a retrieval algorithm in CS of sparse signals [45, 46, 47]. Moreover, recent results [48] suggest that for manifolds of bounded condition number, the geometrical structure (metric, distances) is essentially preserved after random projection of their points into a smaller space than the ambient one. If a natural definition of random projection in our continuous formalism can be formulated, a natural question is thus to check whether the gradient ascent technique survives after random projection of the residual and the atoms onto the same subspace. This could lead to dramatic computation time reductions, up to controlled errors that could even be attenuated by the greedy iterative procedure.
## Acknowledgements
LJ wishes to thank Prof. Richard Baraniuk and his team at Rice University (Houston, TX, USA) for the helpful discussions about general \"manifolds processing\" and Compressive Sensing. LJ is also very grateful to R. Baraniuk for having accepted and funded him during a short postdoctoral stay at Rice University. LJ and CDV are funded by the Belgian FRS-FNRS. We would like to thank Dr. David Kenric Hammond (LTS2/EPFL, Switzerland) for his careful proofreading and the referees for valuable comments on this paper.
## Appendix A Complements on the Geometry of \\((\\Lambda,\\mathcal{G}_{ij})\\)
In this short appendix, we provide some additional information on the geometrical concepts developed in Section 2. First, as explained in that section, the parameter space \\(\\Lambda\\) of the dictionary \\(\\mathcal{D}=\\mathrm{dict}(\\Lambda)\\) is linked to a Riemannian manifold \\(\\mathcal{M}=(\\Lambda,\\mathcal{G}_{ij})\\) with a structure inherited from the dictionary \\(\\mathcal{D}\\subset L^{2}(X)\\). From the geodesic definition (1) and the metric relation (2), we see that the curve \\(\\gamma_{\\lambda_{a}\\lambda_{b}}(t)\\in\\Lambda\\) is also a geodesic in \\(\\mathcal{M}\\). In other words, it is defined only from the metric \\(\\mathcal{G}_{ij}\\) and no longer from the full behavior of atoms of \\(\\mathcal{D}\\subset L^{2}(X)\\). In [31], we also explain that \\(\\mathcal{M}\\) is in fact an _immersed manifold_ [28] in the Hilbert manifold \\(\\mathcal{D}\\subset L^{2}(X)\\), and \\(\\mathcal{G}_{ij}\\) is the associated _pullback_ metric. All the geometric quantities of the Riemannian analysis of \\(\\mathcal{M}\\), such as Christoffel symbols, covariant derivatives, curvature tensors, etc., can be defined. This is actually done in the following appendices of this paper.
Second, some important designations can be introduced. The metric \\(\\mathcal{G}_{ij}(\\lambda)\\) is a (_covariant_) _tensor_ of rank-2, i.e. described by two subscript indices, on \\(\\mathcal{M}\\). This means that \\(\\mathcal{G}_{ij}\\) satisfies a specific transformation under changes of coordinates in \\(T_{\\lambda}\\Lambda\\) such that the values of the bilinear form14\\(\\mathcal{G}_{\\lambda}(\\xi,\\zeta)\\triangleq\\xi^{i}\\,\\zeta^{j}\\,\\mathcal{G}_{ij}(\\lambda)\\) that it induces are unmodified15. A function \\(f:\\Lambda\\to\\mathbb{R}\\) is a _scalar field_ on \\(\\mathcal{M}\\), or rank-0 tensor. A vector field \\(\\zeta^{i}(\\lambda)\\) on this manifold, which associates to each point \\(\\lambda\\) a vector in the tangent plane \\(T_{\\lambda}\\Lambda\\), is a function \\(\\zeta:\\Lambda\\to T_{\\lambda}\\Lambda\\simeq\\mathbb{R}^{P}\\) also named (_contravariant_) rank-1 tensor, i.e. with one superscript. More generally, a rank-\\((m,n)\\) tensor is a quantity \\(T^{i_{1}\\,\\cdots\\,i_{m}}_{j_{1}\\cdots j_{n}}(\\lambda)\\)\\(m\\)-times contravariant and \\(n\\)-times covariant such that \\(\\mathcal{G}_{i_{1}k_{1}}\\cdots\\,\\mathcal{G}_{i_{m}k_{m}}\\)\\(\\xi^{k_{1}}_{1}\\cdots\\,\\xi^{k_{m}}_{m}\\)\\(T^{i_{1}\\cdots i_{m}}_{j_{1}\\cdots j_{n}}(\\lambda)\\)\\(\\zeta^{j_{1}}_{1}\\cdots\\,\\zeta^{j_{n}}_{n}\\) is invariant under change of coordinates in \\(T_{\\lambda}\\Lambda\\) for any vectors \\(\\{\\xi_{1},\\,\\cdots,\\xi_{m},\\zeta_{1},\\,\\cdots,\\zeta_{n}\\}\\) in this space.
Footnote 14: Also named first fundamental form [28].
Footnote 15: In the same way that the scalar product between two vectors in the usual Euclidean space is independent of the choice of coordinates.
## Appendix B Proof of Proposition 2
Let \\(\\gamma\\) be a geodesic in \\(\\mathcal{M}\\) with curvilinear parametrization, i.e. with \\(|\\gamma^{\\prime}(s)|=1\\). Writing \\(\\gamma=\\gamma(s)\\) and \\(\\gamma^{\\prime}=\\frac{\\mathrm{d}}{\\mathrm{d}s}\\gamma(s)\\), we have \\(\\frac{\\mathrm{d}}{\\mathrm{d}s}\\,g_{\\gamma(s)}=\\partial_{i}g_{\\gamma}\\,\\gamma^{ \\prime i}\\) and \\(\\frac{\\mathrm{d}^{2}}{\\mathrm{d}s^{2}}\\,g_{\\gamma(s)}=\\partial_{ij}g_{\\gamma}\\, \\gamma^{\\prime i}\\gamma^{\\prime j}+\\partial_{k}g_{\\gamma}\\,\\gamma^{\\prime \\prime k}\\), where we write abusively \\(\\partial_{i}g_{\\gamma}=\\partial_{i}g_{\\lambda}|_{\\lambda=\\gamma(s)}\\) and similarly for second order derivative.
We now need some elements of differential geometry. Since \\(\\gamma\\) is a geodesic in \\(\\mathcal{M}\\), it respects the second order differential equation \\(\\gamma^{\\prime\\prime k}+\\Gamma^{k}_{ij}\\,\\gamma^{\\prime i}\\gamma^{\\prime j}=0\\), where the values \\(\\Gamma^{k}_{ij}=\\frac{1}{2}\\,\\mathcal{G}^{lk}\\left(\\partial_{j}\\,\\mathcal{G}_{li}+\\partial_{i}\\,\\mathcal{G}_{jl}-\\partial_{l}\\,\\mathcal{G}_{ij}\\right)\\) are the Christoffel symbols [28] derived from the metric \\(\\mathcal{G}_{ij}\\). Therefore, we get
\\[\\frac{{\\rm d}^{2}}{{\\rm d}s^{2}}\\,g_{\\gamma}\\ =\\ \\partial_{ij}g_{\\gamma}\\,\\gamma^{\\prime i}\\gamma^{\\prime j}-\\partial_{k}g_{\\gamma}\\,\\Gamma^{k}_{ij}\\,\\gamma^{\\prime i}\\gamma^{\\prime j} \\tag{15}\\]
\\[\\frac{{\\rm d}^{2}}{{\\rm d}s^{2}}\\,g_{\\gamma}\\ =\\ \\nabla_{ij}g_{\\gamma}\\,\\gamma^{\\prime i}\\gamma^{\\prime j}, \\tag{16}\\]
where \\(\\nabla_{i}g_{\\gamma}=\\partial_{i}g_{\\gamma}\\) and \\(\\nabla_{ij}g_{\\gamma}=\\nabla_{i}\\nabla_{j}g_{\\gamma}=\\partial_{ij}g_{\\gamma}-\\partial_{k}g_{\\gamma}\\,\\Gamma^{k}_{ij}\\) are by definition the first order \\(i\\) and the second order \\(ij\\) _covariant derivatives_ of \\(g_{\\gamma}\\) respectively [28]. In addition, we can easily compute that for \\({\\cal M}=(\\Lambda,{\\cal G}_{ij})\\),
\\[\\Gamma^{k}_{ij}\\ =\\ {\\cal G}^{kl}\\,\\langle\\partial_{ij}g_{\\lambda},\\partial_{ l}g_{\\lambda}\\rangle. \\tag{17}\\]
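As a sanity check, both \\(\\mathcal{G}_{ij}=\\langle\\partial_{i}g_{\\lambda},\\partial_{j}g_{\\lambda}\\rangle\\) and the symbols (17) can be evaluated numerically for any sampled atom family. The Python sketch below (an illustration only) approximates the parameter derivatives by central finite differences and the \\(L^{2}(X)\\) scalar products by discrete sums; `g_of_lam` is a hypothetical callable returning the sampled atom.

```python
import numpy as np

def metric_and_christoffel(g_of_lam, lam, h=1e-4):
    """Numerical check of Eq. (17): Gamma^k_ij = G^{kl} <d_ij g, d_l g>,
    with G_ij = <d_i g, d_j g>; derivatives by central finite differences,
    scalar products approximated by discrete sums (np.dot)."""
    lam = np.asarray(lam, dtype=float)
    P = len(lam)
    e = np.eye(P) * h
    d1 = [(g_of_lam(lam + e[i]) - g_of_lam(lam - e[i])) / (2 * h)
          for i in range(P)]
    d2 = [[(g_of_lam(lam + e[i] + e[j]) - g_of_lam(lam + e[i] - e[j])
            - g_of_lam(lam - e[i] + e[j]) + g_of_lam(lam - e[i] - e[j]))
           / (4 * h * h) for j in range(P)] for i in range(P)]
    G = np.array([[np.dot(d1[i], d1[j]) for j in range(P)] for i in range(P)])
    Ginv = np.linalg.inv(G)
    Gamma = np.array([[[sum(Ginv[k, l] * np.dot(d2[i][j], d1[l])
                            for l in range(P))
                        for j in range(P)] for i in range(P)]
                      for k in range(P)])
    return G, Gamma
```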
The lower bound of the proposition comes simply from the projection of \\(\\frac{{\\rm d}^{2}}{{\\rm d}s^{2}}\\,g_{\\gamma(s)}\\) onto \\(g_{\\gamma}\\). Indeed, for any \\(\\lambda\\in\\Lambda\\), since \\(\\|g_{\\lambda}\\|^{2}=\\langle g_{\\lambda},g_{\\lambda}\\rangle=1\\), \\(\\langle\\partial_{i}g_{\\lambda},g_{\\lambda}\\rangle=0\\) and \\(\\langle\\partial_{ij}g_{\\lambda},g_{\\lambda}\\rangle=-{\\cal G}_{ij}\\). By (15), \\(\\langle\\frac{{\\rm d}^{2}}{{\\rm d}s^{2}}\\,g_{\\gamma(s)},g_{\\gamma}\\rangle= \\langle\\partial_{ij}g_{\\gamma},g_{\\gamma}\\rangle\\,\\gamma^{\\prime i}\\gamma^{ \\prime j}=-{\\cal G}_{ij}\\,\\gamma^{\\prime i}\\gamma^{\\prime j}=-1\\), and using Cauchy-Schwarz we get \\(\\|\\frac{{\\rm d}^{2}}{{\\rm d}s^{2}}\\,g_{\\gamma(s)}\\|\\geq 1\\). Therefore, for \\(\\epsilon>0\\) and \\(\\gamma_{\\xi}\\ :\\ [0,\\epsilon]\\to\\Lambda\\), a segment of geodesic starting from \\(\\lambda\\) with unit speed \\(\\xi\\),
\\[{\\cal K}\\ \\geq\\ \\sup_{\\xi:|\\xi|=1}\\|\\frac{{\\rm d}^{2}}{{\\rm d}s^{2}}\\,g_{ \\gamma_{\\xi}(s)}\\big{|}_{s=0}\\|\\ \\geq\\ 1.\\]
For the upper bound, coming back to any geodesic \\(\\gamma\\), we need to analyze directly \\(\\|\\frac{{\\rm d}^{2}}{{\\rm d}s^{2}}\\,g_{\\gamma(s)}\\|^{2}\\). Using (16) and the expression (17) of the Christoffel symbols above, we have \\(\\|\\frac{{\\rm d}^{2}}{{\\rm d}s^{2}}\\,g_{\\gamma(s)}\\|^{2}=\\|\\nabla_{ij}\\,g_{\\gamma}\\,\\gamma^{\\prime i}\\gamma^{\\prime j}\\|^{2}\\leq\\langle\\nabla_{ij}\\,g_{\\gamma},\\nabla_{kl}\\,g_{\\gamma}\\rangle\\,{\\cal G}^{ik}\\,{\\cal G}^{jl}\\), where we used \\(|\\gamma^{\\prime}|=1\\) and the Cauchy-Schwarz (CS) inequality expressed in the Einstein summation notation on rank-2 tensors. The latter states that, for the tensors \\(A_{ij}=\\nabla_{ij}\\,g_{\\gamma}\\) and \\(B^{ij}=\\gamma^{\\prime i}\\gamma^{\\prime j}\\), \\(|A_{ij}B^{ij}|^{2}\\leq|A_{ij}\\,A_{kl}\\,{\\cal G}^{ki}\\,{\\cal G}^{lj}|\\,|B^{ij}\\,B^{kl}\\,{\\cal G}_{ki}\\,{\\cal G}_{lj}|\\), the equality holding if the two tensors are multiples of each other. We prove in [31] the general statement for rank-\\(n\\) tensors as a simple consequence of the positive-definiteness of \\({\\cal G}_{ij}\\).
Therefore, taking \\(\\gamma=\\gamma_{\\xi}\\), and since \\(\\gamma_{\\xi}(0)=\\lambda\\),
\\[{\\cal K}\\ \\leq\\ \\sup_{\\lambda\\in\\Lambda}\\,\\left[\\left\\langle\\nabla_{ij}\\,g_{\\lambda},\\nabla_{kl}\\,g_{\\lambda}\\right\\rangle{\\cal G}^{ik}\\,{\\cal G}^{jl}\\,\\right]^{\\frac{1}{2}}. \\tag{18}\\]
In the companion Technical Report [31], we prove that this inequality is also equivalent to
\\[{\\cal K}\\ \\leq\\ \\sup_{\\lambda\\in\\Lambda}\\,\\,\\big{[}\\,R(\\lambda)+\\|\\Delta\\,g_{\\lambda}\\|^{2}\\,\\big{]}^{\\frac{1}{2}},\\]
where \\(R\\) is the scalar curvature of \\({\\cal M}\\), i.e. the quantity \\(R=R_{ijkl}\\,{\\cal G}^{ik}\\,{\\cal G}^{jl}\\) contracted from the curvature tensor \\(R_{iklm}=\\frac{1}{2}(\\partial_{kl}{\\cal G}_{im}+\\partial_{im}{\\cal G}_{kl}-\\partial_{km}{\\cal G}_{il}-\\partial_{il}{\\cal G}_{km})+{\\cal G}_{np}(\\Gamma^{n}_{kl}\\,\\Gamma^{p}_{im}-\\Gamma^{n}_{km}\\,\\Gamma^{p}_{il})\\), and \\(\\Delta g_{\\lambda}={\\cal G}_{ij}\\,\\nabla^{i}\\,\\nabla^{j}\\,g_{\\lambda}\\) is the Laplace-Beltrami operator applied to \\(g_{\\lambda}\\). The curvature \\(R\\) requires only the knowledge of \\({\\cal G}_{ij}(\\lambda)\\) (and its derivatives), implying just one step of scalar product computations, i.e. integrations in \\(L^{2}(X)\\).
The reader who does not want to deal with differential geometry can however get rid of the covariant derivatives of Equation (18) by replacing them with usual derivatives. This however provides a weaker bound. Indeed, using the expression (17) of the Christoffel symbols, some easy calculation provides \\(0\\leq\\left\\langle\\nabla_{ij}\\,g_{\\lambda},\\nabla_{kl}\\,g_{\\lambda}\\right\\rangle{\\cal G}^{ik}\\,{\\cal G}^{jl}=\\left\\langle\\partial_{ij}\\,g_{\\lambda},\\partial_{kl}\\,g_{\\lambda}\\right\\rangle{\\cal G}^{ik}\\,{\\cal G}^{jl}-a_{ijk}\\,a_{lmn}\\,{\\cal G}^{il}\\,{\\cal G}^{jm}\\,{\\cal G}^{kn}\\), with \\(a_{ijk}=\\langle\\partial_{ij}g_{\\lambda},\\partial_{k}g_{\\lambda}\\rangle\\).
Therefore, \\(\\left\\langle\\nabla_{ij}\\,g_{\\lambda},\\nabla_{kl}\\,g_{\\lambda}\\right\\rangle{\\cal G}^{ik}\\,{\\cal G}^{jl}\\leq\\left\\langle\\partial_{ij}\\,g_{\\lambda},\\partial_{kl}\\,g_{\\lambda}\\right\\rangle{\\cal G}^{ik}\\,{\\cal G}^{jl}\\), from the positive definiteness of \\(\\mathcal{G}^{ij}\\) and \\(\\mathcal{G}_{ij}\\). Indeed, if we write \\(W^{ijklmn}=\\mathcal{G}^{il}\\mathcal{G}^{jm}\\mathcal{G}^{kn}\\), and if we gather the indices \\(ijk\\) and \\(lmn\\) in the two multi-indices16 \\(I=(i,j,k)\\) and \\(L=(l,m,n)\\), \\(W^{IL}\\) can be seen as a 2-D matrix in \\(\\mathbb{R}^{P^{3}\\times P^{3}}\\). It is then easy to check that the \\(P^{3}\\) eigenvectors of \\(W^{IL}\\) are given by the \\(P^{3}\\) combinations of products of three of the \\(P\\) eigenvectors of \\(\\mathcal{G}^{ij}\\), i.e. the covariant vectors \\(\\zeta_{i}\\) respecting the equation \\(\\mathcal{G}^{ij}\\,\\zeta_{j}=\\mu\\,\\delta^{ij}\\,\\zeta_{j}\\) for a certain \\(\\mu=\\mu(\\zeta)>0\\). The matrix \\(\\mathcal{G}^{ij}\\) being positive, the eigenvalues of \\(W^{IL}\\) are thus all positive, and \\(W^{IL}\\) is positive. Therefore, \\(a_{I}W^{IL}a_{L}\\geq 0\\) for any tensor \\(a_{I}=a_{ijk}\\). \\(\\blacksquare\\)
Footnote 16: This can be seen as a relabelling of the \\(P^{3}\\) combinations of values for \\(ijk\\) into \\(P^{3}\\) different one-number indices \\(I\\).
## Appendix C Proof of Lemma 2
Recall that we use the gradient ascent defined from the optimization function \\(\\phi_{r}\\) such that \\(\\phi_{r+1}(k)=\\phi_{r}(k)+t_{r}\\,\\xi_{r}(k)\\), for a sequence of positive step sizes \\(t_{r}\\) increasing \\(S_{u}\\) at each step, and for a step direction \\(\\xi_{r}^{i}(\\lambda)\\triangleq|\\nabla S_{u}(\\phi_{r}(\\lambda))|^{-1}\\,\\nabla^{i}S_{u}(\\phi_{r}(\\lambda))\\). From this definition, starting from \\(k\\in\\Lambda\\), if \\(\\lim_{r\\to+\\infty}\\phi_{r}(k)=k^{\\infty}\\in\\Lambda\\) exists, then \\(k^{\\infty}\\) is a point where \\(\\nabla^{i}S_{u}(k^{\\infty})=0\\) for all \\(i\\), since \\(S_{u}(\\phi_{r+1}(k))=S_{u}(\\phi_{r}(k))+t_{r}\\,|\\nabla S_{u}(\\phi_{r}(k))|+O(t_{r}^{2})\\).
How may the trajectory \\(\\mathcal{T}_{k}=\\{\\phi_{r}(k):r\\in\\mathbb{N}\\}\\) contain a point \\(\\lambda^{\\prime}\\) satisfying (14)? Let us write \\(\\gamma_{r}(s)\\) for the geodesic linking \\(\\phi_{r}(k)\\) to \\(\\lambda_{M}\\), and define the _distance function_\\(\\zeta_{r}=d_{\\mathcal{G}}(\\lambda_{M},\\phi_{r}(k))\\). We have thus \\(\\gamma_{r}(0)=\\phi_{r}(k)\\) and \\(\\gamma_{r}(\\zeta_{r})=\\lambda_{M}\\), where \\(\\lambda_{M}\\) is the global maximum of \\(S_{u}\\).
_Case 1._ _If \\(\\xi_{0}^{i}{\\gamma_{0}^{\\prime}}^{j}(0)\\,\\mathcal{G}_{ij}(k)<0\\), i.e. the optimization starts in the wrong direction._ The function \\(\\psi(s)=S_{u}(\\gamma_{0}(s))\\) is twice differentiable over \\([0,\\zeta_{0}]\\) and, for \\(s\\) close to zero, we have \\(\\psi(0)>\\psi(s)\\) since \\(\\psi^{\\prime}(0)=\\partial_{i}S_{u}(k)\\,{\\gamma_{0}^{\\prime}}^{i}(0)=|\\nabla S_{u}(k)|\\,\\xi_{0}^{i}{\\gamma_{0}^{\\prime}}^{j}(0)\\,\\mathcal{G}_{ij}(k)<0\\).
Since \\(\\lambda_{M}\\) is a global maximum of \\(S_{u}\\), \\(\\psi(0)<\\psi(\\zeta_{0})=S_{u}(\\lambda_{M})\\). Therefore, there exists an \\(s^{*}\\in(0,\\zeta_{0})\\) that minimizes \\(\\psi\\), i.e. \\(\\psi^{\\prime}(s^{*})=0\\) with \\(\\psi(s^{*})<\\psi(0)\\). For \\(\\lambda_{*}=\\gamma_{0}(s^{*})\\), this implies that \\(\\lambda_{*}\\) is critical, since \\(\\psi^{\\prime}(s^{*})=\\partial_{i}S_{u}(\\lambda_{*}){\\gamma_{0}^{\\prime}}^{i}(s^{*})=0\\). From Lemma 1, \\(S_{u}(\\lambda_{M})-S_{u}(\\lambda_{*})\\leq\\frac{1}{2}\\|u\\|^{2}d_{\\mathcal{G}}(\\lambda_{M},\\lambda_{*})^{2}\\,(1+\\mathcal{K})<\\frac{1}{2}\\|u\\|^{2}d_{\\mathcal{G}}(\\lambda_{M},k)^{2}\\,(1+\\mathcal{K})\\), since \\(d_{\\mathcal{G}}(\\lambda_{M},\\lambda_{*})<d_{\\mathcal{G}}(\\lambda_{M},k)\\). Finally, for any \\(\\lambda^{\\prime}\\in\\mathcal{T}_{k}\\), \\(S_{u}(\\lambda_{M})-S_{u}(\\lambda^{\\prime})\\leq S_{u}(\\lambda_{M})-S_{u}(k)\\leq S_{u}(\\lambda_{M})-S_{u}(\\lambda_{*})\\leq\\frac{1}{2}\\|u\\|^{2}\\,d_{\\mathcal{G}}(\\lambda_{M},k)^{2}\\,(1+\\mathcal{K})\\), since \\(S_{u}(k)\\geq S_{u}(\\lambda_{*})\\).
_Case 2._ _If \\(\\xi_{0}^{i}{\\gamma_{0}^{\\prime}}^{j}(0)\\,\\mathcal{G}_{ij}(k)=0\\)._ We have right away \\({\\gamma_{0}^{\\prime}}^{i}(0)\\partial_{i}S_{u}(k)=0\\), and \\(k\\) is a critical point in the direction \\(\\lambda_{M}\\). Lemma 1 applied on \\(k\\) gives \\(S_{u}(\\lambda_{M})-S_{u}(k)\\leq\\frac{1}{2}\\,\\|u\\|^{2}\\,d_{\\mathcal{G}}(\\lambda _{M},k)^{2}\\,(1+\\mathcal{K})\\). Since \\(S_{u}(\\lambda_{M})-S_{u}(\\lambda^{\\prime})\\leq S_{u}(\\lambda_{M})-S_{u}(k)\\) for any \\(\\lambda^{\\prime}\\in\\mathcal{T}_{k}\\), Equation (14) holds.
_Case 3._ _If \\(\\xi_{0}^{i}{\\gamma_{0}^{\\prime}}^{j}(0)\\,\\mathcal{G}_{ij}(k)>0\\)._ Let us analyze the behavior of the distance function \\(\\zeta_{r}\\).
Let us introduce the function \\(d_{M}(\\lambda)=d_{\\mathcal{G}}(\\lambda_{M},\\lambda)\\). As in the Euclidean space, it is easy to prove17 that \\(\\nabla^{i}d_{M}(\\lambda)=-{\\gamma^{\\prime}}^{i}(0)\\) if \\(\\gamma\\) is the geodesic linking \\(\\lambda=\\gamma(0)\\) to \\(\\lambda_{M}\\). Therefore, since \\(\\zeta_{r+1}=d_{M}(\\phi_{r+1}(k))\\), a Taylor expansion of \\(d_{M}(\\lambda)\\) around \\(\\lambda=\\phi_{r}(k)\\) provides
Footnote 17: The interested reader will find a proof of this basic differential geometry result in the companion Technical Report [31].
\\[\\zeta_{r+1}\\ =\\ \\zeta_{r}\\ -\\ t_{r}\\,\\xi_{r}^{i}(k)\\,{\\gamma_{r}^{\\prime}}^{j}(0)\\, \\mathcal{G}_{ij}(\\phi_{r}(k))\\ +\\ O(t_{r}^{2}). \\tag{19}\\]
For \\(r=0\\), if \\(t_{0}\\) is sufficiently small, \\(\\zeta_{1}<\\zeta_{0}\\), and \\(\\zeta_{r}\\) either has a local minimum at a particular step \\(r_{m}>0\\), or it decreases monotonically and converges to a value \\(\\zeta_{\\infty}=\\lim_{r\\to\\infty}\\zeta_{r}<\\zeta_{0}\\).
_(i) \\(\\zeta_{r}\\) has a local minimum \\(\\zeta_{r_{m}}<\\zeta_{0}\\) at \\(r_{m}>0\\) :_ Then, \\(\\zeta_{r_{m}+1}>\\zeta_{r_{m}}\\) and, using (19) with some implicit dependences, \\(\\zeta_{r_{m}+1}-\\zeta_{r_{m}}=-\\,t_{r_{m}}\\,\\gamma^{\\prime\\,i}_{r_{m}}(0)\\,\\xi^{j}_{r_{m}}\\,{\\cal G}_{ij}+O(t_{r_{m}}^{2})\\). Therefore, for a sufficiently small step \\(t_{r_{m}}\\), \\(\\gamma^{\\prime\\,i}_{r_{m}}(0)\\,\\xi^{j}_{r_{m}}\\,{\\cal G}_{ij}<0\\) and we are in the same hypothesis as _Case 1_, with the point \\(\\lambda^{\\prime}=\\phi_{r_{m}}(k)\\in{\\cal T}_{k}\\) instead of \\(k\\). We then obtain \\(S_{u}(\\lambda_{M})-S_{u}(\\lambda^{\\prime})\\leq\\frac{1}{2}\\|u\\|^{2}\\,d_{\\cal G}(\\lambda_{M},\\lambda^{\\prime})^{2}\\,(1+{\\cal K})<\\frac{1}{2}\\|u\\|^{2}\\,d_{\\cal G}(\\lambda_{M},k)^{2}\\,(1+{\\cal K})\\), since \\(d_{\\cal G}(\\lambda_{M},\\lambda^{\\prime})=\\zeta_{r_{m}}<\\zeta_{0}=d_{\\cal G}(\\lambda_{M},k)\\).
_(ii) If \\(\\zeta_{r}\\) decreases monotonically for \\(r>0\\) :_ Since \\(\\zeta_{r}\\geq 0\\), the limit \\(\\lim_{r\\to\\infty}\\zeta_{r}\\) exists and equals a value \\(\\zeta_{\\infty}<\\zeta_{0}\\). However, it is not guaranteed that the sequence \\(\\{\\phi_{r}(k)\\}\\) converges to a point of \\(\\Lambda\\). Fortunately, since for all \\(r>0\\), \\(\\phi_{r}(k)\\) remains in the finite volume \\(V_{0}=\\{\\lambda\\in\\Lambda:d_{M}(\\lambda)\\leq d_{M}(k)\\}\\), this sequence is bounded in the finite dimensional space \\(\\Lambda\\). Therefore, from the Bolzano-Weierstrass theorem on the metric space \\((\\Lambda,d_{\\cal G}(\\cdot,\\cdot))\\), we can find a convergent subsequence \\(\\{r_{i}\\in\\mathbb{N}:r_{i+1}>r_{i}\\}\\) such that \\(\\lim_{i\\to\\infty}\\phi_{r_{i}}(k)=k_{\\infty}\\in V_{0}\\). At this point, we have \\(\\nabla^{i}S_{u}(k_{\\infty})=0\\) for all \\(i\\). So, \\(k_{\\infty}\\) is a critical point and, from Lemma 1,
\\[S_{u}(\\lambda_{M})-S_{u}(k_{\\infty})\\ \\leq\\ \\tfrac{1}{2}\\|u\\|^{2}\\,d_{\\cal G}( \\lambda_{M},k_{\\infty})^{2}\\,(1+{\\cal K}).\\]
From now on, we abuse notation and write \\(\\phi_{r_{i}}(k)=\\phi_{i}(k)\\). Since \\(\\zeta_{\\infty}^{2}=d_{\\cal G}(\\lambda_{M},k_{\\infty})^{2}<d_{\\cal G}(\\lambda_{M},k)^{2}=\\zeta_{0}^{2}\\), we can find a \\(\\delta>0\\) such that \\(d_{\\cal G}(\\lambda_{M},k_{\\infty})^{2}\\,+\\,\\delta\\,<\\,d_{\\cal G}(\\lambda_{M},k)^{2}\\). Therefore, because \\(\\lim_{i\\to\\infty}S_{u}(\\phi_{i}(k))=S_{u}(k_{\\infty})\\) by continuity of \\(S_{u}\\), and since \\(S_{u}(\\phi_{i}(k))\\) increases monotonically with \\(i\\), there exists an \\(i^{\\prime}>0\\) such that \\(S_{u}(k_{\\infty})-S_{u}(\\phi_{i^{\\prime}}(k))\\leq\\frac{1}{2}\\|u\\|^{2}\\,\\delta\\,(1+{\\cal K})\\). With \\(\\lambda^{\\prime}=\\phi_{i^{\\prime}}(k)\\in{\\cal T}_{k}\\), we finally get \\(S_{u}(\\lambda_{M})-S_{u}(\\lambda^{\\prime})\\leq S_{u}(\\lambda_{M})-S_{u}(k_{\\infty})+\\frac{1}{2}\\|u\\|^{2}\\,\\delta\\,(1+{\\cal K})\\leq\\frac{1}{2}\\|u\\|^{2}\\,\\left(d_{\\cal G}(\\lambda_{M},k_{\\infty})^{2}+\\delta\\right)(1+{\\cal K})<\\frac{1}{2}\\|u\\|^{2}\\,d_{\\cal G}(\\lambda_{M},k)^{2}\\,(1+{\\cal K})\\). This gives the result and concludes the proof. \\(\\blacksquare\\)
## References
* [1] S. Chen, D. Donoho, and M. Saunders, \"Atomic decomposition by basis pursuit,\" _SIAM Journ. Sci. Comp._, vol. 20, no. 1, pp. 33-61, August 1998.
* [2] S. Mallat and Z. Zhang, \"Matching pursuit with time-frequency dictionaries,\" _IEEE T. Signal. Proces._, vol. 41, no. 12, pp. 3397-3415, December 1993.
* [3] J. Tropp, \"Greed is good: algorithmic results for sparse approximation,\" _IEEE T. Inform. Theory._, vol. 50, no. 10, pp. 2231-2242, October 2004.
* [4] E. Le Pennec and S. Mallat, \"Sparse geometric image representations with bandelets,\" _IEEE T. Image. Process._, vol. 14, no. 4, pp. 423-438, April 2005.
* [5] D. Donoho and X. Huo, \"Uncertainty principles and ideal atom decomposition,\" _IEEE T. Inform. Theory._, vol. 47, no. 7, pp. 2845-2862, November 2001.
* [6] R. Neff and A. Zakhor, \"Very low bit rate video coding based on matching pursuits,\" _IEEE T. Circ. Syst. Vid._, vol. 7, no. 1, pp. 158-171, February 1997.
* [7] M. Goodwin and M. Vetterli, \"Matching pursuit and atomic signal models based on recursive filterbanks,\" _IEEE T. Signal. Proces._, vol. 47, no. 7, pp. 1890-1902, July 1999.
* [8] P. Durka, D. Ircha, and K. Blinowska, \"Stochastic time-frequency dictionaries for matching pursuit,\" _IEEE T. Signal. Proces._, vol. 49, no. 3, pp. 507-510, March 2001.
* [9] R. Gribonval and E. Bacry, \"Harmonic decomposition of audio signals with matching pursuit,\" _IEEE T. Signal. Proces._, vol. 51, no. 1, pp. 101-111, January 2003.
* [10] R. Gribonval and P. Vandergheynst, \"On the exponential convergence of matching pursuits in quasi-incoherent dictionaries,\" _IEEE T. Inform. Theory._, vol. 52, no. 1, pp. 255-261, January 2006.
* [11] O. Divorra Escoda, L. Granai, and P. Vandergheynst, \"On the use of a priori information for sparse signal approximations,\" _IEEE T. Signal. Proces._, vol. 54, no. 9, pp. 3468-3482, September 2006.
* [12] A. Rahmoune, P. Vandergheynst, and P. Frossard, \"Flexible motion-adaptive video coding with redundant expansions,\" _IEEE T. Circ. Syst. Vid._, vol. 16, no. 2, pp. 178-190, February 2006.
* [13] C. De Vleeschouwer and A. Zakhor, \"In-loop atom modulus quantization for matching pursuit and its application to video coding,\" _IEEE T. Image. Process._, vol. 12, no. 10, pp. 1226-1242, October 2003.
* [14] R. Gribonval, "Fast matching pursuit with a multiscale dictionary of Gaussian chirps," _IEEE T. Signal. Proces._, vol. 49, no. 5, pp. 994-1001, May 2001.
* [15] R. Figueras i Ventura, P. Vandergheynst, and P. Frossard, \"Low-rate and flexible image coding with redundant representations,\" _IEEE T. Image. Process._, vol. 15, no. 3, pp. 726-739, March 2006.
* [16] C. De Vleeschouwer and B. Macq, \"Subband dictionaries for low-cost matching pursuits of video residues,\" _IEEE T. Circ. Syst. Vid._, vol. 9, no. 7, pp. 984-993, October 1999.
* [17] P. Czerepinski, C. Davies, N. Canagarajah, and D. Bull, \"Matching pursuits video coding: dictionaries and fast implementation,\" _IEEE T. Circ. Syst. Vid._, vol. 10, no. 7, pp. 1103-1115, October 2000.
* [18] R. Neff and A. Zakhor, \"Matching pursuit video coding. part i: Dictionary approximation,\" _IEEE T. Circ. Syst. Vid._, vol. 12, no. 1, pp. 13-26, January 2002.
* [19] Y.-T. Chau, W.-L. Hwang, and C.-L. Huang, \"Gain-shape optimized dictionary for matching pursuit video coding,\" _Signal Processing_, vol. 83, pp. 1937-1943, September 2003.
* [20] P. Schmid-Saugeon and A. Zakhor, \"Dictionary design for matching pursuit and application to motion compensated video coding,\" _IEEE T. Circ. Syst. Vid._, vol. 14, no. 6, pp. 880-886, June 2004.
* [21] M. Wakin, D. Donoho, H. Choi, and R. Baraniuk, \"The multiscale structure of non-differentiable image manifolds,\" in _Wavelets XI. Proceedings of the SPIE_, vol. 5914, San Diego, CA, August 2005, pp. 413-429.
* [22] D. Donoho and C. Grimes, \"Image Manifolds which are Isometric to Euclidean Space,\" _Journal of Mathematical Imaging and Vision_, vol. 23, no. 1, pp. 5-24, primes 2005.
* [23] S. Amari, \"Differential Geometry of Curved Exponential Families-Curvatures and Information Loss,\" _The Annals of Statistics_, vol. 10, no. 2, pp. 357-385, June 1982.
* [24] G. Watson and K. Gilholm, \"Signal and image feature extraction from local maxima of generalised correlation,\" _Pattern Recognition_, vol. 31, no. 11, pp. 1733-1745, November 1998.
* [25] I. Tosic, P. Frossard, and P. Vandergheynst, \"Progressive coding of 3-d objects based on overcomplete decompositions,\" _IEEE T. Circ. Syst. Vid._, vol. 16, no. 11, pp. 1338-1349, November 2006.
* [26] I. Bogdanova, P. Vandergheynst, and J.-P. Gazeau, \"Continuous wavelet transform on the hyperboloid,\" _Appl. Comput. Harmon. Anal._, 2005, (accepted).
* [27] S. Lang, _Differential manifolds_. Addison-Wesley Reading, Mass, 1972.
* [28] M. Carmo, _Riemannian Geometry_. Birkhauser, 1992.
* [29] R. Gribonval and P. Vandergheynst, \"On the exponential convergence of matching pursuits in quasi-incoherent dictionaries,\" _IEEE T. Inform. Theory._, vol. 52, no. 1, pp. 255-261, January 2006.
* [30] S. Mallat, _A Wavelet Tour of Signal Processing_. Academic Press., 1998.
* [31] L. Jacques and C. De Vleeschouwer, \"Discretization effects of continuous dictionary in matching pursuits: Density, convergence and optimization.\" UCL, Tech. Rep. TR-LJ-2007.01, July 2007, [http://www.tele.ucl.ac.be/](http://www.tele.ucl.ac.be/)\\(\\sim\\)jacques/files/TR-LJ-2007.01.pdf.
* [32] P. Niyogi, S. Smale, and S. Weinberger, \"Finding the homology of submanifolds with high confidence from random samples,\" _Manuscript, Toyota Technological Institute, Chicago, Illinois_, 2004.
* [33] M. Davenport, M. Duarte, M. Wakin, J. Laska, D. Takhar, K. Kelly, and R. Baraniuk, \"The smashed filter for compressive classification and target recognition,\" in _Computational Imaging V at SPIE Electronic Imaging_, San Jose, California, January 2007.
* [34] M. B. Wakin, \"The geometry of low-dimensional signal manifolds,\" Ph.D. dissertation, Rice University, Houston, TX, USA, August 2006.
* [35] I. Daubechies, _Ten Lectures on Wavelets_. Society for Industrial and Applied Mathematics, 1992.
* [36] S. T. Ali, J.-P. Antoine, and J.-P. Gazeau, _Coherent States, Wavelets, and their Generalizations_. New York: Springer-Verlag, 2000.
* [37] O. Ferreira and P. Oliveira, \"Subgradient Algorithm on Riemannian Manifolds,\" _Journal of Optimization Theory and Applications_, vol. 97, no. 1, pp. 93-104, April 1998.
* [38] D. Gabay, \"Minimizing a differentiable function over a differential manifold,\" _Journal of Optimization Theory and Applications_, vol. 37, no. 2, pp. 177-219, June 1982.
* [39] S. Boyd and L. Vandenberghe, _Convex Optimization_. Cambridge University Press, 2004.
* [40] R. Mahony and J. Manton, \"The Geometry of the Newton Method on Non-Compact Lie Groups,\" _Journal of Global Optimization_, vol. 23, no. 3, pp. 309-327, August 2002.
* [41] R. Figueras i Ventura, O. Divorra Escoda, and P. Vandergheynst, \"A matching pursuit full search algorithm for image approximations,\" EPFL, 1015 Ecublens, Tech. Rep., December 2004.
* [42] R. M. F. i Ventura, \"Sparse image approximation with application to flexible image coding,\" Ph.D. dissertation, Swiss Federal Institute of Technology, Lausanne, Switzerland, July 2005.
* [43] J.-P. Antoine, R. Murenzi, P. Vandergheynst, and S. Ali, _Two-dimensional Wavelets and Their Relatives_. Cambridge University Press, 2004.
* [44] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, \"Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition,\" in _In Proceedings of the 27th Annual Asilomar Conference on Signals, Systems and Computers_, November 1993.
* [45] E. Candes, J. Romberg, and T. Tao, \"Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,\" _IEEE T. Inform. Theory._, vol. 52, no. 2, pp. 489-509, 2006.
* [46] M. Duarte, M. Davenport, M. Wakin, and R. Baraniuk, \"Sparse Signal Detection from Incoherent Projections,\" in _IEEE International Conference on Acoustics, Speech and Signal Processing. ICASSP 2006 Proceedings._, vol. 3, May 2006.
* [47] H. Rauhut, K. Schnass, and P. Vandergheynst, \"Compressed sensing and redundant dictionaries,\" _IEEE T. Inform. Theory._, 2007, (submitted).
This paper studies the effect of discretizing the parametrization of a dictionary used for Matching Pursuit decompositions of signals. Our approach relies on viewing the continuously parametrized dictionary as an embedded manifold in the signal space on which the tools of differential (Riemannian) geometry can be applied. The main contribution of this paper is twofold. First, we prove that if a discrete dictionary reaches a minimal density criterion, then the corresponding discrete MP (dMP) is equivalent in terms of convergence to a weakened hypothetical continuous MP. Interestingly, the corresponding weakness factor depends on a density measure of the discrete dictionary. Second, we show that the insertion of a simple geometric gradient ascent optimization on the atom dMP selection maintains the previous comparison but with a weakness factor at least two times closer to unity than without optimization. Finally, we present numerical experiments confirming our theoretical predictions for decomposition of signals and images on regular discretizations of dictionary parametrizations.
_Keywords:_ Matching Pursuit, Riemannian geometry, Optimization, Convergence, Dictionary, Parametrization.
# A note on boundary-layer friction in baroclinic cyclones
Department of Meteorology, University of Reading, UK
Met Office, Exeter, UK
######
Ekman pumping, potential vorticity, friction velocity, baroclinic generation, cyclogenesis
where \\(P\\) is the Ertel-Rossby potential vorticity, \\(\\xi_{h}\\) is the vertical component of absolute vorticity on the boundary layer top, \\(H_{\\rm s}\\) is the surface heat flux, \\(\\rho_{0}\\) is the density, \\(h\\) is the boundary layer depth and the square brackets indicate a depth-average over the boundary layer, after Cooper _et al._ (1992). It is this PV which is seen in his Fig. 4(b) (our Fig. 1(b)), rather than the PV generated by the baroclinic mechanism which is the focus of Adamson _et al._ (2006).
The importance of the surface heat-flux pattern under conditions of a horizontally uniform SST field is demonstrated by Fig. 1(c), where we have repeated the control experiment of Beare (2007) without any turbulent heat fluxes. This makes the simulation closer to that of Adamson _et al._ (2006), since their study focussed on the drag, and so their boundary layer scheme only parameterised momentum transfer and had no turbulent heat fluxes present. Comparing Fig. 1(c) to Fig. 1(b) shows that the PV is now located to the north and east of the cyclone (between \(5-15\)E, \(50-60\)N), confirming that the boundary layer PV in the Beare (2007) control run is predominantly generated by turbulent heat fluxes. The PV generated by heat fluxes remains close to the surface and ahead of the cyclone centre (shown between \(10-20\)E, \(40-50\)N in Fig. 1(b)) during the course of the Beare (2007) simulation; i.e., it is not vented from the boundary layer. It therefore never reaches a position above the low centre and cannot prevent communication between upper- and lower-level anomalies.

Figure 1: Boundary layer depth-averaged potential vorticity (dashed contours, PV Units \(=10^{-6}\) K kg\({}^{-1}\) m\({}^{2}\) s\({}^{-1}\)) and near-surface potential temperature (solid contours at \(4\) K intervals) for (a) the Adamson _et al._ (2006) control experiment, (b) the Beare (2007) control experiment, (c) the Beare (2007) experiment without turbulent heat fluxes and (d) the Beare (2007) experiment with meridional SST gradient. Panel (a) is after \(6\) days, whilst panels (b)–(d) are after \(48\) hours, when the minimum surface pressure is approximately equal in all experiments. Values greater than \(1.5\) PVU are shaded and the zero contour is omitted for clarity. ‘L’ denotes the position of the low centre.
Beare's (2007) conclusion that the heat-flux generated PV is not dominant in the spin-down process is therefore justified. But the results are not contradictory to those of Adamson _et al._ (2006). Indeed, when the turbulent heat fluxes are switched off, the cyclone shows a slight filling of 2 hPa after \\(48\\) hours, which is consistent with the results of the PV inversion in Beare (2007) that the PV generated by the heat fluxes acts to deepen the cyclone.
We consider now the low-level jet seen in the simulations of Beare (2007). Formed by a reversal of the north-south temperature gradient generating an easterly wind shear, cold air wraps around the cyclone centre, producing a cold conveyor belt. This provides ideal conditions for the generation of large surface stress. The location of maximum surface stress, or equivalently of the friction velocity, is then found to have a large impact on cyclone development, consistent with the Ekman pumping mechanism. Such a low-level jet is not apparent in the LC1 lifecycle simulations of Adamson _et al._ (2006). Figure 2 shows the low-level winds in both simulations at an early stage of development. If these low-level winds are assumed to lie within the surface layer, the surface stress is given by the bulk aerodynamic formula:
\\[\\boldsymbol{\\tau}_{\\mathrm{s}}=\\rho_{0}C_{\\mathrm{D}}|\\mathbf{v_{1}}|\\mathbf{ v_{1}} \\tag{2}\\]
where \\(\\boldsymbol{\\tau}_{\\mathrm{s}}\\) is the horizontal surface stress vector, \\(C_{\\mathrm{D}}\\) is the drag coefficient and \\(\\mathbf{v_{1}}\\) is the horizontal wind vector on the lowest model level. The surface stress exerted on the cyclone is therefore proportional to the square of the low-level wind. It is noticeable that the strongest wind-speeds in Fig. 2(b) are to the southwest of the low centre, between \\(-15\\) and \\(-5\\)E, \\(50-55\\)N, whereas in Fig. 2(a) they are to the southeast, within the warm sector (\\(65-60\\)W, \\(45-50\\)N). Therefore, in the Adamson _et al._ (2006) experiment, the strongest winds, and therefore surface stress, are in a region of horizontal temperature gradients and hence significant PV generation. However, in Beare (2007) the low-level jet wrapping around the cyclone centre enhances wind-speeds to the southwest of the low, making the largest surface stress in a region of small horizontal temperature gradients and hence little PV generation.
By imposing a meridional SST gradient on the Beare (2007) experiment, we see both the Adamson _et al._ (2006) and Beare (2007) mechanisms at work in the same cyclone. The boundary layer depth-averaged PV, shown in Fig. 1(d), is now located to the north and east of the cyclone centre, similarly to Fig. 1(a) and consistent with Figs. 4, \\(5\\) and \\(11\\)(c) of Adamson _et al._ (2006).
Figure 3 may be compared with Figs. 9 and 10 of Adamson _et al._ (2006) and Fig. 4 of Beare (2007). Fig. 3(d) shows that the modified SST experiment has: (i) significant PV generation from the baroclinic mechanism, consistent with Adamson _et al._ (2006) (Fig. 3(a)); and, (ii) high values of friction velocity wrapping around the cyclone centre, consistent with Beare (2007) (Fig. 3(b)). The location and magnitude of the friction velocity is consistent with a low-level jet generating maximum surface stress in the well-mixed boundary layer. As discussed by Plant and Belcher (2007), PV generation through the baroclinic mechanism has some reinforcement from Ekman and turbulent heat-flux generation terms. The generation shown in Fig. 3(d) occurs between \(5\) and \(20\)E, \(55-65\)N, in a region well placed to allow ventilation from the boundary layer. Plant and Belcher (2007) discuss how this ventilation occurs by the cold conveyor belt at early stages of the lifecycle, transitioning to ventilation by the warm conveyor belt at later stages, as the cyclone wraps up. Once advected out of the boundary layer, the PV appears as a static stability anomaly above the cyclone centre.

Figure 2: \(50\) m wind vectors for (a) the Adamson _et al._ (2006) control experiment after \(4\) days, (b) the Beare (2007) control experiment at \(24\) hours, when the minimum surface pressure is approximately equal. ‘L’ denotes the position of the low centre. For brevity, we do not show results of the Beare (2007) experiment without turbulent heat fluxes or with a meridional SST gradient, as they are very similar to (b).
## 4 Conclusions
This note aimed to clarify why two recent papers have provided different emphases for the boundary layer's interaction with an extratropical cyclone. We have shown why the experiments of Beare (2007) did not find evidence for the baroclinic mechanism, due to the boundary layer PV distribution being dominated by dynamically unimportant PV generation from surface heat fluxes. We have also shown that the much weaker low-level jet in the cold air stream southwest of the low centre in the experiments of Adamson _et al._ (2006) limited the Ekman pumping.
Figure 3: Boundary layer depth-averaged potential vorticity generation by the baroclinic mechanism (PV Units per day, negative values dotted, zero contour omitted for clarity) plotted over friction velocity (values greater than 0.5 ms\\({}^{-1}\\) light grey, values greater than 1 ms\\({}^{-1}\\) dark grey) for (a) the Adamson _et al._ (2006) control experiment, (b) the Beare (2007) control experiment, (c) the Beare (2007) experiment without turbulent heat fluxes and (d) the Beare (2007) experiment with meridional SST gradient. Panel (a) is after 6 days, whilst panels (b)–(d) are after 48 hours, when the minimum surface pressure is approximately equal in all experiments. ‘L’ denotes the position of the low centre.
Through the addition of a meridional SST gradient to the simulations of Beare (2007), we have produced results which show both mechanisms at work. It is beyond the scope of this note to establish the relative importance of each mechanism. That may, for instance, require the development of techniques for PV inversion _within_ the boundary layer.
In reality, there is of course a spectrum of midlatitude cyclones, and it is plausible that each mechanism could be dominant in different types of cyclone. Ekman pumping is a direct mechanism, reducing the angular momentum of a pre-existing barotropic circulation, whilst the baroclinic PV mechanism is somewhat indirect, weakening the growth of a baroclinic wave. Certainly more work is thus required to form a complete understanding of the boundary layer processes in baroclinic cyclones.
## Acknowledgements
We would like to thank Andy Brown of the Met Office, UK, for helpful discussions of the work. I. Boutle is supported by NERC CASE award NER/S/C/2006/14273.
## References
* Adamson et al. (2006) Adamson, D. S., Belcher, S. E., Hoskins, B. J., and Plant, R. S. (2006). Boundary-layer friction in midlatitude cyclones. _Q. J. R. Meteorol. Soc._, **132**, 101-124.
* Beare (2007) Beare, R. J. (2007). Boundary layer mechanisms in extratropical cyclones. _Q. J. R. Meteorol. Soc._, **133**, 503-515.
* Cooper et al. (1992) Cooper, I. M., Thorpe, A. J., and Bishop, C. H. (1992). The role of diffusive effects on potential vorticity in fronts. _Q. J. R. Meteorol. Soc._, **118**, 629-647.
* Plant and Belcher (2007) Plant, R. S. and Belcher, S. E. (2007). Numerical simulation of baroclinic waves with a parameterized boundary layer. _J. Atmos. Sci._ In press.
* Shapiro and Keyser (1990) Shapiro, M. A. and Keyser, D. (1990). Fronts, jet streams and the tropopause. Pp 167-191 in _Extratropical Cyclones: The Erik Palmen Memorial Volume_. Amer. Meteorol. Soc: Boston.
* Thorncroft et al. (1993) Thorncroft, C. D., Hoskins, B. J., and McIntyre, M. E. (1993). Two paradigms of baroclinic-wave life-cycle behaviour. _Q. J. R. Meteorol. Soc._, **119**, 17-55.

The interaction between extratropical cyclones and the underlying boundary layer has been a topic of recent discussion in papers by Adamson _et al._ (2006) and Beare (2007). Their results emphasise different mechanisms through which the boundary layer dynamics may modify the growth of a baroclinic cyclone. By using different sea-surface temperature distributions and comparing the low-level winds, the differences are exposed and both of the proposed mechanisms appear to be acting within a single simulation.
# Extreme Learning Machine for land cover classification
Mahesh Pal
This paper is part of the proceedings of Map India 2008, the 11\({}^{\text{th}}\) annual international conference and exhibition on geospatial information, technology and application, held at Greater Noida, India, from 6-8 February 2008.
Website of the conference is: www.mapindia.org
Department of Civil Engineering
N. I. T. Kurukshetra, Haryana, 136119, INDIA
[email protected], 01744 233356, Fax: 01744 238050
## 1 Introduction
Within the last two decades, neural network classifiers, particularly the feed-forward multilayer perceptron using the back-propagation algorithm, have been extensively tested for different applications by the remote sensing community (Benediktsson et al., 1990; Hepner et al., 1990; Heermann and Khazenie, 1992; Civco, 1993; Schaale and Furrer, 1995; Tso and Mather, 2001). The popularity of neural network based classifiers is due to their ability to learn and generalize well with test data. In particular, neural networks make no prior assumptions about the statistics of the input data. This property makes neural networks an attractive solution for many land cover classification problems with remotely sensed data, whose underlying distribution is quite often unknown. The multilayer feed-forward network is one of the most widely used neural network architectures. Among the various learning algorithms, the error back-propagation algorithm is one of the most important and widely used algorithms in remote sensing. A number of studies have reported that the use of a back-propagation neural classifier involves problems in setting various parameters during training (Kavzoglu, 2001; Wilkinson, 1997). The choice of network architecture (i.e. the number of hidden layers and nodes in each layer, the learning rate as well as the momentum), the weight initialisation and the number of iterations required for training are some of the important parameters that affect the learning performance of these classifiers. Other shortcomings of the conventional back-propagation learning algorithm are its slow convergence rate and the fact that it can get stuck in a local minimum.
Recently, Huang et al. (2006) proposed an extreme learning machine based classification approach (also called a single hidden layer feed-forward neural network) with randomly assigned input weights and biases. They suggested that this classification approach does not require adjusting the input weights in the way a back-propagation method does, and found it to work well with different data sets in comparison to a back-propagation neural network. Keeping this in view, the present study compares the performance of the extreme learning machine with a back-propagation neural network using multispectral data.
## 2 Extreme Learning Machine
Let the training data with \(K\) samples be represented by \(\left\{\mathbf{x}_{i},\mathbf{y}_{i}\right\}\), where \(\mathbf{x}_{i}\in\mathbf{R}^{p}\) and \(\mathbf{y}_{i}\in\mathbf{R}^{q}\). A standard single hidden layer feed-forward neural network (Huang and Babri, 1997) having \(H\) hidden neurons and activation function \(f(x)\) can then be represented as:
\\[\\sum_{i=1}^{H}\\alpha_{i}\\;\\;f_{i}\\left(\\mathbf{x}_{j}\\right)=\\sum_{i=1}^{H} \\alpha_{i}\\;\\;f\\left(\\mathbf{w}_{i}\\;\\boldsymbol{\\cdot}\\;\\mathbf{x}_{j}\\;+\\;c _{i}\\right)=\\mathbf{e}_{j}\\;\\;\\;\\;\\;\\text{where}\\;j{=}1,\\; ,K, \\tag{1}\\]
Where \\(\\mathbf{w}_{i}\\) and \\(\\alpha_{i}\\) are the weight vectors connecting inputs and the \\(i\\)th hidden neurons and the \\(i\\)th hidden neurons and output neurons respectively, \\(c_{i}\\) is the threshold of the \\(i\\)th hidden neuron and \\(\\mathbf{e}_{j}\\)is the output from single hidden layer feed forward neural network (SHLFN) for the data point \\(j\\). The weight vector \\(\\mathbf{w}_{i}\\)is randomly generated and based on a continuous probability distribution (Huang et al., 2006).
Huang et al. (2006) noted that a standard single hidden layer feed-forward neural network with \(H\) hidden neurons and activation function \(f(x)\) can approximate the \(K\) training samples with zero error, such that:
\\[\\sum_{j=1}^{K}\\left\\|\\mathbf{e}_{j}-\\mathbf{o}_{j}\\right\\|=0 \\tag{2}\\]
and equation (1) can then be expressed as

\[\sum_{i=1}^{H}\alpha_{i}\,f\left(\mathbf{w}_{i}\cdot\mathbf{x}_{j}+c_{i}\right)=\mathbf{y}_{j},\qquad j=1,\ldots,K \tag{3}\]

for particular values of \(\alpha_{i}\), \(\mathbf{w}_{i}\) and \(c_{i}\).
Further, Huang et al., (2006) proposed that equation (3) can be written in a compact form and represented by the following equation:
\\[\\mathbf{A}\\,\\alpha=\\mathbf{Y} \\tag{4}\\]
Where \\(A\\) is called the hidden layer output matrix of the neural network (Huang and Babri, 1997) and defined as:
\\[A\\!\\left
\\[C=\\sum\\limits_{j=1}^{K}\\left(\\sum\\limits_{i=1}^{H}\\alpha_{i}\\ \\ f\\left(\\mathbf{w}_{i}\\ \\cdot \\mathbf{x}_{j}\\ +c_{i}\\right)-\\mathbf{o}_{j}\\right)^{2} \\tag{8}\\]
which can be minimised to find suitable values of \(\mathbf{w}_{i}^{\prime}\), \(c_{i}^{\prime}\) and \(\alpha^{\prime}\) \((i=1,\ldots,H)\).

In case the hidden layer output matrix of the neural network (i.e. \(A\)) is unknown, a gradient based learning algorithm minimises \(\left\|\mathbf{A}\,\alpha-\mathbf{Y}\right\|\) by iteratively adjusting a vector \(W\) (i.e. a set of _weights_ \((\mathbf{w}_{i},\alpha_{i})\) _and biases_ \((c_{i})\)) using the following relationship:
\\[\\mathbf{W}_{p}=\\mathbf{W}_{p-1}-\\eta\\ \\frac{\\partial\\,C\\big{(}\\mathbf{W} \\big{)}}{\\partial\\,\\mathbf{W}} \\tag{9}\\]
where \\(\\eta\\)is the learning rate and backpropagation learning algorithms is one of the most popular algorithm used to compute the gradients in a feed forward neural network.
Recently, the study carried out by Huang et al. (2003) proved that a single hidden layer feed-forward neural network with randomly assigned input weights and hidden layer biases, and with almost any nonzero activation function, can universally approximate any continuous function on any input data set. Huang et al. (2006) therefore suggested an alternative way to train a SHLFN, by finding a least squares solution \(\alpha^{\prime}\) of the linear system represented by equation (4):
\\[\\left\\|\\mathbf{A}\\big{(}\\mathbf{w}_{1},\\) ,\\(\\mathbf{w}_{H},c_{1},\\) ,\\(c_{H}\\big{)}\\alpha^{\\shortmid}-\\mathbf{Y} \\right\\|\\ =\\ \\underset{\\alpha}{min}\\left\\|\\mathbf{A}\\big{(}\\mathbf{w}_{1},\\) , \\(\\mathbf{w}_{H},c_{1},\\) ,\\(c_{H}\\big{)}\\alpha-\\mathbf{Y}\\right\\| \\tag{10}\\]
If the number \\(H=K\\), matrix \\(A\\) is square and invertible but in most of the cases number of hidden nodes are less than the number of training samples, which makes matrix \\(A\\) to be a non square matrix and there may not exist \\(\\mathbf{w}_{i},c_{i},\\alpha_{i}\\)\\((i=1,\\) \\(H)\\) such that \\(\\mathbf{A}\\,\\alpha=\\mathbf{Y}\\). To overcome this problem, Huang et al., (2006) proposed in using smallest norm least squares solution of \\(\\mathbf{A}\\,\\alpha=\\mathbf{Y}\\), thus, the solution of equation 4 becomes:
\\[\\alpha^{\\shortmid}=\\mathbf{A}^{\\text{@}}\\,\\mathbf{Y} \\tag{11}\\]
Where \\(\\mathbf{A}^{\\text{@}}\\)is called Moore-Penrose generalized inverse of matrix \\(A\\)(Serre, 2002). This solution has the following important properties (Huang et al., 2006):
1. The smallest training error can be reached by this solution.
2. Smallest norm of weights and best generalization performance.
3. The minimum norm least-square solution is a unique solution, thus involving no local minima like one in backpropagation learning algorithm.
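A minimal numerical illustration of equation (11), with arbitrary matrix sizes assumed for the sketch, is:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 30))                # K x H hidden layer output matrix
Y = rng.normal(size=(200, 7))                 # K x q target matrix
alpha = np.linalg.pinv(A) @ Y                 # Moore-Penrose solution of A alpha = Y
print(float(np.linalg.norm(A @ alpha - Y)))   # the smallest achievable residual
```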
Thus, the algorithm proposed by Huang et al. (2006), called the extreme learning machine, can be summarized as follows:
Given the training data set \(\big{\{}\mathbf{x}_{i},\mathbf{y}_{i}\big{\}}\) with \(K\) samples, where \(\mathbf{x}_{i}\in\mathbf{R}^{p}\) and \(\mathbf{y}_{i}\in\mathbf{R}^{q}\), a standard SHLFN algorithm with \(H\) hidden neurons and activation function \(f(x)\) works as follows:
1. Assign random input weights \(\mathbf{w}_{i}\) and biases \(c_{i}\), \(i=1,\ldots,H\).
2. Calculate the hidden layer output matrix \\(A\\).
3. Calculate the output weights \\(\\alpha\\) by using the following equation
\\[\\alpha\\ \\ =\\mathbf{A}^{\\text{@}}\\,\\mathbf{Y}\\]
Where \\(A\\)\\(\\alpha\\) and \\(Y\\)are as defined in equations 5 and 6.
## 3 Data Sets and Methodology
The study area used in this study is located near the town of Littleport in eastern England, and the image was acquired on 19 June 2000. A sub-image consisting of 307 pixels (columns) by 330 pixels (rows) covering the area of interest was used for the subsequent analysis, and the classification problem involved the identification of seven land cover types (i.e. wheat, potato, sugar beet, onion, peas, lettuce and beans). A total of 4737 pixels were selected for all seven classes using stratified random sampling. The pixels collected were divided into two subsets, one of which was used for training and the second for testing the classifiers, so as to remove any bias resulting from the use of the same set of pixels for both training and testing. Also, because the same test and training data sets are used for each classifier, any difference resulting from sampling variations was avoided. A total of 2700 training and 2037 test pixels were used.
## 4 Results
The purpose of the present study is to evaluate the performance of the extreme learning machine for land cover classification and to compare its performance with a back-propagation neural network classifier. Unlike a back-propagation neural network, the design of the extreme learning machine requires the setting of only one user-defined parameter, i.e. the number of hidden nodes in the hidden layer. A number of experiments were carried out using the training and test data sets and varying the number of hidden nodes from 25 to 450. The results suggest that the extreme learning machine achieves its highest classification accuracy with a total of 300 hidden nodes.
Table 1 provides the results obtained using the extreme learning machine as well as the back-propagation neural network with the ETM\(+\) (England) data set. The results suggest that the extreme learning machine performs well in comparison to the back-propagation neural network. With this dataset, the extreme learning machine provides a classification accuracy of 89% in comparison to 87.87% for the back-propagation neural network. The computational cost (i.e. training and test time) of a classifier often represents a significant proportion of the cost of remote sensing classifications. For all experiments in this study, a personal computer with a Pentium IV processor and 512 MB of RAM was used. Table 1 also provides the computational cost for the ETM\(+\) data set with the extreme learning machine and the back-propagation neural network. The results (Table 1) also suggest the usefulness of the extreme learning machine in comparison to the back-propagation neural network in terms of computational cost.
## 5 Conclusions
The main aim of this study was to assess the usefulness of the extreme learning machine based classification approach for land cover classification using multispectral data. The performance of the extreme learning machine was compared with a back-propagation neural network. The results presented above suggest that the extreme learning machine works equally well as a back-propagation neural network in terms of classification accuracy, while incurring a smaller computational cost. Another conclusion about the extreme learning machine classification approach is that, unlike a back-propagation neural network classifier, its performance is affected by only one user-defined parameter, which can easily be identified for a particular data set.
## References
* [1] Benediktsson, J. A., Swain, P. H. & Ersoy, O. K. (1990). Neural network approaches versus statistical methods in classification of multisource remote sensing data. _IEEE Transactions on Geoscience and Remote Sensing_, **28**, 540-551.
* [2] Civco, D. L. (1993). Artificial neural networks for land-cover classification and mapping. _International Journal of Geographic Information Systems_, **7**, 173-183.
* [3] Heermann, P. D. & Khazenie, N. (1992). Classification of multispectral remote sensing data using a back propagation neural network. _IEEE Transactions on Geoscience and Remote Sensing_, **30**, 81-88.
* [4] Hepner, G. F., Logan, T., Ritter, N. & Bryant, N. (1990). Artificial neural network classification using a minimal training set: comparison to conventional supervised classification. _Photogrammetric Engineering and Remote Sensing_, **56**, 469-473.
* [5] Huang, G.-B. & Babri, H. A. (1997). General approximation theorem on feed forward networks. _IEEE Proceedings of International Conference on Information, Communications and Signal Processing_, 698-702.
* [6] Huang, G.-B. Zhu, Q.-Y. & Siew, C.-K. (2006). Extreme learning machine: Theory and applications, _Neurocomputing, 70_, 489-501
* [7] Huang, G.-B., Chen, L. & Siew, C.-K. (2003). Universal approximation using incremental feed forward networks with arbitrary input weights, _Technical Report ICIS/46/2003_, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore.
* [8] Kavzoglu, T. (2001). _An Investigation of the Design and Use of Feed-forward Artificial Neural Networks in the Classification of Remotely Sensed Images_. PhD thesis. School of Geography, The University of Nottingham, Nottingham, UK.
* [9] Serre, D. (2002). _Matrices: Theory and applications_. Springer-Verlag New York.
* [10] Schaale M. & Furrer, R. (1995). Land surface classification by neural networks. _International Journal of Remote Sensing_, 16, 3003-3032.
* [11] Tso, B. K. C. & Mather, P.M. (2001). _Classification Methods for remotely Sensed Data_. London: Taylor and Francis Ltd.
* [12] Wilkinson, G.G. (1997). Open questions in neurocomputing for Earth observation. _Neuro-Computational in Remote Sensing Data Analysis_. New York: Springer-Verlag, 3-13.
Mahesh Pal is currently working as an assistant professor in the department of civil engineering, NIT Kurukshetra, Haryana. He has 35 papers in international/national journals and conferences. His current interests include kernel based approaches for land cover classification and the application of GIS in construction management.

This paper explores the potential of the extreme learning machine based supervised classification algorithm for land cover classification. In comparison to a backpropagation neural network, which requires the setting of several user-defined parameters and may produce local minima, the extreme learning machine requires the setting of one parameter and produces a unique solution. An ETM+ multispectral data set (England) was used to judge the suitability of the extreme learning machine for remote sensing classifications. A back propagation neural network was used to compare its performance in terms of classification accuracy and computational cost. Results suggest that the extreme learning machine performs equally well to the back propagation neural network in terms of classification accuracy with this data set. The computational cost using the extreme learning machine is very small in comparison to the back propagation neural network.
# Pair Partitioning in time reversal acoustics
Hernan L. Calvo
[email protected]
Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Ciudad Universitaria, 5000 Cordoba, Argentina
Horacio M. Pastawski
Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Ciudad Universitaria, 5000 Cordoba, Argentina
## I Introduction
In recent years, the group of M. Fink developed an experimental technique called the Time Reversal Mirror (TRM) [1] that allows the time reversal of ultrasonic waves. An ultrasonic pulse is emitted from the inside of the control region (called the cavity) and the excitation is detected as it escapes through a set of transducers placed at the boundaries. These transducers can act alternatively as microphones or loudspeakers, and the registered signal is played back in the time reversed sequence. Thus, the signal focalizes in space and time at the source point, forming a Loschmidt Echo [2]. It is remarkable that the quality of the focalization improves as the inhomogeneities inside the cavity increase. This property allows for applications in many fields [3; 4]. In order to have a first formal description of the exact reversion, we introduced a time reversal procedure denoted the Perfect Inverse Filter (PIF) in the quantum domain [5]. The PIF is based on the injection of a wave function that precisely compensates the feedback effects by means of the renormalization of the registered signal in the frequency domain. This also accounts for the correlations between the transducers. Recently, we proved that these concepts apply to classical waves [6]. We applied it to the numerical evaluation of the reversal of excitations in a linear chain of classical coupled harmonic oscillators with satisfactory results. A key issue in assessing the stability of the reversal procedure is to have a numerical integrator that is stable and perfectly reversible. Therefore, we developed a numerical algorithm, the Pair Partitioning method (PP), that allows a precise test of the reversal procedure. In the next section we develop the main idea of the method and then we use it to obtain numerical results that we compare with the analytical solution for the homogeneous system. Additionally, we introduce a method to approximate the solution of an infinite system using a finite one. For this we introduce a non-homogeneous fictitious friction term that can simulate the diffusion of the excitation occurring in an unbounded system. These strategies are tested through a numerical simulation of a time reversal experiment.
## II Wave dynamics in the pair partitioning method
The system to be used is shown in figure 1: a one-dimensional chain of \(N=2s\) coupled oscillators with masses \(m_{i}\) and natural frequencies \(\omega_{i}\) that can be represented as a set of coupled pendulums.

If \(p_{i}\) denotes the momentum and \(u_{i}\) the displacement amplitude from the equilibrium position of the \(i\)th oscillator, the Hamiltonian reads
\\[\\mathcal{H}=\\sum_{i=1}^{N}\\left(\\frac{p_{i}^{2}}{2m_{i}}+\\frac{m_{i}\\omega_{i }^{2}}{2}u_{i}^{2}\\right)+\\sum_{i=1}^{N-1}\\frac{K_{i}}{2}\\left(u_{i+1}-u_{i} \\right)^{2}, \\tag{1}\\]
where \\(K_{i}\\) is the elastic coefficient that accounts for the coupling between the oscillators \\(i\\) and \\(i+1\\). Notice that wecould rewrite the Hamiltonian in terms of each coupling separating it in non-interacting terms including even pairs and odd pairs each:
\\[\\begin{split}\\mathcal{H}&=\\mathcal{H}_{\\text{odd}}+ \\mathcal{H}_{\\text{even}}=\\mathcal{H}_{1,2}+\\mathcal{H}_{3,4}+\\ldots+\\mathcal{H }_{N-1,N}\\\\ &+\\mathcal{H}_{2,3}+\\mathcal{H}_{4,5}+\\ldots+\\mathcal{H}_{N-2,N-1 },\\end{split} \\tag{2}\\]
with
\\[\\mathcal{H}_{n,n+1}=\\frac{p_{n}^{2}}{2\\tilde{m}_{n}}+\\frac{\\tilde{m}_{n}\\tilde {\\omega}_{n}^{2}}{2}u_{n}^{2}+\\frac{p_{n+1}^{2}}{2\\tilde{m}_{n+1}}+\\frac{ \\tilde{m}_{n+1}\\tilde{\\omega}_{n+1}^{2}}{2}u_{n+1}^{2}+\\frac{K_{n}}{2}\\left(u_ {n+1}-u_{n}\\right)^{2}, \\tag{3}\\]
and
\\[\\begin{split}\\tilde{m}_{1}&=m_{1},\\ \\tilde{\\omega}_{1}=\\omega_{1},\\ \\tilde{m}_{N}=m_{N},\\ \\tilde{\\omega}_{N}=\\omega_{N},\\\\ \\tilde{m}_{n}&=2m_{n},\\ \\tilde{\\omega}_{n}=\\omega_{n}/ 2,\\ n=2,\\ldots,N-1.\\end{split} \\tag{4}\\]
A good approximation to the overall dynamics, inspired by the Trotter method used in quantum mechanics [7], can be obtained by solving analytically the equations of motion for each independent Hamiltonian over a time step \(\tau\). Therefore, the pair \(\mathcal{H}_{n,n+1}\) obeys
\\[\\begin{split}\\ddot{u}_{n}&=-\\left(\\tilde{\\omega}_{n }^{2}+\\frac{K_{n}}{\\tilde{m}_{n}}\\right)u_{n}+\\frac{K_{n}}{\\tilde{m}_{n}}u_{n+1 },\\\\ \\ddot{u}_{n+1}&=-\\left(\\tilde{\\omega}_{n+1}^{2}+ \\frac{K_{n}}{\\tilde{m}_{n+1}}\\right)u_{n+1}+\\frac{K_{n}}{\\tilde{m}_{n+1}}u_{n}.\\end{split} \\tag{5}\\]
At each small time step \(\tau\), the evolution for the even couplings is obtained, and the resulting positions and velocities are used as initial conditions for the set of Hamiltonians accounting for the odd couplings, and so on. Since the equations of motion are solved separately, we need only consider the system of two coupled oscillators, e.g.
\\[\\begin{split}\\ddot{u}_{1}&=-\\omega_{1}^{2}u_{1}+ \\omega_{12}^{2}u_{2},\\\\ \\ddot{u}_{2}&=-\\omega_{2}^{2}u_{2}+\\omega_{21}^{2}u_ {1}.\\end{split} \\tag{6}\\]
For this system, it is easy to obtain the corresponding normal modes

\[U_{\pm}=\mp\frac{\omega_{21}^{2}}{\omega_{+}^{2}-\omega_{-}^{2}}u_{1}\pm\frac{\omega_{1}^{2}-\omega_{\mp}^{2}}{\omega_{+}^{2}-\omega_{-}^{2}}u_{2}, \tag{7}\]

with characteristic frequencies

\[\omega_{\pm}^{2}=\frac{\omega_{1}^{2}+\omega_{2}^{2}}{2}\pm\sqrt{\left(\frac{\omega_{1}^{2}-\omega_{2}^{2}}{2}\right)^{2}+\omega_{12}^{2}\omega_{21}^{2}}. \tag{8}\]

Figure 1: Scheme of the \(N\) coupled oscillators system to be solved by the PP numerical method.
From (7), \\(U_{\\pm}(t)\\) are obtained and these values are used for the evolution after the temporal step \\(t\\to t+\\tau\\), i.e.
\\[\\begin{split} U_{\\pm}(t+\\tau)&=U_{\\pm}(t)\\cos( \\omega_{\\pm}\\tau)+\\dot{U}_{\\pm}(t)\\sin(\\omega_{\\pm}\\tau)/\\omega_{\\pm},\\\\ \\dot{U}_{\\pm}(t+\\tau)&=\\dot{U}_{\\pm}(t)\\cos(\\omega_ {\\pm}\\tau)-U_{\\pm}(t)\\sin(\\omega_{\\pm}\\tau)\\omega_{\\pm}.\\end{split} \\tag{9}\\]
Once we have \\(U_{\\pm}(t+\\tau)\\) and \\(\\dot{U}_{\\pm}(t+\\tau)\\) we can go back to the natural basis by means of the inverse of (7)
\\[\\begin{split} u_{1}&=\\frac{\\omega_{1}^{2}-\\omega_{ +}^{2}}{\\omega_{21}^{2}}U_{+}+\\frac{\\omega_{1}^{2}-\\omega_{-}^{2}}{\\omega_{21 }^{2}}U_{-},\\\\ u_{2}&=U_{+}+U_{-}.\\end{split} \\tag{10}\\]
Then, one obtains the displacements and momenta for all oscillators at time \\(t+\\tau\\). The above steps are summarized in the Pair Partitioning algorithm:
1. Determine all the masses and natural frequencies of the partitioned system \\(\\tilde{m}_{n},\\tilde{\\omega}_{n}\\).
2. For even couplings \\(\\mathcal{H}_{2,3},\\mathcal{H}_{4,5},\\ldots\\), rewrite the initial conditions \\(\\{u_{i}(0),\\dot{u}_{i}(0)\\}\\) for the normal modes according to (7).
3. Calculate the normal modes evolution for even couplings, according to (9) and obtain \\(\\{u_{i}^{\\rm even}(\\tau),\\dot{u}_{i}^{\\rm even}(\\tau)\\}\\) from (10).
4. Calculate the normal modes for odd couplings \\(\\mathcal{H}_{1,2},\\mathcal{H}_{3,4},\\ldots\\), using the recent positions and velocities.
5. Calculate the normal modes evolution for odd couplings and give the positions and velocities \\(\\{u_{i}(\\tau),\\dot{u}_{i}(\\tau)\\}\\).
6. Go back to step 2 with \(\{u_{i}(\tau),\dot{u}_{i}(\tau)\}\) as the new initial conditions.
Therefore, applying \\(n\\) times the PP algorithm we obtain the positions and velocities for all oscillators at time \\(t=n\\tau\\).
As an example we consider the homogeneous system, where the \(N\) oscillators have identical masses and the only nonzero natural frequencies are those at the surface, \(\omega_{1}=\omega_{N}=\omega_{\rm x}=\sqrt{K/m}\). The displacement amplitude of the \(i\)th oscillator due to an initial displacement in the \(j\)th oscillator can be expressed analytically as
\\[u_{i\\gets j}^{\\rm th.}(t)=\\frac{2}{N+1}\\sum_{k=1}^{N}\\sin\\left(i\\frac{k \\pi}{N+1}\\right)\\sin\\left(j\\frac{\\pi}{N+1}\\right)\\cos(\\omega_{k}t), \\tag{11}\\]
with
\\[\\omega_{k}=\\omega_{\\rm x}\\sin\\left(\\frac{k\\pi}{2(N+1)}\\right), \\tag{12}\\]
the characteristic frequency of the \(k\)th normal mode. In figure 2 the analytical and numerical results are compared for the surface oscillator displacement, in a case where all the oscillators were initially in their equilibrium positions except \(u_{1}(0)=u_{1}^{0}\). We use \(N=200\) in two cases where the temporal steps are \(\tau=10^{-2}\omega_{\rm x}^{-1}\) and \(\tau=10^{-3}\omega_{\rm x}^{-1}\).
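For reference, the analytic result (11)-(12) can be evaluated directly, as in the Python/NumPy sketch below; the parameter values match the comparison just described.

```python
import numpy as np

def u_exact(i, j, t, N, omega_x):
    """Analytic displacement of oscillator i after exciting j, eqs (11)-(12)."""
    k = np.arange(1, N + 1)
    w_k = omega_x * np.sin(k * np.pi / (2.0 * (N + 1)))     # eq (12)
    return (2.0 / (N + 1)) * np.sum(np.sin(i * k * np.pi / (N + 1))
                                    * np.sin(j * k * np.pi / (N + 1))
                                    * np.cos(w_k * t))

print(u_exact(1, 1, 10.0, 200, 1.0))    # surface response for u_1(0) = 1
```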
We have taken \\(N\\) such that no mesoscopics echoes [8] appear in the interval of time shown. We also notice that the error
\[\varepsilon(t)=\frac{\left|u_{1}^{\rm th.}(t)-u_{1}^{\rm PP}(t)\right|}{u_{1}^{0}}, \tag{13}\]
drops as the temporal step \\(\\tau\\) diminishes. We observe a quadratic dependence \\(\\max\\!\\varepsilon(t)=\\alpha\\tau^{2}\\) in complete analogy with the Trotter method. In the particular case of the homogeneous system we have \\(\\alpha\\simeq 0.0445\\).
### Unbounded systems as damped oscillations
The solution of wave dynamics in infinite media remains a delicate problem. In such a case, the initially localized excitation spreads through the system in a way that resembles actual dissipation. In contrast, finite systems present periodic revivals, the mesoscopic echoes, that show that the energy remains in the system. In order to get rid of the mesoscopic echoes and obtain a form of "dissipation" using a finite number of oscillators, we add a fictitious "friction" term. The friction coefficients \(\eta_{i}\geq 0\) can be included between the 2nd and 3rd steps of the PP algorithm by supposing that the displacement amplitude decays exponentially
\\[u_{i}(t)\\to u_{i}(t)\\exp(-\\eta_{i}\\tau), \\tag{14}\\]
as occurs in a damped oscillator in the limit \(\eta/\omega\ll 1\). For the homogeneous system with \(N=100\) oscillators, where the cavity ends at site \(x_{R}\), we choose a progressive increase in the damping as
\\[\\eta_{i}=0.1\\frac{i-x_{R}}{N-x_{R}},\\ i=x_{R},\\ldots,N. \\tag{15}\\]
We compare the result of this approximation with the undamped case for the displacement amplitude at \(x_{R}=10\). Figure 3 shows how the dynamics of the damped system exhibits no mesoscopic echoes, whereas in the undamped system we observe the echo at \(t_{M}\simeq 2N\omega_{\rm x}^{-1}\).
As we will see for the TRM and PIF procedures, this last result is very useful since it allows one to obtain the dynamics of an open system with a small number of oscillators (e.g. \(N\simeq 100\)).
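A short sketch of this absorbing region follows. Equation (14) is stated for the displacement amplitudes; damping the velocities by the same factor is an additional assumption made in the sketch.

```python
import numpy as np

N, x_R, tau = 100, 10, 1e-2
sites = np.arange(1, N + 1)                        # 1-indexed oscillator labels
eta = np.where(sites >= x_R,
               0.1 * (sites - x_R) / (N - x_R), 0.0)   # eq (15)

def damp(u, v):
    """Exponential decay in the absorbing region, eq (14); to be applied
    between the even and odd half-steps of the PP algorithm."""
    factor = np.exp(-eta * tau)
    return u * factor, v * factor
```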
Figure 2: Time evolution for the surface oscillator. Top, comparison between the analytical and numerical results shows all curves superposed. Bottom, error in the displacement for both temporal steps.
## III Numerical test for the perfect inverse filter procedure
As we mentioned above, the Time Reversal Mirror procedure consists in the injection, at the boundaries of the cavity, of a signal proportional to that recorded during the forward propagation. In contrast, the Perfect Inverse Filter corrects this recorded signal to account for the contributions of multiple reflections and the normal dispersion of the previously injected signal, in such a way that their instantaneous total sum coincides precisely with the time reversed signal at the boundaries. The continuity of the wave equation ensures that perfect time reversal occurs at every point inside the cavity. The procedure for such a correction is described elsewhere [5; 6]. Here, it is enough to notice that the imposition of an appropriate wave amplitude at the boundaries should give the perfect reversal of an excitation originally localized inside the cavity. An example of such a situation would be a "surface pendulum" coupled to a semi-infinite linear chain of harmonically coupled masses [9]. In such a case, we know that the energy decays in an approximately exponential way, where the decay rate can be identified with a "friction coefficient". However, for very short times, the local energy decays with a quadratic law, while for very long times the exponential decay gives rise to a power law characteristic of the slow diffusion of the energy [10]. In figure 4 we show how this overall decay is reversed by controlling the amplitude at site \(x_{R}=10\).
As long as we have been able to wait until a negligible amount of energy is left in the cavity, the control of the boundaries is enough to reverse the whole dynamics inside the cavity. As a comparison, the theoretical reversal of the decay of a surface excitation in a semi-infinite chain is shown. We see that the region where they differ is where the injected signal is still negligible as compared to the energy still remaining in the cavity (as a consequence of the very slow power law decay).
## IV Discussion
We have presented a numerical strategy for the solution of the wave equation that is completely time reversible. This involves the iterative application of the exact evolution of pairs of coupled effective oscillators in which the energy is conserved, hence deserving the name of Pair Partitioning method. This is complemented with an original strategy for dealing with wave propagation through infinite media. While various tests of these procedures remain to be done, we have shown, through the solution of simple but highly non-trivial examples, that the method is numerically stable and can be used to revert wave dynamics up to a desired precision.

Figure 3: Comparison between the undamped evolution (black) and the damped evolution (blue) for the displacement amplitude \(u_{R}(t)\).
## References
* (1) M. Fink, _Time-reversed acoustics_, Scientific American **281** (may), 91-97 (1999).
* (2) R.A. Jalabert and H.M. Pastawski, _Environment-independent decoherence rate in classically chaotic systems_, Physical Review Letters **86**, 2490-2493 (2001).
* (3) M. Fink, G. Montaldo and M. Tanter, _Time-reversal acoustics in biomedical engineering_, Annual Reviews of Biomedical Engineering **5**, 465-497 (2003).
* (4) G.F. Edelman, T. Akal, W.S. Hodkiss, S. Kim, W.A. Kuperman and H. C. Song, _An initial demonstration of underwater acoustic communication using time reversal_, IEEE Journal of Ocean Engineering **27** (3), 602-609 (2002).
* (5) H.M. Pastawski, E.P. Danieli, H.L. Calvo and L.E.F. Foa-Torres, _Towards a time-reversal mirror for quantum systems_, Europhysics Letters **77**, 40001 (2007).
* (6) H.L. Calvo, E.P. Danieli and H.M. Pastawski, _Time reversal mirror and perfect inverse filter in a microscopic model for sound propagation_, Physica B **398** (2), 317-320 (2007).
* (7) H. deRaedt, _Computer simulation of quantum phenomena in nanoscale devices_, Annual Reviews of Computational Physics IV, 107-146 (1996).
* (8) H.M. Pastawski and P.R. Levstein and G. Usaj, _Quantum dynamical echoes in the spin diffusion in mesoscopic systems_, Physical Review Letters **75**, 4310-4313 (1995).
* (9) H.L. Calvo and H.M. Pastawski, _Dynamical phase transition in vibrational surface modes_, Brazilian Journal of Physics **36** (3b), 963-966 (2006).
* (10) E. Rufeil Fiori and H.M. Pastawski, _Non-Markovian decay beyond the Fermi Golden Rule: survival collapse of the polarization in spin chains_, Chemical Physics Letters **420**, 35-41 (2006).
Figure 4: Recovery of the local energy of the surface oscillator. Here we choose the detection time \(t_{R}=1000\tau\).

Time reversal of acoustic waves can be achieved efficiently by the persistent control of excitations in a finite region of the system. The procedure, called Time Reversal Mirror, is stable against the inhomogeneities of the medium and it has numerous applications in medical physics, oceanography and communications. As a first step in the study of this robustness, we apply the Perfect Inverse Filter procedure that accounts for the memory effects of the system. In the numerical evaluation of such procedures we developed the Pair Partitioning method for a system of coupled oscillators. The algorithm, inspired by the Trotter strategy for quantum dynamics, obtains the dynamics of a chain of coupled harmonic oscillators by separating the system into pairs and applying a stroboscopic sequence that alternates the evolution of each pair. We analyze here the formal basis of the method and discuss its extension to include energy dissipation inside the medium.
# Support Vector classifiers for Land Cover Classification
Mahesh Pal
Lecturer, Department of Civil Engineering
National Institute of Technology
Kurukshetra 136119
Haryana (India)
[email protected]
Paul M. Mather
Professor, School of Geography
University of Nottingham
University Park
Nottingham, NG7 2RD, UK
[email protected]
######
and the error is propagated from the output back to the input layer. The weights on the backwards path through the network are updated according to an update rule and a learning rate. ANNs are not solely specified by the characteristics of their processing units and the selected training or learning rule. The network topology, i.e. the number of hidden layers, the number of units, and their interconnections, also has an influence on classifier performance. In this study we use the network architecture and number of patterns used for training suggested by Kavzoglu (2001).
### Support vector classifier
In the two-class case, a support vector classifier attempts to locate a hyperplane that maximises the distance from the members of each class to the optimal hyperplane. The principle of a support vector classifier is briefly described next.
Assume that the training data with \(k\) samples is represented by \(\{\textbf{x}_{i},\textbf{y}_{i}\}\), i = 1, , k, where **x**\(\in\)**R**\({}^{\text{ n}}\) is an n-dimensional vector and **y**\(\in\)**{-1,+1} is the class label. These training patterns are said to be linearly separable if a vector **w** (which determines the orientation of a discriminating plane) and a scalar \(b\) (which determines the offset of the discriminating plane from the origin) can be defined so that inequalities (1) and (2) are satisfied.
\\[\\textbf{w}\\cdot\\textbf{x}_{i}\\ +\\textbf{b}\\geq+1\\qquad\\text{for all y = +1} \\tag{1}\\]
\\[\\textbf{w}\\cdot\\textbf{x}_{i}+\\textbf{b}\\leq-1\\qquad\\text{for all y = -1} \\tag{2}\\]
The aim is to find a hyperplane which divides the data so that all the points with the same label lie on the same side of the hyperplane. This amounts to finding **w** and \(b\) so that
\\[\\textbf{y}_{i}\\ \\left(\\textbf{w}\\cdot\\textbf{x}_{i}\\ +\\textbf{b}\\ \\right)>0 \\tag{3}\\]
If a hyperplane exists that satisfies (3), the two classes are said to be _linearly separable_. In this case, it is always possible to rescale **w** and \(b\) so that
\\[\\min_{\\text{isis}\\textbf{y}_{i}}\\ \\left(\\textbf{w}\\cdot\\textbf{x}_{i}\\ + \\textbf{b}\\ \\right)\\geq 1\\]
That is, the distance from the closest point to the hyperplane is 1/\\(\\left\\|\\textbf{w}\\right\\|\\). Then (3) can be written as
\\[y_{i}\\left(w\\cdot x_{i}+b\\right)\\geq 1 \\tag{4}\\]
The hyperplane for which the distance to the closest point is maximal is called the _optimal separating hyperplane_ (OSH) (Vapnik, 1995). As the distance to the closest point is \(1/\left\|\textbf{w}\right\|\), the OSH can be found by minimising \(\left\|\textbf{w}\right\|^{2}\) under constraint (4). This optimisation problem can be solved in its dual form, in which the Lagrange multipliers \(\lambda_{i}\geq 0\) associated with the constraints are found by maximising

\[\mathrm{L}\left(\boldsymbol{\lambda}\right)=\sum_{i}\lambda_{i}-\frac{1}{2}\sum_{i,j}\lambda_{i}\lambda_{j}y_{i}y_{j}\left(\textbf{x}_{i}\cdot\textbf{x}_{j}\right) \tag{5}\]
Where it is not possible to have a hyperplane defined by linear equations on the training data, the techniques described above for linearly separable data can be extended to allow for non-linear decision surfaces. A technique introduced by Boser et al. (1992) maps input data into a high dimensional feature space through some nonlinear mapping. The transformation to a higher dimensional space spreads the data out in a way that facilitates the finding of linear hyperplanes. After replacing **x** by its mapping in the feature space \\(\\mathbf{\\Phi(x)}\\), equation (5) can be written as:
\\[\\mathrm{L}\\left(\\mathbf{\\lambda}\\right)=\\sum_{i}\\lambda_{i}\\ -\\frac{1}{2}\\sum_{i,j} \\lambda_{i}\\lambda_{j}y_{i}y_{j}\\ \\left(\\Phi\\left(\\mathbf{x}_{i}\\ \\right)\\cdot\\Phi\\left(\\mathbf{x}_{j}\\ \\right)\\right) \\tag{11}\\]
To reduce computational demands in feature space, it is convenient to introduce the concept of the _kernel function_ K (Cristianini and Shawe-Taylor, 2000; Cortes and Vapnik 1995) such that:
\\[\\mathrm{K}\\left(\\mathbf{x}_{i}\\,\\mathbf{x}_{j}\\ \\right)=\\mathbf{\\Phi} \\left(\\mathbf{x}_{i}\\ \\right)\\cdot\\mathbf{\\Phi}\\left(\\mathbf{x}_{j}\\ \\right) \\tag{12}\\]
Then, to solve equation (11), only the kernel function is computed in place of \(\mathbf{\Phi(x)}\), which could be computationally expensive. A number of kernel functions are used with support vector classifiers. Details of some kernel functions and their parameters used with SVM classifiers are discussed by Vapnik (1995). SVM was initially designed for binary (two-class) problems. When dealing with several classes, an appropriate multi-class method is needed. A number of methods are suggested in the literature to create multi-class classifiers using two-class methods (Hsu and Lin, 2002). In this study, a "one against one" approach (Knerr et al., 1990) with a radial basis kernel function (defined as \(\mathrm{e}^{-\gamma\left\|\mathbf{x}-\mathbf{y}\right\|^{2}}\)) was used to generate the multi-class SVMs. In this method, all possible two-class classifiers are evaluated on the training set of \(n\) classes, each classifier being trained on only two out of the \(n\) classes, giving a total of \(n(n-1)/2\) classifiers. Applying each classifier to the vectors of the test data gives one vote to the winning class, and the pixel is given the label of the class with the most votes.
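As an indication of how such a classifier can be configured in practice, the sketch below uses the scikit-learn library; this choice of library, and the placeholder pixel arrays, are assumptions of the sketch rather than the software used in the original experiments.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder pixel samples: 7 spectral bands and 7 land cover classes.
X_train = np.random.rand(2700, 7)
y_train = np.random.randint(0, 7, size=2700)
X_test = np.random.rand(2037, 7)

# RBF kernel exp(-gamma ||x - y||^2) with gamma = 2 and C = 5000 as in this
# study; SVC builds the n(n-1)/2 pairwise classifiers internally and labels
# each sample by majority vote.
clf = SVC(kernel="rbf", gamma=2.0, C=5000.0, decision_function_shape="ovo")
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
```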
## 3 Data
The first of the two study areas used in the work reported here is located near the town of Littleport in eastern England. The second is a wetland area of the La
Mancha region of Spain. For the Littleport area, ETM\\(+\\) data acquired on 19\\({}^{\\text{th}}\\) June 2000 is used. The classification problem involves the identification of seven land cover types (wheat, potato, sugar beet, onion, peas, lettuce and beans) for the ETM\\(+\\) data set. For the La Mancha study area, hyperspectral data acquired on 29\\({}^{\\text{th}}\\) June 2000 by the DAIS 7915 airborne imaging spectrometer were available. Eight different land cover types (wheat, water body, dry salt lake, hydrophytic vegetation, vineyards, bare soil, pasture lands and built up area) were specified. The DAIS data show moderate to severe striping problems in the optical infrared region between bands 41 and 72. Initially, the first 72 bands in the wavelength range 0.4 \\(\\upmu\\)m to 2.5 \\(\\upmu\\)m were selected. All of these bands were examined visually to determine the severity of striping. Seven bands displaying very severe striping problems (bands 41 - 42 and 68 - 72) were removed from the data set. The striping in the remaining bands was removed by automatically enhancing the Fourier transform of each image (Cannon et al., 1983; Srinivasan et al., 1988). The input image is first divided into overlapping 128-by-128-pixel blocks. The Fourier transform of each block is calculated and the log-magnitudes of each FFT block are then averaged. The averaging process removes all frequency domain quantities except those which are present in each block; i.e., some sort of periodic interference. The average power spectrum is then used as a filter to adjust the FFT of the entire image. When an inverse Fourier transform is performed, the result is an image with periodic noise eliminated or significantly reduced.
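The simplified Python/NumPy sketch below conveys the idea of the frequency-domain destriping for a single band and a single block; the actual procedure described above averages the spectra of overlapping 128-by-128 blocks, and the peak-detection rule used here is an illustrative assumption.

```python
import numpy as np

def destripe(band, threshold=4.0):
    """Suppress periodic (striping) interference in the Fourier domain."""
    F = np.fft.fft2(band)
    logmag = np.log1p(np.abs(F))
    med = np.median(logmag)
    mad = np.median(np.abs(logmag - med)) + 1e-9
    mask = (logmag - med) / mad > threshold   # flag isolated spectral peaks
    mask[0, 0] = False                        # never remove the DC component
    F[mask] = 0.0                             # notch the periodic interference
    return np.real(np.fft.ifft2(F))

clean = destripe(np.random.rand(128, 128))    # placeholder for one DAIS band
```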
## 4 Results and discussion
Random sampling was used to collect the training and test pixels for both the ETM\(+\) and DAIS data sets. The selected pixels were divided into two parts, one for training and one for testing the classifiers, so as to remove any possible bias resulting from the use of the same set of pixels in both the training and testing phases. A total of 2700 training pixels and 2037 test pixels were used for the ETM\(+\) data, and a total of 1600 training pixels (200 pixels/class) and 3800 test pixels were used for the DAIS data. Kappa values and overall classification accuracies are calculated for each of the classifiers used in this study with the ETM\(+\) data, while overall accuracy is calculated for the DAIS hyperspectral data. The Z statistic is also used to test the significance of apparent differences between the three classification algorithms using the ETM\(+\) data. For this study, a standard back-propagation neural classifier was used, with all user-defined parameters set as recommended by Kavzoglu (2001).
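The kappa and Z computations just described can be sketched as follows; the kappa variance shown is the common first-order large-sample approximation, an assumption rather than a formula quoted in this paper:

```python
import numpy as np

def kappa_and_var(cm):
    """Kappa from a confusion matrix, with the simple large-sample
    variance approximation (an assumption, not quoted in this paper)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2    # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    var = po * (1.0 - po) / (n * (1.0 - pe) ** 2)
    return kappa, var

def z_statistic(cm1, cm2):
    """Z test between two independent kappas; |Z| > 1.96 indicates a
    significant difference at the 95% confidence level."""
    k1, v1 = kappa_and_var(cm1)
    k2, v2 = kappa_and_var(cm2)
    return (k1 - k2) / np.sqrt(v1 + v2)
```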
Like neural network classifiers, the performance of the support vector classifier depends on a number of user-defined parameters which may influence the final classification accuracy. For this study, a radial basis kernel with the kernel-specific parameter \(\gamma\) set to two and C = 5000 was used for both data sets. The values of these parameters were chosen after a number of trials, and the same parameters were used with the DAIS data set. This study also suggests that, in comparison with the NN classifier, it is easier to fix the values of the user-defined parameters for SVM.
As mentioned earlier, SVM involves solving a quadratic programming problem with linear equality and inequality constraints, which has only a global optimum. In comparison, the presence of local minima is a significant problem in training neural network classifiers.
Further, the training time taken by the support vector classifier is 0.30 minutes, in comparison with 58 minutes for the NN classifier on a dual-processor Sun machine. Results suggest that the support vector classifier's performance is statistically significantly better than that of the NN and ML classifiers. To study the behaviour of the support vector classifier with the DAIS hyperspectral data, a total of 65 features (spectral bands) was available, the seven bands with severe striping having been discarded as explained above. The initial number of features used was five, and the experiment was repeated with 10, 15, ..., 65 features, giving a total of 13 experiments. Figure 1 suggests that, in comparison to the other classifiers, the performance of the support vector classifier is quite good with a small training data set, irrespective of the number of features used.
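The repeated experiment can be expressed compactly; the arrays below are placeholders for the DAIS training and test samples, and \(\gamma\) and C are the values quoted above:

```python
from sklearn.svm import SVC

def accuracy_vs_features(X_train, y_train, X_test, y_test):
    """Overall accuracy for 5, 10, ..., 65 bands (13 experiments);
    the arrays are assumed to hold the DAIS samples, bands as columns."""
    results = {}
    for n in range(5, 66, 5):
        clf = SVC(kernel="rbf", gamma=2.0, C=5000.0)
        clf.fit(X_train[:, :n], y_train)
        results[n] = clf.score(X_test[:, :n], y_test)
    return results
```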
Results obtained from analysis of the hyperspectral data suggest that classification accuracy using SVM increases almost continuously as a function of the number of features, with the size of the training data set held constant, whereas the overall classification accuracies produced by the ML, DT and NN classifiers decline slightly once the number of bands exceeds 50 or so. This suggests that the support vector classifier is largely insensitive to the Hughes (1968) phenomenon, in which accuracy declines as dimensionality grows for a fixed training-set size.
Figure 1: Classification accuracies obtained with DAIS hyperspectral data using different classification algorithms. The training data set size is 200 pixels/class.
## 5 Conclusions
The objective of this study was to assess the utility of support vector classifiers for land cover classification using multi- and hyper-spectral data sets, in comparison with the most frequently used ML and NN classifiers. The results presented above suggest several conclusions. First, the support vector classifier outperforms the ML and NN classifiers in terms of classification accuracy with both data sets. Several user-defined parameters affect the performance of the support vector classifier, but this study suggests that it is easier to find appropriate values for these parameters than it is for the parameters defining the NN classifier. The level of classification accuracy achieved with the support vector classifier is better than that of both the ML and NN classifiers when used with a small number of training samples.
## Acknowledgement
The DAIS data were collected and processed by DLR and were kindly made available by Prof. J. Gumuzzio of the Autonomous University of Madrid. Computing facilities were provided by the School of Geography, University of Nottingham.
## References
* **Benediktsson, J. A., Swain, P. H., and Ersoy, O. K., 1990,** Neural network approaches versus statistical methods in classification of multisource remote sensing data. _IEEE Transactions on Geoscience and Remote Sensing_, **28**, 540-551.
* **Bishop, C. M., 1995,** _Neural Networks for Pattern Recognition_. Oxford: Clarendon Press.
* **Boser, B., Guyon, I., and Vapnik, V. N., 1992,** A training algorithm for optimal margin classifiers. _Proceedings of the 5th Annual Workshop on Computational Learning Theory_, Pittsburgh, PA: ACM, 144-152.
* **Cannon, M., Lehar, A., and Preston, F., 1983,** Background pattern removal by power spectral filtering. _Applied Optics_, **22(6)**, 777-779.
* **Chang, C., and Lin, C., 2001,** _LIBSVM: A Library for Support Vector Machines_. Computer Science and Information Engineering, National Taiwan University, Taiwan, http://www.csie.ntu.edu.tw/~cjlin/libsvm.
* **Cortes, C., and Vapnik, V. N., 1995,** Support vector networks. _Machine Learning_, **20,** 273- 297.
* **Cristianini, N., and Shawe-Taylor, J., 2000,** _An Introduction to Support Vector Machines and Other Kernel-based Learning Methods_. Cambridge: Cambridge University Press.
* **Gualtieri, J. A. and Cromp, R. F., 1998,** Support vector machines for hyperspectral remote sensing classification. _Proceedings of the 27th AIPR Workshop: Advances in Computer Assisted Recognition_**, Washington, DC, October 27, 221-232.**
* **Heerman, P. D., and Khazenie, N., 1992,** Classification of multispectral remote sensing data using a back propagation neural network. _IEEE Transactions on Geoscience and Remote Sensing_, **30,** 81-88.
* **Hsu, C. W., and Lin, C. -J., 2002,** A comparison of methods for multi-class support vector machines, _IEEE Transactions on Neural Networks_, **13,** 415-425.
* **Huang, C., Davis, L. S. and Townshend, J. R. G., 2002,** An assessment of support vector machines for land cover classification. _International Journal of Remote Sensing_, **23,** 725-749.
* **Hughes, G. F., 1968,** On the mean accuracy of statistical pattern recognizers. _IEEE Transactions on Information Theory_, **IT-14,** 55-63.
* **Kavzoglu, T., 2001,** _An Investigation of the Design and Use of Feed-forward Artificial Neural Networks in the Classification of Remotely Sensed Images_**. PhD thesis. School of Geography, The University of Nottingham, Nottingham, UK.**
* **Knerr, S., Personnaz, L., and Dreyfus, G., 1990,** Single-layer learning revisited: A stepwise procedure for building and training neural network. _Neurocomputing: Algorithms, Architectures and Applications_**, NATO ASI, Berlin: Springer-Verlag.**
* **Osuna, E. E., Freund, R., and Girosi, F., 1997,** _Support vector machines: training and applications_. A. I. Memo No. 1602, CBCL Paper No. 144, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-1602.pdf.
* **Srinivasan, R., Cannon, M., and White, J., 1988,** Landsat data destriping using power spectral filtering. _Optical Engineering_, **27**, 939-943.
* **Vapnik, V. N., 1995,** _The Nature of Statistical Learning Theory_. New York: Springer-Verlag.
* **Wilkinson, G. G., 1997,** Open questions in neurocomputing for earth observation. In _Neuro-Computation in Remote Sensing Data Analysis_, edited by I. Kanellopoulos, G. G. Wilkinson, F. Roli and J. Austin. London: Springer, 3-13.
* **Zhu, G., and Blumberg, D. G., 2002,** Classification using ASTER data and SVM algorithms; The case study of Beer Sheva, Israel. _Remote Sensing of Environment_, **80**, 233-240.
# Tables of Hyperonic Matter Equation of State
for Core-Collapse Supernovae\({}^{1}\)
Footnote 1: http://nucl.sci.hokudai.ac.jp/~chikako/EOS
Chikako Ishizuka\\({}^{1}\\), Akira Ohnishi\\({}^{1,2}\\), Kohsuke Tsubakihara\\({}^{1}\\)
Kohsuke Sumiyoshi\\({}^{3}\\) and Shoichi Yamada\\({}^{4}\\)
\\({}^{1}\\) Department of Physics, Faculty of Science,
Hokkaido University, Sapporo 060-0810, Japan
\({}^{2}\) Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto, Japan
\({}^{3}\) Numazu College of Technology, Numazu, Japan
\({}^{4}\) Science and Engineering, Waseda University, Tokyo, Japan
[email protected], [email protected], [email protected]
November 3, 2021
## 1 Introduction
The equation of state (EOS) plays an important role in high density phenomena such as high energy heavy-ion collisions, neutron stars, supernova explosions, and black hole formations [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. The recent discovery of the strongly interacting quark gluon plasma (sQGP) [14] has attracted attention to the EOS and transport coefficients in the quark gluon plasma (QGP). The core region of neutron stars, where matter becomes very dense (\(\sim 10^{15}\)g/cm\({}^{3}\)), is an interesting playground for quark and hadronic matter models. Various ideas for new forms of matter inside neutron stars have been proposed, including strangeness and quark degrees of freedom [1, 2, 3, 4, 5, 6, 7, 8, 9]. Core-collapse supernovae also involve high density and temperature. The nuclear repulsion at high densities drives the shock wave at core bounce, and the passage of the shock wave heats up the matter inside the supernova core. A hot, lepton-rich neutron star (proto-neutron star) is born after the explosion and cools down by emitting supernova neutrinos. When black holes are formed from more massive cores, extremely high densities and temperatures are involved, where hyperons should appear and quarks would be deconfined. In order to describe the whole evolution of core-collapse supernovae by numerical simulations, one needs to prepare the microphysics under such extreme conditions. One of the most important ingredients is a set of equation of state (EOS) tables that contains the necessary physical quantities. It is to be noted that one must cover a wide range of temperature, density and composition in a consistent manner within a single theoretical framework.
Until now, the two sets of EOS (Lattimer-Swesty EOS [15] and Shen EOS [16]) have been widely used and applied to numerical simulations of core-collapse supernovae [10, 11] and black hole formations [12, 13]. The Lattimer-Swesty EOS is based on a compressible liquid-drop model, whose mass and mean field potential are motivated by non-relativistic zero-range Skyrme type interactions. The Shen EOS is based on a relativistic mean field (RMF) model, whose interactions are determined by fitting the binding energies and nuclear radii of stable as well as unstable nuclei [17]. Coexistence of nuclei and uniform matter is included in the Thomas-Fermi approximation in the Wigner-Seitz cell, and the alpha particles are assumed to follow the statistical distribution with excluded volume effects.
The constituents in these EOSs are neutrons, protons, alpha-particles and nuclei, restricting the framework to non-strange baryons. These degrees of freedom may be enough to simulate the early stage of the hydrodynamical evolution of supernova explosions. However, in order to clarify the long-time evolution from core collapse [11] to proto-neutron star cooling [18, 19], black hole formation [12, 13], neutron star mergers and gamma ray bursts, which may involve higher densities and temperatures, it would be necessary to include other particle degrees of freedom. In particular, hyperons (baryons containing strange quarks) are commonly believed to appear in the neutron star core and to modify the neutron star profile [1, 2, 3, 4, 5, 6, 7]. While there are several works which include hyperons in proto-neutron star cooling [19], there has been no study of the dynamics of core-collapse supernovae adopting an EOS with hyperons. This is partially because an EOS table of supernova matter including hyperons has not been publicly available. In addition, the determination of the hyperon interactions has so far been difficult, with large uncertainties.
Recently, developments in hypernuclear physics have narrowed down the allowed range of the hyperon potential depth in nuclear matter. The potential depth of \(\Lambda\) has been well known to be around \(U_{\Lambda}^{(N)}(\rho_{0})\simeq-30\)MeV from bound state spectroscopy. \(\Sigma\) baryons were long considered to feel a potential similar to that of \(\Lambda\), because they contain the same number of light \((u,d)\) quarks. From the recently observed quasi-free \(\Sigma\) production spectra [20], it is now believed that \(\Sigma\) baryons feel a repulsive potential in nuclear matter, \(U_{\Sigma}(\rho_{0})\simeq+30\)MeV [21, 22, 23]. Also for \(\Xi\) baryons, the analyses of the twin hypernuclear formation [24] and the \(\Xi\) production spectra [25, 26, 27] suggest a potential depth of around \(U_{\Xi}^{(N)}(\rho_{0})\simeq-15{\rm MeV}\). These \(\Sigma\) and \(\Xi\) hyperons are particularly important in neutron stars, since nuclear matter can gain a large energy from the neutron Fermi energy and the symmetry energy by replacing, for example, two neutrons with a proton and a negatively charged hyperon (\(\Sigma^{-}\) or \(\Xi^{-}\)). The updates on the interactions of hyperons may have an impact on supernova dynamics and the thermal evolution of proto-neutron stars.
In this paper, we present new sets of EOS of dense matter with hyperons, abbreviated as EOS\(Y\), reflecting the current understanding of hyperon interactions. We provide a data table covering a wide range of temperature (\(T\)), density (\(\rho_{B}\)), and charge-to-baryon number ratio in the hadronic part (\(Y_{C}\)), which enables application to supernova simulations. Our framework is based on the RMF theory with the parameter set TM1 [17], which was used to derive the EOS table by Shen et al. [16], and is extended to include hyperons by considering the flavor SU(3) Lagrangian [2]. Therefore, our EOS table is smoothly connected with the Shen EOS and can easily be used as an extension of the Shen EOS table in numerical simulations.
It is well known that the RMF predicts large values of the incompressibility (\(K\sim 300\) MeV) and the symmetry energy, and these are sometimes considered problematic in applications to dense matter EOS, since they lead to too high a maximum mass of neutron stars without hyperons and may be unfavorable for core-collapse explosions. It should be noted that the values of \(K\) and the symmetry energy are not yet well determined separately in a model independent manner. Analyses of collective flow data at AGS energies suggest \(K=210-300\) MeV [28, 29], and collective flows at SPS energies are shown to be more sensitive to the mean field of resonance hadrons than to the cold matter EOS [30]. For the symmetry energy, it is possible to describe binding energies, proton-neutron radius differences and isovector giant monopole resonances simultaneously by incorporating density dependent couplings (DD-ME1) in RMF and relativistic RPA [31] with a larger value of the symmetry energy than that in non-relativistic models. Note that the interaction of RMF-TM1 is constrained by the nuclear masses, radii, neutron skins and excitations [17]. The large value of \(K\) leads to a large neutron star mass (stiff EOS), which seems unfavorable for explosion. On the other hand, a large symmetry energy is known to be preferable for explosion, giving a smaller free proton fraction and fewer electron captures. These two effects compete with each other in sophisticated numerical simulations [11]. We note also that the RMF automatically fulfills causality (the sound velocity should not exceed the light velocity), whereas non-relativistic frameworks break down at the high densities appearing in the simulations. Thus, at present, we do not find problems in applying the RMF EOS to dense matter compared to non-relativistic models.
This paper is arranged as follows. In section 2, we describe the framework to calculate dense matter at finite temperature including hyperons. We explain the updated information on hyperon potentials in nuclear matter and the potential values adopted in EOS\(Y\). We also describe the prescriptions used to provide the data table over a wide range of densities, including sub-saturation densities where finite nuclei appear. In section 3, we report the properties of EOS\(Y\) in comparison with the nucleonic EOS (TM1/Shen EOS). We apply EOS\(Y\) to cold neutron star matter and supernova matter. We show several properties of EOS\(Y\) at finite temperatures by examining energies, chemical potentials and compositions. The data tables are successfully applied to hydrodynamical calculations of the adiabatic collapse of the iron core of massive stars. We examine the possibilities of hyperon appearance in supernova cores. A summary and discussions are given in section 4. In the appendix, we provide the definitions of the quantities in EOS\(Y\).
## 2 Model and Method
In this work, we construct the EOS table of supernova matter based on a relativistic mean field model. We adopt the parameter set TM1 [17] for non-strange sector. For its flavor SU(3) extension, we start from the work by Schaffner and Mishustin [2], and we include the updated information on hyperon potentials from recent experimental and theoretical hypernuclear physics developments. Low density part of the EOS is connected with the Shen EOS.
### Relativistic mean field model with hyperons
The relativistic mean field (RMF) theory is constructed to describe nuclear matter and nuclei based on the relativistic Brueckner-Hartree-Fock theory [32], which successfully describes nuclear matter saturation. It is preferable to adopt relativistic frameworks for astrophysical applications, since they automatically satisfy causality, _i.e._ the sound velocity is always less than the speed of light.
The RMF parameter set TM1 is determined to describe binding energies and nuclear radii of finite nuclei from Ca to Pb isotopes and fulfills the nuclear matter saturation. The incompressibility of symmetric uniform matter and the symmetry energy parameters are found to be \\(K=281\\) MeV and \\(a_{sym}=36.9\\) MeV. When it is applied to neutron stars, the maximum mass of cold neutron stars with TM1 is \\(2.17M_{\\odot}\\).
The extension of the RMF to flavor SU(3) has been investigated by many authors. A typical form of the Lagrangian density including hyperons is given as [2],
\[\mathcal{L}=\sum_{B}\bar{\Psi}_{B}\left(i\partial\!\!\!/-M_{B}\right)\Psi_{B}+\frac{1}{2}\partial^{\mu}\sigma\partial_{\mu}\sigma-U_{\sigma}(\sigma)-\frac{1}{4}\omega^{\mu\nu}\omega_{\mu\nu}+\frac{1}{2}m_{\omega}^{2}\omega^{\mu}\omega_{\mu}-\frac{1}{4}\vec{R}^{\mu\nu}\cdot\vec{R}_{\mu\nu}+\frac{1}{2}m_{\rho}^{2}\vec{R}^{\mu}\cdot\vec{R}_{\mu}-\sum_{B}\bar{\Psi}_{B}\left(g_{\sigma B}\sigma+g_{\omega B}\omega\!\!\!/+g_{\rho B}\vec{R}\!\!\!/\cdot\vec{t}_{B}\right)\Psi_{B}+\frac{1}{4}\,c_{\omega}(\omega^{\mu}\omega_{\mu})^{2}+\mathcal{L}^{YY}\,,\]
\[U_{\sigma}(\sigma)=\frac{1}{2}m_{\sigma}^{2}\sigma^{2}+\frac{g_{3}}{3}\sigma^{3}+\frac{g_{4}}{4}\sigma^{4}\,,\]
\[\mathcal{L}^{YY}=\frac{1}{2}\partial_{\nu}\zeta\,\partial^{\nu}\zeta-\frac{1}{2}m_{\zeta}^{2}\zeta^{2}-\frac{1}{4}\phi_{\mu\nu}\phi^{\mu\nu}+\frac{1}{2}m_{\phi}^{2}\phi_{\mu}\phi^{\mu}-\sum_{B}\bar{\Psi}_{B}\left(g_{\zeta B}\zeta+g_{\phi B}\gamma^{\mu}\phi_{\mu}\right)\Psi_{B}\,, \tag{1}\]
where the sum runs over all the octet baryons. In this Lagrangian, hidden-strangeness (\(\bar{s}s\)) scalar and vector mesons, \(\zeta\) and \(\phi\), are included in addition to the \(\sigma\), \(\omega\) and \(\rho\) (represented by \(\vec{R}^{\mu}\)) mesons. The strength tensors of the \(\omega\), \(\rho\) and \(\phi\) mesons are denoted by \(\omega^{\mu\nu}\), \(\vec{R}^{\mu\nu}\) and \(\phi^{\mu\nu}\), respectively. The Lagrangian contains meson masses, coupling constants, and self-coupling constants as parameters.
In introducing hyperons in RMF, we have large ambiguities in hyperon-meson coupling constants. One of the ways to determine the parameters is to rely on symmetries. Schaffner and Mishustin [2] have determined hyperon-vector meson coupling constants based on the SU(6) (flavor-spin) symmetry,
\[\frac{1}{3}g_{\omega N}=\frac{1}{2}g_{\omega\Lambda}=\frac{1}{2}g_{\omega\Sigma}=g_{\omega\Xi}\,,\quad g_{\rho N}=\frac{1}{2}g_{\rho\Sigma}=g_{\rho\Xi}\,,\quad g_{\rho\Lambda}=0\,, \tag{2}\]
\[2g_{\phi\Lambda}=2g_{\phi\Sigma}=g_{\phi\Xi}=-\frac{2\sqrt{2}}{3}g_{\omega N}\,,\quad g_{\phi N}=0. \tag{3}\]
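As a consistency check (not part of the published derivation), the SU(6) relations (2) and (3) reproduce the hyperon vector couplings listed in Table 1 from the TM1 nucleon values alone:

```python
import math

g_wN, g_rN = 12.6139, 4.6322   # TM1 nucleon couplings (cf. Table 1)

g_w   = {"Lambda": 2 / 3 * g_wN, "Sigma": 2 / 3 * g_wN, "Xi": g_wN / 3}
g_rho = {"Lambda": 0.0, "Sigma": 2 * g_rN, "Xi": g_rN}
g_phi = {"Lambda": -math.sqrt(2) / 3 * g_wN,
         "Sigma":  -math.sqrt(2) / 3 * g_wN,
         "Xi":     -2 * math.sqrt(2) / 3 * g_wN}

for Y in ("Lambda", "Sigma", "Xi"):
    print(f"{Y}: g_omega={g_w[Y]:.2f}, g_rho={g_rho[Y]:.2f}, "
          f"g_phi={g_phi[Y]:.2f}")
# -> omega: 8.41, 8.41, 4.20; rho: 0, 9.26, 4.63; phi: -5.95, -5.95, -11.89
```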
Scalar mesons in RMF may partially represent contributions from some other components than \\(\\bar{q}q\\), such as \\(\\pi\\pi\\) in \\(\\sigma\\). In Ref. [2], the scalar meson couplings to hyperons have been given based on the assumption that hyperons feel potentials in nuclear and hyperon matter as,
\\[U_{\\Lambda}^{(N)}=U_{\\Sigma}^{(N)}=-30\\ {\\rm MeV}\\,,\\quad U_{ \\Xi}^{(N)}=\\ -28{\\rm MeV}\\,,\\] \\[U_{\\Sigma}^{(\\Sigma)}\\sim U_{\\Lambda}^{(\\Sigma)}\\sim U_{\\Sigma} ^{(\\Lambda)}\\sim 2U_{\\Lambda}^{(\\Lambda)}\\sim-40\\ {\\rm MeV}\\,,\\]
where \\(U_{B}^{(B^{\\prime})}\\) denotes the potential of \\(B\\) in baryonic matter at around \\(\\rho_{0}\\) composed of \\(B^{\\prime}\\). Recent developments in hypernuclear physics suggest that hyperon potentials in nuclear matter are repulsive for \\(\\Sigma\\)[20, 21, 22, 23], and weakly attractive for \\(\\Xi\\)[24, 25, 26, 27], respectively.
\\(\\Xi\\) hyperons are expected to have nuclear bound states, and the bound state spectroscopy at forthcoming facilities such as J-PARC and FAIR will give a strong constraints on the \\(\\Xi\\) potential in nuclear matter. At present, the depth of the \\(\\Xi^{-}\\)-nucleus potential has been suggested to be around 15 MeV from the analysis of twin hypernuclear formation [24] and the \\((K^{-},K^{+})\\) spectrum in the bound state region [25]. In the former, the binding energy of the \\(\\Xi^{-}\\)-nuclear system is found to be consistent with a shallow \\(\\Xi^{-}\\)-nuclear potential in an event accompanied by two single hyperfragments emitted from a \\(\\Xi^{-}\\) nuclear capture at rest (a twin hypernuclei) found in a nuclear emulsion [24]. In the latter, while the resolution of experimental data is not enough to distinguish the bound state peaks, the observed yield or the spectrum shape in the bound state region is found to be in agreement with the calculated results with \\(U_{\\Xi}^{(N)}\\simeq-15\\) MeV [25, 26, 27].
For \\(\\Sigma\\) hyperons, it is necessary to analyze continuum spectra. In the observed (quasi-)bound \\(\\Sigma\\) nucleus \\(\\frac{4}{\\Sigma}\\)He [33], the coupling effect is strong and the repulsive contribution in the \\(T=3/2\\), \\({}^{3}S_{1}\\) channel is suppressed, then it does not strongly constrain the \\(\\Sigma\\) potential in nuclear matter. The analysis of \\(\\Sigma^{-}\\) atomic data suggested a \\(\\Sigma^{-}\\)-nucleus potential having a shallow attractive pocket around the nuclear surface and repulsion inside the nucleus [34, 35, 36], but the atomic energy shift is not sensitive to the potential inside the nucleus. In the distorted wave impulse approximation (DWIA) analyses of the quasi free (QF) spectrum in the continuum region [20, 21, 22, 23], it is suggested that the \\(\\Sigma\\) hyperon would feel repulsive real potential of \\(10\\sim 90{\\rm MeV}\\). Recent theoretical analyses favor the strength of repulsion of around \\(+30{\\rm MeV}\\)[21, 22, 23]. This repulsion may come from the Pauli blocking effects between quarks due to the isovector nature of the diquark pair in \\(\\Sigma\\)[37]. In a Quark-Meson Coupling (QMC) model, medium modification of the color hyperfine interaction in the quark bag is found to be the origin of repulsive \\(\\Sigma\\) potential [38]. The \\(\\Sigma\\) potential in nuclear matter at saturation density is predicted to be around \\(+30\\) MeV (repulsion) in a quark cluster model \\(YN\\) potential [37], and a chiral model also predicts a similar repulsion [39].
From these discussions, we adopt the following potential strength as _recommended_ values,
\\[U_{\\Sigma}^{(N)}(\\rho_{0})\\simeq+30~{}{\\rm MeV}\\,,\\quad U_{\\Xi}^{(N)}(\\rho_{0 })\\simeq-15~{}{\\rm MeV}~{}. \\tag{4}\\]
The above spectroscopic studies have been performed mainly with non-relativistic frameworks for hyperons, so the potential should be regarded as the Schrödinger equivalent potential in RMF. The Schrödinger equivalent potential is related to the scalar (\(U_{s}\)) and vector (\(U_{v}\)) potentials as,
\\[U_{ B}(\\rho,E({\\bf p})) = U_{s}(\\rho)+\\frac{E({\\bf p})}{M_{ B}}\\,U_{v}(\\rho) \\tag{5}\\] \\[= g_{\\sigma B}\\sigma+g_{\\zeta B}\\zeta+\\frac{E}{M}~{}(g_{\\omega B} \\omega+g_{\\rho B}R+g_{\\phi B}\\phi)~{}~{},\\]
where \\(R\\) represents the expectation value of the \\(\\rho\\) meson. We have fixed \\(g_{\\sigma B}\\) value by fitting the hyperon potential depth in normal symmetric nuclear matter,
\\[U_{ B}^{(N)}(\\rho_{0})=g_{\\sigma B}\\sigma^{(N)}(\\rho_{0})+g_{ \\omega B}\\omega^{(N)}(\\rho_{0})~{}, \\tag{6}\\]
where \\(\\sigma^{(N)}(\\rho_{0})\\) and \\(\\omega^{(N)}(\\rho_{0})\\) represent the expectation values of \\(\\sigma\\) and \\(\\omega\\) mesons in symmetric nuclear matter at \\(\\rho_{0}\\). We adopt the parameter set TM1 for nucleon sector, and we determine \\(g_{\\sigma\\Sigma}\\) and \\(g_{\\sigma\\Xi}\\) to reproduce the potential depths of \\(\\Sigma\\) and \\(\\Xi\\) hyperons in Eq. (4) as listed in Table 1. We choose other hyperon-meson coupling constants referring to the values in Ref. [2]. We show the values of \\(g_{\\sigma\\Sigma}\\) and \\(g_{\\sigma\\Xi}\\) for different potentials in Table 2.
In the literature, an instability due to a negative effective mass of the nucleon has been reported [40, 41]. As \(\sigma\) increases, the nucleon effective mass reaches zero at \(\sigma=M_{N}/g_{\sigma N}\), where the hyperon effective masses are still positive and act to further increase \(\sigma\), leading to a negative nucleon effective mass. This effect depends very much on the \(\sigma Y\) couplings, which are small within the current sets of parameters, and we did not find this instability in the range of the data tables. However, we found that this instability occurs at very high densities/temperatures which are relevant to black hole formation [42].
### Free thermal pions
In black hole formation processes as found in Ref. [12], the temperature goes up to around 100 MeV. At these high temperatures, pion contributions become dominant. Charged pions may also condense at high densities in neutron star matter [1, 8]. To estimate the effect of pion mixture, we also prepare an EOS table including free thermal pions, assuming that the pion mass is not affected by the interaction. This is of course an oversimplification, but it serves as a first trial to include pions; further sophisticated studies are necessary.
The density of free thermal pions is calculated to be
\\[\\rho_{\\pi}=\\rho_{\\pi}^{Cond}+\\int\\frac{d^{3}p}{(2\\pi)^{3}}\\,\\frac{1}{\\exp((E_{ \\pi}({\\bf p})-\\mu_{\\pi})/T)-1}\\, \\tag{7}\\]
where \\(\\mu_{\\pi}=\\mu_{C},0,-\\mu_{C}\\) for \\(\\pi^{+},\\pi^{0},\\pi^{-}\\), respectively. When the absolute value of the chemical potential reaches the pion mass, pion condensation occurs; _i.e._ the amount of condensed \\(\\pi\\) at zero momentum can take any value at \\(\\mu_{C}=\\pm m_{\\pi}\\). We have determined the amount of condensed \\(\\pi\\) in the following way. First we solve the equilibrium condition and obtain \\(\\mu_{B}\\) and \\(\\mu_{C}\\) without condensed \\(\\pi\\). When \\(|\\mu_{C}|>m_{\\pi}\\), we set \\(|\\mu_{C}|=m_{\\pi}\\) and re-evaluate hadron densities except for the condensed \\(\\pi\\) to satisfy the condition of
\\begin{table}
\begin{tabular}{c|c c c c c} \hline \multicolumn{6}{c}{\(m_{\sigma}=511.198\) MeV, \(g_{3}=1426.466\) MeV, \(g_{4}=0.6183\), \(c_{\omega}=71.3075\)} \\ \hline \(g_{MB}\) & \(\sigma\) & \(\zeta\) & \(\omega\) & \(\rho\) & \(\phi\) \\ \hline \(N\) & 10.0289 & 0 & 12.6139 & 4.6322 & 0 \\ \(\Lambda\) & 6.21 & 6.67 & 8.41 & 0 & \(-\)5.95 \\ \(\Sigma\) & 4.36 & 6.67 & 8.41 & 9.26 & \(-\)5.95 \\ \(\Xi\) & 3.11 & 12.35 & 4.20 & 4.63 & \(-\)11.89 \\ \hline \end{tabular}
\\end{table}
Table 1: The coupling constants of the parameter sets.
\\begin{table}
\begin{tabular}{c c l} \hline \(U_{\Sigma}^{(N)}(\rho_{0})\) (MeV) & \(g_{\sigma\Sigma}\) & \\ \hline \(+90\) & 2.58 & \\ \(+30\) & 4.36 & present \\ \(0\) & 5.35 & \\ \(-10\) & 5.63 & \\ \(-30\) & 6.21 & Ref. [2] \\ \hline \(U_{\Xi}^{(N)}(\rho_{0})\) (MeV) & \(g_{\sigma\Xi}\) & \\ \hline \(-15\) & 3.11 & present \\ \(-28\) & 3.49 & Ref. [2] \\ \hline \end{tabular}
\\end{table}
Table 2: The coupling constants of \\(\\Sigma N\\) and \\(\\Xi N\\).
\\(\\rho_{{}_{B}}=\\rho_{{}_{B}}\\)(Given). Finally, the amount of condensed \\(\\pi\\) is given so as to satisfy the charge density condition, \\(\\rho_{{}_{C}}=Y_{C}\\)(Given)\\(\\rho_{{}_{B}}\\).
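The thermal term of Eq. (7) is a one-dimensional integral that is straightforward to evaluate numerically; the sketch below does so for one pion species, with the zero-momentum condensate then fixed by charge conservation as described above (the momentum cutoff is an illustrative choice):

```python
import numpy as np
from scipy.integrate import quad

M_PI = 139.57     # MeV, charged-pion mass
HBARC = 197.327   # MeV fm

def thermal_pion_density(T, mu):
    """Thermal term of Eq. (7) in fm^-3; |mu| <= m_pi is required,
    otherwise the Bose distribution diverges and a condensate forms."""
    if abs(mu) > M_PI:
        raise ValueError("pin the chemical potential at +/- m_pi first")
    def integrand(p):                              # p in MeV
        E = np.sqrt(p * p + M_PI * M_PI)
        return p * p / np.expm1((E - mu) / T)
    val, _ = quad(integrand, 0.0, 30.0 * T + 10.0 * M_PI)  # cutoff: ad hoc
    return val / (2.0 * np.pi**2 * HBARC**3)

# Once |mu_C| is pinned at m_pi, the zero-momentum condensate supplies
# whatever charge the thermal pions and other hadrons cannot.
print(thermal_pion_density(10.0, -100.0))   # example at T = 10 MeV
```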
The pion condensation in the current treatment is a simple \(s\)-wave Bose-Einstein condensation, which is different from the pion condensation derived from the \(p\)-wave \(\pi N\) interaction [8]. We note that pion condensation would be suppressed once the repulsive \(s\)-wave \(\pi N\) interaction, discussed in the energy shifts of deeply bound pionic atoms [43, 44, 45] and in pion-nucleus scattering [46], is taken into account.
### Low density
By using these potentials, we can immediately obtain an EOS of uniform dense matter with strangeness based on the RMF theory. We also need to cover the low-density region below \(\rho_{0}\), where inhomogeneous matter appears. Here we connect the uniform matter EOS with the Shen EOS [16], which is based on the same RMF parameter set TM1 and treats the inhomogeneity with the Thomas-Fermi approach.
We include the contribution from inhomogeneity by adding the free energy difference of Shen EOS values from those in uniform matter,
\\[F=F_{RMF}^{Y}+\\Delta F_{Nucl} \\tag{8}\\]
at \\(\\rho_{{}_{B}}\\leq\\rho_{0}\\), where
\\[\\Delta F_{Nucl}=F_{Shen}-F_{RMF}^{(np)}. \\tag{9}\\]
Other variables are derived in the same way as in the above equations. The deviation due to inhomogeneity \(\Delta F\) vanishes at \(\rho_{B}>\rho_{0}\). These prescriptions produce extended EOS tables for studies in astrophysics, containing inhomogeneity at low density as well as strangeness information. The compositions of \(n,p,\alpha,A,Y\) are consistent with the Shen EOS table, and the sum of the component fractions is unity.
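The connection of Eqs. (8)-(9) amounts to a simple correction below saturation density. In the sketch below, the three free energies are assumed to be supplied as functions of density, e.g. interpolations of the respective tables, and \(\rho_{0}=0.145\) fm\({}^{-3}\) is the TM1 saturation density:

```python
RHO0 = 0.145   # fm^-3, TM1 saturation density

def free_energy(rho_B, F_rmf_Y, F_shen, F_rmf_np):
    """Eqs. (8)-(9): uniform hyperonic RMF free energy plus the
    inhomogeneity correction from the Shen EOS below saturation.
    The last three arguments are assumed callables of rho_B."""
    if rho_B <= RHO0:
        return F_rmf_Y(rho_B) + (F_shen(rho_B) - F_rmf_np(rho_B))
    return F_rmf_Y(rho_B)
```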
### Tabulation of thermodynamical quantities
Thermodynamical quantities are provided in the data table as a function of baryon mass density \\(\\rho_{{}_{B}}\\), charge ratio \\(Y_{C}\\), and temperature \\(T\\). Here \\(Y_{C}\\) means the charge ratio defined as \\(Y_{C}=n_{C}/n_{B}\\) and \\(n_{C}\\) is a charge density. See Appendix for the list of quantities and their definitions, which are slightly revised from the original table of Shen EOS.
For the purpose of numerical simulations, we prepare the EOS table containing the contributions of leptons and photons by adding the energy, pressure and entropy from electrons, positrons and photons to the hadronic EOS. We treat electrons and positrons as ideal Fermi gas with the finite rest mass and calculate photons according to the standard expressions for radiations.
The baryon mass density, charge ratio and temperature cover the following range,
* \\(\\rho_{{}_{B}}=10^{5.1}\\sim 10^{15.4}\\) (\\(g\\)/cm\\({}^{3}\\)) (104 points)
* \\(Y_{C}=0\\) and \\(0.01\\sim 0.56\\) (72 points)
* \\(T=0\\) and \\(0.1\\sim 100\\) (MeV) (32 points)Mesh points for \\(\\rho_{{}_{B}}\\), \\(Y_{C}(>0)\\) and \\(T(>0)\\) are taken as approximate geometric sequences with \\(\\Delta\\log_{10}\\rho_{{}_{B}}=0.1\\), \\(\\Delta\\log_{10}Y_{C}=0.025\\) (\\(-2.00\\leq\\log_{10}Y_{C}\\leq-0.25\\)) and \\(\\Delta\\log_{10}T\\simeq 0.1\\), respectively. These ranges and mesh points are the same as those in Shen EOS. By connecting smoothly in the way described above, the EOS table is combined with Shen EOS at lower densities below \\(\\rho_{0}\\) while it includes full baryon octet at high densities so that one can see the effects of hyperon mixture. Some tabulated quantities in the EOS table need attentions. The values of the mass \\(A\\) and charge number \\(Z\\) of heavy nucleus are taken from Shen EOS at densities below \\(\\rho_{0}\\) and are set to be zero above \\(\\rho_{0}\\). Similarly, the fraction of \\(\\alpha\\)-particle and heavy nucleus are taken from Shen EOS at low densities and are set to be zero at high densities.
## 3 Properties of EOS tables and astrophysical applications
We report the properties of dense matter in the present EOS table with hyperons (EOS\\(Y\\)) and their applications to neutron stars and supernovae. We adopt hereafter the case of \\((U_{\\Sigma}^{(N)}(\\rho_{0}),U_{\\Xi}^{(N)}(\\rho_{0}))=(+30~{}{\\rm MeV},-15~{}{ \\rm MeV})\\) as a standard case, which is currently the most recommended set of hyperon potentials. We also consider the case with pion contribution (EOS\\(Y\\pi\\)) and the attractive hyperon potential case [2] \\((U_{\\Sigma}^{(N)}(\\rho_{0}),U_{\\Xi}^{(N)}(\\rho_{0}))=(-30~{}{\\rm MeV},-28~{}{ \\rm MeV})\\), abbreviated as EOS\\(Y\\)(SM).
### Neutron star matter
We first study the EOS of neutron star matter, which is under the \\(\\beta\\) equilibrium at zero temperature. We here add electron and muon contributions under the \\(\\beta\\) equilibrium and charge neutrality conditions. We consider uniform matter ignoring finite nuclear effects.
We show the particle compositions in neutron star matter in Fig. 1 to see the appearance of new degrees of freedom. We display the cases of the nucleonic (TM1, upper-left) and hyperonic (EOS\(Y\), upper-right) EOS. Results with the hyperonic EOS with attractive \(\Sigma\) potential (EOS\(Y\)(SM), lower-left) and the hyperonic EOS with pions (EOS\(Y\pi\), lower-right) are also shown for comparison. The particle composition of neutron star matter is very sensitive to the choice of hyperon potentials. With an attractive \(\Sigma\) potential, \(\Sigma^{-}\) appears at lower densities than \(\Lambda\). With a repulsive \(\Sigma\) potential, \(\Lambda\) appears first, followed by \(\Xi^{-}\) and \(\Xi^{0}\). This behavior is different from that in previous works adopting attractive potentials [1, 2, 3, 4, 5], as pointed out in Refs. [2, 3, 4, 38]. When we allow the appearance of pions, condensed pions (\(\pi^{-}\)) appear prior to hyperons. With \(\pi^{-}\) condensation, the charge chemical potential is restricted to \(|\mu_{C}|\leq m_{\pi}\) and the proton fraction becomes larger, so the neutron chemical potential is reduced. As a result, the threshold densities of hyperons are shifted up. The density of \(\Lambda\) appearance is about \(0.37~{\rm fm}^{-3}\), and other hyperons such as \(\Xi^{-}\) are also suppressed.
In the left panel of Fig. 2, we show the energy per baryon (\(E/B\)) and the chemical potentials (\(\mu_{n}\) and \(\mu_{p}\)) in neutron star matter for the nucleonic EOS (TM1), EOS\(Y\) and EOS\(Y\pi\). Compared with the nucleonic EOS, \(E/B\) is much lower in EOS\(Y\) at high densities. The chemical potentials are correspondingly suppressed with hyperons. There are several origins of this energy gain. First, the nucleon Fermi energy decreases due to the hyperon mixture. Secondly, the lepton contribution is suppressed when negatively charged hyperons emerge. In addition, the repulsive vector potential becomes small, because the \(\omega Y\) couplings are smaller than \(\omega N\) and the isospin asymmetry becomes smaller when negatively charged hyperons appear, as shown by the long-dashed lines in Fig. 3.
For \\(E/B\\) and \\(\\mu_{n}\\), pionic effects are small and only visible around \\(\\rho_{ B}\\sim 0.4\\) fm\\({}^{-3}\\), while we find large differences in \\(\\mu_{p}\\). The equality \\(\\mu_{C}=\\mu_{p}-\\mu_{n}=-m_{\\pi}\\) under \\(\\pi^{-}\\) condensation reads the Fermi energy relation, \\(E_{F}(n)=E_{F}(p)+m_{\\pi}\\). Since a neutron on the Fermi surface is replaced with a proton and a \\(\\pi^{-}\\) having the same total energy, we have to pay the cost of the pion rest mass energy in exchange for the Fermi energy reduction and symmetry energy gain. At higher densities where hyperons appear, pionic effects becomes smaller, and disappear at \\(\\rho_{ B}=0.88\\) fm\\({}^{-3}\\).
The \\(s\\)-wave pion condensation would be suppressed when we include the \\(\\pi N\\)
\\begin{table}
\\begin{tabular}{c|c c c c} \\hline EOS & TM1/Shen EOS & EOS\\(Y\\)(SM) & EOS\\(Y\\) & EOS\\(Y\\pi\\) \\\\ \\hline Constituents & \\(Ne(\\mu)\\) & \\(NYe(\\mu)\\) & \\(NYe(\\mu)\\) & \\(NY\\pi e(\\mu)\\) \\\\ \\hline \\(U_{\\Sigma}^{(N)}\\) (MeV) & & \\(-30\\) & \\(+30\\) & \\(+30\\) \\\\ \\(U_{\\Xi}^{(N)}\\) (MeV) & & \\(-28\\) & \\(-15\\) & \\(-15\\) \\\\ \\hline \\(\\rho_{(thr)}\\)(fm\\({}^{-3}\\)) & & & & \\\\ \\(\\Lambda\\) & & \\(0.32\\) & \\(0.32\\) & \\(0.37\\) \\\\ \\(\\Sigma^{-}\\) & & \\(0.29\\) & \\(1.14\\) & \\(1.1\\) \\\\ \\(\\Sigma^{0}\\) & & \\(0.57\\) & \\(1.34\\) & \\(1.3\\) \\\\ \\(\\Sigma^{+}\\) & & \\(0.69\\) & \\(1.47\\) & \\(1.5\\) \\\\ \\(\\Xi^{-}\\) & & \\(0.43\\) & \\(0.40\\) & \\(0.56\\) \\\\ \\(\\Xi^{0}\\) & & \\(0.62\\) & \\(0.71\\) & \\(0.74\\) \\\\ \\(\\pi^{-}\\) & & & & \\(0.16(0.88)\\) \\\\ \\hline \\(M_{NS}^{(max)}(M_{\\odot})\\) & \\(2.17\\) & \\(1.55\\) & \\(1.63\\) & \\(1.65\\) \\\\ \\(\\rho_{B}^{(max)}\\)(fm\\({}^{-3}\\)) & \\(1.12\\) & \\(0.79\\) & \\(0.79\\) & \\(0.97\\) \\\\ \\(M_{NS}^{(thr)}(M_{\\odot})\\) & & \\(1.17(\\Sigma^{-})\\) & \\(1.28(\\Lambda)\\) & \\(1.22(\\Lambda)\\) \\\\ & & & & \\(0.51(\\pi^{-})\\) \\\\ \\hline \\end{tabular}
\\end{table}
Table 3: Constituents, assumed hyperon potentials, threshold densities, maximum masses of neutron stars, central densities giving maximum masses of neutron stars and neutron star masses at the threshold central densities in nucleonic EOS (TM1/Shen EOS [17, 16]), hyperonic EOS with attractive hyperon potentials (EOS\\(Y\\)(SM) [2]), hyperonic EOS with repulsive hyperon potentials (EOS\\(Y\\), present work) and hyperonic EOS with repulsive hyperon potentials including pions (EOS\\(Y\\pi\\), present work). For \\(\\pi^{-}\\), the maximum density of condensation is also shown in parentheses. Threshold densities of protons and muons are \\(1.1\\times 10^{-4}\\) fm\\({}^{-3}\\) and \\(0.11\\) fm\\({}^{-3}\\), respectively.
interaction. We evaluate the pion energy by using the potential of the form [43, 44, 45, 46],
\\[U_{s}(\\pi^{-})=-\\frac{2\\pi}{m_{\\pi}}\\left[\\left(1+\\frac{m_{\\pi}}{M_{N}}\\right)(b _{0}\\rho_{ B}+b_{1}\\delta\\rho)+\\left(1+\\frac{m_{\\pi}}{2M_{N}}\\right)\\mbox{Re}B_{0}\\, \\rho_{ B}^{2}\\right]\\,, \\tag{10}\\]
where \\(b_{1}=b_{1}^{\\rm free}/(1-\\alpha\\rho_{ B}/\\rho_{0}),\\delta\\rho=\\rho_{n}-\\rho_{p}\\). As typical examples, we adopt the parameter sets from the analyses of pionic atom data; \\(b_{0}=-0.023/m_{\\pi}\\), \\(b_{1}=-0.085/m_{\\pi}\\) (\\(\\alpha=0\\)), \\(\\mbox{Re}B_{0}=-0.021/m_{\\pi}^{4}\\) (less repulsive) [44], and \\(b_{0}=-0.0233/m_{\\pi}\\), \\(b_{1}^{\\rm free}=-0.1473/m_{\\pi}\\), \\(\\alpha=0.367\\) (\\(b_{1}=-0.1149/m_{\\pi}\\) at \\(\\rho_{ B}=0.6\\rho_{0}\\)), \\(\\mbox{Re}B_{0}=-0.019/m_{\\pi}^{4}\\) (more repulsive) [45]. In Fig. 3, we show the pion energy, \\(E_{\\pi}=\\sqrt{m_{\\pi}^{2}+2m_{\\pi}U_{s}}\\), calculated with these potentials and proton fraction in TM1 EOS. With these potentials, we find that the existence of the \\(s\\)-wave pion condensed region, \\(E_{\\pi}<\\mu_{e}\\), depends on the pion optical potential parameters. Since the pion potential above the normal nuclear density is not yet known, the realization of pion condensation may be marginal and model-dependent.
We apply the four EOSs of neutron star matter discussed above (TM1 EOS, EOS\(Y\)(SM), EOS\(Y\), EOS\(Y\pi\)) to the hydrostatic structure of neutron stars by solving the Tolman-Oppenheimer-Volkoff (TOV) equation. We plot the gravitational mass of neutron
Figure 1: Composition of neutron star matter in nucleonic EOS (TM1, upper-left), hyperonic EOS with attractive potential (EOSY(SM), lower-left), hyperonic EOS with repulsive potential without (EOS\\(Y\\), upper-right) and with pions (EOS\\(Y\\pi\\), lower-right). The number fraction of particles are plotted as functions of baryon density. The species of particles are denoted as in the legend.
Figure 3: Meson expectation values and electron chemical potential in neutron star matter for TM1 EOS (thin lines) and hyperon EOS(EOSY) (thick lines). Thin solid lines show the pion energy, \\(E_{\\pi}=\\sqrt{m_{\\pi}^{2}+2m_{\\pi}U_{s}(\\pi^{-})}\\), where lower less repulsive (upper more repulsive) line shows the results with the pion optical potential in Ref. [44] ([45]) by using the proton fraction in TM1 EOS.
Figure 2: EOS of neutron star matter and supernova matter for nucleonic EOS (TM1/Shen EOS), hyperon EOS(EOSY), hyperon EOS with free pions (EOS\\(Y\\pi\\)). The upper panels display the energy per baryon, and the lower panels the chemical potentials of proton and neutron. The dashed line is for the nucleonic EOS, the solid line is for hyperon EOS (EOS\\(Y\\)), and dotted line is for hyperon EOS with pions (EOS\\(Y\\pi\\)).
stars as a function of central baryon mass density in Fig. 4. The maximum mass of neutron stars in EOS\(Y\) is smaller than in the nucleonic EOS because of the softening from hyperons. The maximum mass is 1.63 \(M_{\odot}\) for EOS\(Y\), in contrast to 2.17 \(M_{\odot}\) for the nucleonic EOS, when we adopt repulsive potentials for hyperons. The maximum mass is further reduced to 1.55 \(M_{\odot}\) in EOS\(Y\)(SM) with attractive potentials. The neutron star masses with EOS\(Y\pi\) are reduced in the mid range of central densities, 0.16 fm\({}^{-3}<\rho_{B}<0.8\) fm\({}^{-3}\), but the maximum mass (1.65 \(M_{\odot}\)) is almost the same. This is because the maximum mass is mainly determined by the EOS at densities around 0.8 fm\({}^{-3}\) or more, where the condensed pion density is small. The central density of a typical neutron star having 1.4 \(M_{\odot}\) is 0.35 fm\({}^{-3}\) in the nucleonic EOS (TM1), which is a little above the threshold density of \(\Lambda\) in EOS\(Y\). In this case, hyperons are limited to the core region, and the neutron star mass does not suffer a large reduction, as seen in Fig. 4. We summarize the neutron star masses and the threshold densities for hyperons in neutron star matter in Table 3.
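For reference, a self-contained TOV integrator is sketched below, with a toy polytropic EOS standing in for the tabulated ones; in practice eps_of_p would interpolate the EOS table, and all numerical choices (K, \(\Gamma\), central pressure, stopping tolerance) are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geometrized units (G = c = 1): r, m in km; eps, p in km^-2.
def make_polytrope(K=100.0, gamma=2.0):
    """Toy polytropic EOS standing in for the tabulated EOSY tables."""
    def eps_of_p(p):
        p = max(p, 0.0)
        rho = (p / K) ** (1.0 / gamma)     # rest-mass density
        return rho + p / (gamma - 1.0)     # total energy density
    return eps_of_p

def tov_mass(p_c, eps_of_p):
    """Integrate the TOV equations from a central pressure p_c outward;
    returns the gravitational mass in solar masses."""
    def rhs(r, y):
        m, p = y
        eps = eps_of_p(p)
        dm = 4.0 * np.pi * r**2 * eps
        dp = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
        return [dm, dp]
    surface = lambda r, y: y[1] - 1e-10 * p_c   # stop near zero pressure
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(rhs, [1e-6, 50.0], [0.0, p_c], events=surface,
                    max_step=0.05, rtol=1e-8)
    return sol.y[0, -1] / 1.4766                # km -> M_sun

print(tov_mass(1.5e-4, make_polytrope()))       # one point of an M-rho_c curve
```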
Figure 4: Neutron star masses are shown as functions of central density. The dashed, thin solid, thick solid, dotted lines show the results with nucleonic EOS (TM1 [17]), hyperonic EOS with attractive hyperon potentials (EOS\\(Y\\)(SM) [2]), hyperonic EOS with repulsive hyperon potentials (EOS\\(Y\\)) and hyperonic EOS with pions (EOS\\(Y\\pi\\)), respectively. Filled points show the maximum neutron star masses, and open circles show the threshold densities of hyperons and pions.
### Hyperonic matter at finite temperatures
We next study the EOS of supernova matter, where the hadronic charge fraction (\(Y_{C}\)) is fixed at finite temperature. We here show the results including electron and photon contributions. Finite nuclear effects as in the Shen EOS are included. The treatment of leptons in supernova matter is explained in A.4.
In order to demonstrate the contents of the EOS table, we show the energy and compositions as functions of baryon density by choosing \\(T=10\\) MeV and \\(Y_{C}=0.4\\) as an example. In the upper-right panel of Fig. 2, we plot energy per baryon (\\(E/B\\)) in EOS\\(Y\\) together with the results in Shen EOS for comparison. At \\(\\rho_{{}_{B}}>0.4\\) fm\\({}^{-3}\\), the energy is lower in EOS\\(Y\\) than in Shen EOS due to hyperons (See Fig. 5). In the lower-right panel of Fig. 2, neutron and proton chemical potentials are shown. In EOS\\(Y\\), the difference of chemical potentials between neutron and proton is small and neutron chemical potential becomes smaller than proton chemical potential at 0.7 fm\\({}^{-3}\\). This is because more SU\\({}_{f}\\)(3) symmetric matter is preferred toward high densities, and the charge fraction under \\(\\beta\\) equilibrium decreases below \\(Y_{C}=0.4\\) at high densities.
We show the particle compositions as functions of baryon density in EOS\(Y\) and EOS\(Y\pi\) at \(Y_{C}=0.4\) in the left panels of Fig. 5. In such
Figure 5: Composition of supernova matter at \\((T,Y_{C})=(10\\ {\\rm MeV},0.4)\\) (left) and \\((T,Y_{C})=(10\\ {\\rm MeV},0.2)\\) (right) in the hyperonic EOS table without (EOS\\(Y\\), upper) and with (EOS\\(Y\\pi\\), lower) pions. The number fraction of particles are plotted as functions of baryon density. The species of particles are denoted as in the legend.
matter (\\(Y_{C}=0.4\\)), the fraction of \\(\\Lambda\\) particle grows at around \\(\\rho_{{}_{B}}\\sim 0.4\\) fm\\({}^{-3}\\), and becomes comparable to nucleons at higher densities. The fractions of other strange baryons increase slowly and remains small until very high density. In the right panels of Fig. 5, we show the particle compositions at \\(Y_{C}=0.2\\). With this small charge fraction, total hyperon fraction reaches 1 % at 0.25 fm\\({}^{-3}\\).
In Fig. 6, we plot the contour map of the fraction of hyperons (the sum of strange baryons) in the density-temperature plane. In order to have a significant amount of hyperons, one needs high density or temperature. In the supernova core, the entropy per baryon is typically around 1-2 \(k_{B}\); therefore, one needs high densities of 0.3-0.4 fm\({}^{-3}\) to have a 1% mixture of hyperons, and 0.45 fm\({}^{-3}\) for 10%.
In supernova matter, the effects of hyperons and pions are limited. When the isospin asymmetry is not high (e.g., \(Y_{C}=Y_{e}=0.4\)), the total amount of hyperons, which appear at 0.4 fm\({}^{-3}\), is around 1% of the baryons. The value of \(Y_{C}\) remains high due to neutrino trapping during the collapse and bounce [48]. However, after the deleptonization due to neutrino emission, \(Y_{C}\) becomes smaller and hyperons may appear in the proto-neutron star cooling process. Dense matter at higher densities and temperatures may also appear in black hole formation, and hyperon effects can be expected in such processes.
Figure 6: Hyperon fraction contours and adiabatic paths in supernova matter at \\(Y_{C}=0.4\\) from the hyperonic EOS table without pions (EOSY). The contours of the fixed number fraction of hyperons (sum of strange baryons) are shown by dashed lines. The solid lines denote the contour of fixed entropy per baryon (isentropy). The dotted line shows the trajectory of the dense matter at center during core collapse and bounce.
### Applications to core-collapse supernovae
As an application of the EOS table with hyperons, we perform numerical calculations of the hydrodynamics of core-collapse supernovae. This calculation is aimed at testing the data of the EOS table in numerical simulations and at providing basic information on the properties of the EOS in supernovae, such as the appearance of hyperons. For this purpose, we calculate the adiabatic collapse of the iron core of a massive star of \\(15M_{\\odot}\\) [47]. In the same way as the hydrodynamical calculations in Sumiyoshi et al. [10], we calculate the general relativistic hydrodynamics under spherical symmetry without neutrino transfer, which is time-consuming, by assuming that the electron fraction is fixed to the initial value in the stellar model. Numerical simulations by neutrino-radiation hydrodynamics are in progress.
We have found that the adiabatic collapse of the \\(15M_{\\odot}\\) star with EOS\\(Y\\) leads to a prompt explosion. This _model_ explosion is caused by the large electron fraction assumed and is quite similar to the case obtained with Shen EOS [10]. The explosion energy is almost the same as in the case with Shen EOS, and the difference turns out to be small, within 0.5%. We plot the trajectory of the density and temperature of the central grid in the hydrodynamical calculation in Fig. 6.
We have examined the appearance of hyperons during the evolution of core collapse and bounce. We find that the fraction of hyperons turns out to be very small, below \\(10^{-3}\\). This is because the density does not increase drastically even at the core bounce in the current model. The peak density is 0.24 fm\\({}^{-3}\\), which is lower than the threshold density of 0.60 fm\\({}^{-3}\\) at temperature 21.5 MeV and electron fraction 0.42, where \\(\\Lambda\\) hyperons appear at the same order as nucleons. This small mixture does not largely affect the dynamics in the model explosion.
A large electron fraction leads to a large proton fraction and, therefore, suppresses the appearance of hyperons [48]. We note here that this is the outcome of simple adiabatic hydrodynamics without the treatment of neutrinos. When electron captures and neutrino trapping are taken into account [11], the electron fraction might be smaller than the current value, which may enhance the hyperon appearance. Hyperons will definitely appear in the thermal evolution of proto-neutron stars after 20 seconds [19], during which the central density becomes high and the electron fraction gets smaller. In the recent findings of black hole formation from massive stars of \\(40M_{\\odot}\\) [12, 13], the hyperon EOS is necessary since the density becomes extremely high during the collapse toward the black hole. It would be interesting to perform full simulations of core-collapse supernovae and related astrophysical phenomena.
## 4 Summary and discussion
In this paper, we have presented several sets of equation of state (EOS) of supernova matter (finite temperature nuclear matter with lepton mixture) including hyperons (EOS\\(Y\\)) using an SU\\({}_{f}\\)(3) extended relativistic mean field (RMF) model with a wide coverage of density, temperature, and charge fraction. The supernova matter EOS is one of the most essential inputs in numerical simulations of core-collapse supernovae. At present, two sets of supernova matter EOS (Lattimer-Swesty EOS [15] and Shen EOS [16]) are widely used. The constituents in these EOSs are nucleons and nuclei, so it is desirable to include hyperons, which are believed to appear at high densities. Here we have extended the relativistic EOS by Shen et al. [16] by introducing hyperons.
We start from the RMF parameter set TM1 for the nucleon sector [17], which well describes the bulk properties of nuclei in a wide mass and isospin range. For the hyperon-meson coupling constants, we adopt the values in Ref. [2] as the starting point. Hyperon-vector meson couplings are fixed based on the flavor-spin SU(6) symmetry, and hyperon-scalar meson couplings are determined to give the hyperon potentials in nucleonic and hyperonic matter. Hyperon potentials in nuclear matter around the normal density, \\(U_{Y}^{(N)}(\\rho_{0})\\), are accessible in hypernuclear production reactions. Recent developments in hypernuclear physics suggest the following potentials for \\(\\Sigma\\) [21, 22, 23] and \\(\\Xi\\) baryons [24, 25, 26, 27]:
\\[U_{\\Sigma}^{(N)}(\\rho_{0})\\simeq+30\\ {\\rm MeV},\\quad U_{\\Xi}^{(N)}(\\rho_{0})\\simeq-15\\ {\\rm MeV}. \\tag{1}\\]
These potentials are consistent with those in the quark-cluster model for \\(YN\\) interaction [37] and a chiral model prediction [39]. In this paper, we have modified \\(g_{\\sigma\\Sigma}\\) and \\(g_{\\sigma\\Xi}\\) to explain these potentials, while other coupling constants are unchanged from those in Ref. [2].
The \\(\\Sigma\\) potential in nuclear matter still has ambiguities. A recent theoretical analysis [23] has shown that the shape and absolute values of quasi-free \\(\\Sigma\\) production spectra are well explained by a Woods-Saxon potential with \\(U_{\\Sigma}(\\rho_{0})\\simeq+15\\ {\\rm MeV}\\). On the other hand, an attractive pocket of a few MeV is known to be required to explain the energy shift of the \\(\\Sigma^{-}\\) atom [34, 35, 36], so the central repulsion would need to be stronger to cancel the effects of this pocket. Other theoretical analyses [21, 22] suggest that \\(U_{\\Sigma}(\\rho_{0})\\simeq+30\\ {\\rm MeV}\\) would be preferred in order to explain the shape or the absolute yield of \\(\\Sigma\\) production spectra. In any of these analyses, the \\(\\Sigma\\) potential should be repulsive or less attractive than that for \\(\\Lambda\\), so the effects of \\(\\Sigma\\) hyperons are much smaller than in the attractive case. It is to be noted that the ambiguities in \\(U_{\\Sigma}\\) do not affect the supernova EOS very much as long as the \\(\\Sigma\\) hyperon fraction is small.
Formation of finite nuclei at low densities is another important ingredient in supernova simulations. In the present EOS, the effects of finite nuclear formation are included by using the Shen EOS [16], in which the formation of finite nuclei is treated in the Thomas-Fermi approximation. Effects from finite nuclei are evaluated by the difference of the free energy and its derivatives in the Shen EOS from those of the EOS of uniform nucleonic matter (TM1) without hyperons at each \\((T,\\rho_{{}_{B}},Y_{C})\\).
We have examined the properties of the EOS with hyperons in neutron star matter (\\(T=0\\), \\(\\beta\\)-equilibrium) and supernova matter. Hyperon effects are significant in neutron stars, as discussed already in the literature [1, 2, 3, 4, 5, 6, 7]. Hyperons appear at around \\(\\rho_{{}_{B}}\\simeq 2\\rho_{0}\\) in cold matter under \\(\\beta\\)-equilibrium and soften the EOS. The maximum mass of neutron stars decreases from \\(2.17M_{\\odot}\\) to \\(1.55M_{\\odot}\\) and \\(1.63M_{\\odot}\\) when hyperons are included with attractive and repulsive hyperon potentials, respectively. In the prompt phase of supernova explosions, on the other hand, hyperon effects are found to be small in a spherical, adiabatic collapse of a \\(15M_{\\odot}\\) star computed with hydrodynamics without neutrino transfer. In the case with \\(Y_{C}=Y_{e}=0.4\\) as a typical example, the hyperon fraction becomes meaningful (\\(Y_{Y}>1\\%\\)) at \\(\\rho_{{}_{B}}>0.4\\) fm\\({}^{-3}\\) or \\(T>40\\) MeV. In the spherical and adiabatic core collapse calculation of the \\(15M_{\\odot}\\) star [47], the maximum density and temperature are found to be \\((\\rho_{{}_{B}},T)=(0.24\\ {\\rm fm}^{-3},22\\ {\\rm MeV})\\), which do not reach the above hyperon mixture region. It should be noted that this conclusion is model dependent. Hyperons may appear more abundantly in more realistic calculations with neutrino transfer, which are in progress.
We have also discussed the roles of pions in neutron stars and supernovae. In this work, we have examined the effects of free thermal pions [1]. In neutron star matter, the absolute value of the charge chemical potential \\(\\mu_{C}=\\mu_{p}-\\mu_{n}\\) is calculated to be larger than the pion mass at \\(\\rho_{{}_{B}}\\gtrsim\\rho_{0}\\), thus charged pions can condense as long as the pion-nucleon interaction is not very repulsive. The EOS softening from pions is moderate and limited to the density range \\(\\rho_{{}_{B}}<0.88\\ {\\rm fm}^{-3}\\) without (\\(p\\)-wave) \\(\\pi N\\) attraction, so the maximum mass of neutron stars (\\(1.65M_{\\odot}\\)) is almost the same as that without pions. In supernova explosions, temperatures are not very high and pion contributions are small. At higher temperatures, as in the case of black hole formation or high energy heavy-ion collisions, the role of pions should be significant.
There are several points to be improved for a deeper understanding of the supernova matter EOS. First, it is necessary to examine the coupling constants of hyperons with the hidden strangeness mesons, \\(\\zeta\\) and \\(\\phi\\), which critically decide the \\(YY\\) interaction. In this paper, we have adopted \\(g_{\\zeta Y}\\) and \\(g_{\\phi Y}\\) in Ref. [2], where the couplings are determined based on the SU(6) relation and a conjecture on the hyperon potential depth in hyperon matter. An alternative way to determine these couplings would be to invoke various hypernuclear and hyperon atom data, such as the double \\(\\Lambda\\) hypernuclear binding energy in \\({}^{6}_{\\Lambda\\Lambda}\\)He [49] and atomic energy shifts in the \\(\\Sigma^{-}\\) atom [34, 35, 36]. In Ref. [36], Tsubakihara et al. have determined the scalar couplings \\(g_{\\sigma\\Lambda}\\), \\(g_{\\zeta\\Lambda}\\), \\(g_{\\sigma\\Sigma}\\) and \\(g_{\\zeta\\Sigma}\\) by using the double \\(\\Lambda\\) hypernuclear bond energy and the atomic energy shift of the \\(\\Sigma^{-}\\) atom, while the vector couplings are fixed from the SU\\({}_{f}\\)(3) relations. At present, the available data are so scarce that we cannot fix these couplings unambiguously, but the forthcoming J-PARC and FAIR facilities will provide much more data on the \\(YY\\) interaction. Next, it is desirable to respect chiral symmetry in order to describe very dense matter, in which spontaneously broken chiral symmetry will be partially restored. A chiral symmetric RMF model [50] has recently been developed based on a scalar meson self-energy derived in the strong coupling limit of lattice QCD, and it describes binding energies and radii of normal nuclei with a precision comparable to TM1. An SU\\({}_{f}\\)(3) extended version of this chiral RMF is now being developed [36]. Finally, the distribution of finite nuclear species may be important at low densities [51]. At finite temperatures, the entropy increase by the formation of various fragments will contribute to gaining free energy compared with the single heavy-nucleus configuration assumed in the Thomas-Fermi approach. It is not straightforward but challenging to include a nuclear statistical equilibrium (NSE) distribution in a consistent way in the EOS based on RMF.
These challenging developments of hadronic and nuclear physics are important to understand the extreme conditions in compact objects and to clarify the mechanism of explosive phenomena in astrophysics.
## Acknowledgements
We would like to thank Dr. Daisuke Jido and Mr. Takayasu Sekihara for useful discussions. This work is supported in part by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research under the grant numbers, 15540243, 1707005, 18540291, 18540295 and 19540252, the 21st-Century COE Program \"Holistic Research and Education Center for Physics of Self-organization Systems\", and Yukawa International Program for Quark-hadron Sciences (YIPQS). The authors also would like to thank National Astronomical Observatory of Japan (NAOJ), Japan Atomic Energy Agency (JAEA), and Yukawa Institute for Theoretical Physics (YITP) for computing resources.
## Appendix A Note on the EOS table
### A.1 Locations of data tables
The data tables are available on
http://nucl.sci.hokudai.ac.jp/~chikako/EOS/index.html
or upon request to A. Ohnishi. On the web page, tables under the names '***.tbl' (EOS table) and '***.rat' (composition table) are available for the sets using \\(U_{\\Sigma}^{(N)}=-30,0,+30,+90\\) MeV at normal density. The recommended potential is \\(U_{\\Sigma}^{(N)}=+30\\) MeV. For other hyperons, we adopt the potential depths \\(U_{\\Lambda}^{(N)}=-30\\) MeV and \\(U_{\\Xi}^{(N)}=-15\\) MeV as described in section 2.1. The EOS table with thermal pions is available in addition to the standard choice without pions. As described in section 2.2, the set with free thermal pions is aimed only at the assessment of pion contributions in a simple treatment; a more careful treatment of pion interactions is necessary.
### A.2 Definition of quantities in the EOS table
We list the definitions of the physical quantities tabulated in the EOS table, '***.tbl'. We note that the order of quantities in the list is partly different from the original Shen EOS table [16], since the current table contains lepton and photon contributions. The definitions of the quantities follow the ones in Shen EOS unless stated specifically below.
1. Logarithm of baryon mass density: \\(\\log_{10}(\\rho_{{}_{B}})\\) [g/cm\\({}^{3}\\)] The baryon mass density \\(\\rho_{{}_{B}}\\) is defined by \\[\\rho_{{}_{B}}=M_{u}n_{B},\\] (A.1) where \\(M_{u}\\) and \\(n_{B}\\) are the atomic mass unit and the baryon number density, respectively.
2. Charge ratio: \\(Y_{C}=n_{C}/n_{B}\\) The charge density \\(n_{C}\\) is defined by \\[n_{C}=\\sum_{B}q_{i}n_{i},\\] (A.2) where \\(q_{i}\\) and \\(n_{i}\\) are the charge and the number density of the baryons, and the sum runs over the baryon octet.
3. Entropy per baryon: \\(S/B\\) [\\(k_{B}\\)] The entropy per baryon contains the contributions from hadrons, leptons and photons.
4. Temperature: \\(T\\) [MeV]
5. Pressure: \\(P\\) [MeV/fm\\({}^{3}\\)] The pressure contains the contributions from hadrons, leptons and photons.
6. Chemical potential of neutron: \\(\\mu_{n}\\) [MeV] The chemical potential of the neutron is measured relative to the nucleon mass \\(M_{N}=938\\) MeV. It is connected with the baryon chemical potential as \\[\\mu_{n}=\\mu_{B}-M_{N}.\\] (A.3)
7. Chemical potential of proton: \\(\\mu_{p}\\) [MeV] The chemical potential of the proton is measured relative to the nucleon mass \\(M_{N}\\). The relation to the baryon and charge chemical potentials reads \\[\\mu_{p}=\\mu_{B}+\\mu_{C}-M_{N}.\\] (A.4)
8. Chemical potential of electron: \\(\\mu_{e}\\) [MeV] The chemical potential of the electron is determined by the charge neutrality condition \\[n_{e}=\\sum_{B}q_{i}n_{i}.\\] (A.5) See below for the description of the lepton contributions.
9. Free neutron fraction: \\(Y_{n}\\) In the uniform matter, the free neutron fraction is simply the ratio, \\(n_{n}/n_{B}\\). For the non-uniform matter at low density, the definition follows the one in Shen EOS.
10. Free proton fraction: \\(Y_{p}\\) The ratio, \\(n_{p}/n_{B}\\), as in \\(Y_{n}\\) above. For the fractions of strange baryons, see the description of the composition table below.
11. Mass number of heavy nucleus: \\(A\\) The values of items (11) to (14) are taken from the Shen EOS table, or set to zero above normal nuclear density.
12. Charge number of heavy nucleus: \\(Z\\)
13. Heavy nucleus fraction: \\(X_{A}\\)
14. Alpha-particle fraction: \\(X_{\\alpha}\\)
15. Energy per baryon: \\(E/B\\) [MeV] The energy per baryon is defined with respect to the free nucleon mass \\(M_{N}\\) and contains the contributions from hadrons, leptons and photons.
16. Free energy per baryon: \\(F/B\\) [MeV] The free energy per baryon is defined with respect to the atomic mass unit \\(M_{u}\\) and contains the contributions from hadrons, leptons and photons (see the consistency sketch after this list).
17. Effective mass: \\(M^{*}\\) [MeV] The effective mass of nucleon is obtained in the RMF theory for uniform matter. In non-uniform matter, we replace the effective mass \\(M_{N}^{*}\\) by the free nucleon mass \\(M_{N}\\).
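To make the conventions of items 1, 15 and 16 concrete, the following minimal Python sketch converts a tabulated density to a baryon number density and checks the thermodynamic relation \\(F=E-TS\\), shifted by the two different reference masses. The function name and the row values are illustrative placeholders, not taken from the actual table.

```python
import numpy as np

# Reference masses used in the table definitions above (MeV).
M_N = 938.0      # nucleon mass, reference for E/B (item 15)
M_u = 931.494    # atomic mass unit, reference for F/B (item 16)

def check_row(log10_rho, T, S_per_B, E_per_B, F_per_B):
    """Sanity-check one EOS table row (units as in the list above)."""
    # Item 1: rho_B = M_u * n_B, so the baryon number density in fm^-3 is
    rho_cgs = 10.0**log10_rho                 # g/cm^3
    n_B = rho_cgs / 1.66054e-24 * 1.0e-39     # nucleons/cm^3 -> fm^-3
    # F = E - T*S, corrected for the different reference masses:
    F_expected = E_per_B - T * S_per_B + (M_N - M_u)
    return n_B, F_expected - F_per_B          # residual should be ~0

n_B, res = check_row(log10_rho=14.6, T=10.0, S_per_B=1.2,
                     E_per_B=35.0, F_per_B=29.5)   # made-up numbers
print(f"n_B = {n_B:.4f} fm^-3, F-consistency residual = {res:.3f} MeV")
```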
### A.3 Data table of composition
In order to provide information on the appearance of baryons other than nucleons, we prepare a separate data table (***.rat) for the number fractions. The number fraction, \\(Y_{i}=n_{i}/n_{B}\\), is given as a function of \\((\\rho_{{}_{B}},T,Y_{C})\\) in the following order (a small consistency sketch follows the list).
1. Logarithm of baryon mass density: \\(\\log_{10}\\rho_{{}_{B}}\\) [g/cm\\({}^{3}\\)]
2. Temperature: \\(T\\) [MeV]
3. Charge ratio: \\(Y_{C}\\)
4. Neutron ratio (including neutrons in alpha): \\(Y_{n}\\)
5. Proton ratio (including protons in alpha): \\(Y_{p}\\)
6. \\(\\Lambda\\) ratio: \\(Y_{\\Lambda}\\)
7. \\(\\Sigma^{-}\\) ratio: \\(Y_{\\Sigma^{-}}\\)
8. \\(\\Sigma^{0}\\) ratio: \\(Y_{\\Sigma^{0}}\\)
9. \\(\\Sigma^{+}\\) ratio: \\(Y_{\\Sigma^{+}}\\)
10. \\(\\Xi^{-}\\) ratio: \\(Y_{\\Xi^{-}}\\)
11. \\(\\Xi^{0}\\) ratio: \\(Y_{\\Xi^{0}}\\)
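A minimal sketch of how one might check a row of this composition table; the column names, the placeholder row, and the helper name `check_composition` are assumptions for illustration, and the conservation checks apply to uniform matter, where the fractions are simple ratios \\(n_{i}/n_{B}\\).

```python
# Column order of the '***.rat' table as listed above.
COLS = ["log10_rho", "T", "Y_C", "Y_n", "Y_p",
        "Y_Lam", "Y_Sig-", "Y_Sig0", "Y_Sig+", "Y_Xi-", "Y_Xi0"]

def check_composition(row):
    """Baryon-number and charge checks for one row (uniform matter)."""
    y = dict(zip(COLS, row))
    baryon_sum = sum(y[c] for c in COLS[3:])                  # should be 1
    charge_sum = y["Y_p"] + y["Y_Sig+"] - y["Y_Sig-"] - y["Y_Xi-"]
    return baryon_sum, charge_sum                             # latter = Y_C

row = [14.6, 10.0, 0.4, 0.56, 0.40, 0.04, 0, 0, 0, 0, 0]      # placeholder
b, c = check_composition(row)
print(f"sum of baryon fractions = {b:.3f} (should be 1)")
print(f"hadronic charge fraction = {c:.3f} (should equal Y_C = {row[2]})")
```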
### A.4 Treatment of leptons
We briefly describe the contributions of leptons (electrons, muons and neutrinos) in the current study.
We remark that we take muons into account in the case of cold neutron stars. Muons appear abundantly at densities above the threshold where the electron chemical potential exceeds the muon rest mass. The chemical potentials of electrons and muons are related by
\\[\\mu_{\\mu}=\\mu_{e}.\\] (A.6) This is because neutrinos freely escape from the neutron star and are not trapped inside. Accordingly, the contributions of electrons and muons are taken into account in the discussions of cold neutron stars.
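As an illustration of this relation, here is a zero-temperature sketch of the muon number density implied by \\(\\mu_{\\mu}=\\mu_{e}\\) for a free, degenerate muon gas; interaction corrections from the RMF model are ignored, so the numbers are indicative only.

```python
import numpy as np

m_mu = 105.658   # muon rest mass [MeV]
hbarc = 197.327  # [MeV fm]

def muon_density(mu_e):
    """Muon number density [fm^-3] of a cold free Fermi gas with mu_mu = mu_e."""
    if mu_e <= m_mu:
        return 0.0                       # below threshold: no muons
    p_F = np.sqrt(mu_e**2 - m_mu**2)     # Fermi momentum [MeV]
    return p_F**3 / (3.0 * np.pi**2 * hbarc**3)

for mu_e in (100.0, 120.0, 200.0):
    print(f"mu_e = {mu_e:6.1f} MeV -> n_mu = {muon_density(mu_e):.5f} fm^-3")
```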
For the EOS table for core-collapse supernovae, we add the contributions of electrons, positrons and photons, while muons are not added for the following reason. In supernova cores, neutrinos are trapped and their Fermi energies become high and non-zero. If chemical equilibrium holds, the chemical potentials follow the relations
\\[\\mu_{\\mu}-\\mu_{\\nu_{\\mu}}=\\mu_{e}-\\mu_{\\nu_{e}},\\] (A.7)
\\[n_{\\mu}+n_{\\nu_{\\mu}}=0.\\] (A.8)
The latter relation comes from the fact that the net \\(\\mu\\)-type lepton number is zero. Because of the positive value of \\(\\mu_{\\nu_{e}}\\), the r.h.s. of the chemical equilibrium relation is reduced. In addition, the appearance of muons requires the production of \\(\\mu\\)-type anti-neutrinos and leads to a negative value of \\(\\mu_{\\nu_{\\mu}}\\). Therefore, the appearance of muons is suppressed in the supernova core.
Contributions of neutrinos are not added because their treatment depends on the density region. In the central part of the supernova core, neutrinos are trapped and chemical equilibrium is reached together with neutrinos. Outside the neutrino trapping surface, typically at \\(\\sim 10^{11}\\) g/cm\\({}^{3}\\), neutrinos escape freely and do not contribute. Their treatment also depends on the method of neutrino radiation transport used in numerical simulations.
## References
* [1] Glendenning N K 2000 \"Compact Stars: Nuclear Physics, Particle Physics and General Relativity\" (Springer-Verlag, Berlin, 2000), and references therein.
* [2] Schaffner J and Mishustin I 1996 _Phys. Rev._ C **53** 1416
* [3] Balberg S and Gal A 1997 _Nucl. Phys._ A **625** 435
* [4] Sahu P K and Ohnishi A 2001 _Nucl. Phys._ A **691** 439c
* [5] Nishizaki S, Takatsuka T and Yamamoto Y 2002 _Prog. Theor. Phys._**108** 703; Baldo M, Burgio G F and Schulze H J 2000 _Phys. Rev._ C **61** 055801; Vidana I, Polls A, Ramos A, Hjorth-Jensen M and Stoks V G J 2000 _Phys. Rev._ C **61** 025802;
* [6] Sugahara Y and Toki H 1994 _Prog. Theor. Phys._**92** 803
* [7] Shen H 2002 _Phys. Rev._ C **65** 035802
* [8] Kunihiro T, Takatsuka T, Tamagaki R and Tatsumi T 1993 _Prog. Theor. Phys. Suppl._**112** 123, and references therein.
* [9] Kaplan D B and Nelson A E 1986 _Phys. Lett._ B **175** 57; Lee C H 1996 _Phys. Rep._**275** 255; Weber F 2005, _Prog. Part. Nucl. Phys._**54** 193; Bailin D and Love A 1984 _Phys. Rept._**107** 325; Alford M G, Rajagopal K and Wilczek F 1999 _Nucl. Phys._ B **537** 443
* [10] Sumiyoshi K, Suzuki H, Yamada S and Toki H 2004 _Nucl. Phys._ A **730** 227
* [11] Sumiyoshi K, Yamada S, Suzuki H, Shen H, Chiba S and Toki H 2005 _Astrophys. J._**629** 922.
* [12] Sumiyoshi K, Yamada S, Suzuki H, Chiba S 2006 _Phys. Rev. Lett._**97** 091101
* [13] Sumiyoshi K, Yamada S, Suzuki H 2007 _Astrophys. J._**667** 382
* [14] For recent progresses, see, for example, Gyulassy M and McLerran L 2005 _Nucl. Phys._ A **750** 30
* [15] Lattimer J M and Swesty F D 1991 _Nucl. Phys._ A **535** 331
* [16] Shen H, Toki H, Oyamatsu K and Sumiyoshi K 1998 _Prog. Theor. Phys._ **100** 1013
* [17] Sugahara Y and Toki H 1994 _Nucl. Phys._ A **579** 557
* [18] Suzuki H 1994 \"Physics and Astrophysics of Neutrinos\", edited by M. Fukugita and A. Suzuki (Springer-Verlag, Berlin), 763
* [19] Pons J A, Reddy S, Prakash M, Lattimer J M, Miralles J A 1999 _Astrophys. J_**513** 780; Pons J A, Miralles J A, Prakash M, and Lattimer J M 2001 _Astrophys. J_**553** 382
* [20] Noumi H _et al_ 2002 _Phys. Rev. Lett._**89** 072301 [Erratum: _Phys. Rev. Lett._**90** (2003) 049902]; Saha P K _et al_ 2004 _Phys. Rev._ C **70** 044613
* [21] Harada T and Hirabayashi Y 2006 _Nucl. Phys._ A **759** 143; _Nucl. Phys._ A **767** 206
* [22] Kohno M _et al_ 2004 _Prog. Theor. Phys._**112** 895; Kohno M, Fujiwara Y, Watanabe Y, Ogata K and Kawai M 2006 _Phys. Rev._ C **74** 064613
* [23] Maekawa H 2008 _PhD Thesis_ Hokkaido University; Maekawa H, Tsubakihara K, Matsumiya H and Ohnishi A 2008 in preparation
* [24] Aoki S _et al._ 1995 _Phys. Lett._ B **355** p 45
* [25] Fukuda T _et al._ 1998 _Phys. Rev._ C **58** p 1306; Khaustov P _et al._ 2000 _Phys. Rev._ C **61** p 054603
* [26] Maekawa H, Tsubakihara and Ohnishi A 2007 _Euro. Phys. J._ A **33** 269 (_Preprint_ nucl-th/0701066)
* [27] Maekawa H, Tsubakihara K, Matsumiya H and Ohnishi A 2007 (_Preprint_ arXiv:0704.3929 [nucl-th])
* [28] Danielewicz P, Lacey R and Lynch W G 2002 _Science_**298** 1592
* [29] Sahu P K, Cassing W, Mosel U and Ohnishi A 2000 _Nucl. Phys._ A **672** 376
* [30] Isse M, Ohnishi A, Otuka N, Sahu P K and Nara Y 2005 _Phys. Rev._ C **72** 064908
* [31] Niksic T, Vretenar D and Ring P 2002 _Phys. Rev._ C **66** 064302
* [32] Brockmann R and Machleidt R 1990 _Phys. Rev._ C **42** 1965
* [33] Nagae T _et al_ 1998 _Phys. Rev. Lett._**80** 1605; Harada T, Shinmura S, Akaishi Y and Tanaka H 1990 _Nucl. Phys._ A **507** 715; Harada T 1998 _Phys. Rev. Lett._**81** 5287
* [34] Batty C J, Friedman E and Gal A 1994 _Phys. Lett._ B **335** 273; _Prog. Theor. Phys. Suppl._**117** 227
* [35] Mares J, Friedman E, Gal A and Jennings B K 1995 _Nucl. Phys._ A **594** 311
* [36] Tsubakihara K, Maekawa H, Ohnishi A 2007 _Euro. Phys. J._ A **33** 295 (_Preprint_ nucl-th/0702008)
* [37] Kohno M _et al_ 2000 _Nucl. Phys._ A **674** 229
* [38] Rikovska-Stone J, Guichon P A M, Matevosyan H H and Thomas A W 2007 _Nucl. Phys._ A **792** 341
* [39] Kaiser N 2005 _Phys. Rev._ C **71** 068201
* [40] Knorren R, Prakash M and Ellis P J 1995 _Phys. Rev._ C **52** 3470
* [41] Menezes D P and Providencia C 2003 _Phys. Rev._ C **68** 035804
* [42] Sumiyoshi K, Ishizuka C, Ohnishi A, Yamada S, Suzuki H 2008, in preparation.
* [43] Suzuki K _et al._ 2004 _Phys. Rev. Lett._**92** 072302; Hirenzaki S, Toki H and Yamazaki T 1991 _Phys. Rev._ C **44** 2472; Itahashi K _et al._ 2000 _Phys. Rev._ C **62** 025202
* [44] Batty C J, Friedman E and Gal A 1983 _Nucl. Phys._ A **402** 411
* [45] Kienle P and Yamazaki T 2004 _Prog. Part. Nucl. Phys._**52** 85
* [46] Friedman E _et al._ 2004 _Phys. Rev. Lett._**93** 122302
* [47] Woosley S E and Weaver T A 1995 _Astrophys. J. Suppl._**101** 181
* [48] Thorsson V, Prakash M and Lattimer J M 1994 _Nucl. Phys._ A **572** 693 (Erratum-ibid. A **574** 851) Keil W and Janka H T 1995 _Astron. Astrophys._**296** 145 Prakash M, Cooke J R and Lattimer J M 1995 _Phys. Rev._ D **52** 661 Prakash M, Bombaci I, Prakash M, Ellis P J, Lattimer J M and Knorren R 1997 _Phys. Rept._**280** 1 Vidana I, Bombaci I, Polls A and Ramos A 2003 _Astron. Astrophys._**399** 687
* [49] Takahashi H _et al_ 2001 _Phys. Rev. Lett._**87** 212502
* [50] Tsubakihara K and Ohnishi A 2007 _Prog. Theor. Phys._**117** 903 (_Preprint_ nucl-th/0607046)
* [51] Ishizuka C, Ohnishi A and Sumiyoshi K 2003 _Nucl. Phys._ A **723** 517
**Multiclass Approaches for Support Vector Machine Based Land Cover Classification**
Mahesh Pal
Lecturer, Department of Civil engineering
National Institute of Technology
Kurukshetra, 136119, Haryana (INDIA)
[email protected]
## 1 Introduction
A new classification system based on statistical learning theory (Vapnik, 1995), called the support vector machine (Boser _et al._, 1992), has recently been applied to the problem of remote sensing data classification (Foody and Mathur, 2004; Gualtieri and Cromp, 1998; Huang et al., 2002; Pal and Mather, 2003; Zhu and Blumberg, 2002). This technique is said to be independent of the dimensionality of the feature space, as the main idea behind it is to separate the classes with a surface that maximises the margin between them, using boundary pixels to create the decision surface. The data points that are closest to the hyperplane are termed "support vectors". The number of support vectors is thus small, as they are points close to the class boundaries (Vapnik, 1995). One major advantage of support vector classifiers is the use of quadratic programming, which provides global minima only. The absence of local minima is a significant difference from neural network classifiers. Like neural classifiers, the application of SVMs to any classification problem requires the determination of several user-defined parameters: the choice of a suitable multiclass approach, the choice of an appropriate kernel and related parameters, the determination of a suitable value of the regularisation parameter (i.e., C) and a suitable optimisation technique. SVMs were initially developed to perform binary classification; however, applications of binary classification are very limited. Most practical applications involve multiclass classification, especially in remote sensing land cover classification. A number of methods have been proposed to implement SVMs for multiclass classification. Most of the research on generating multiclass support vector classifiers can be divided into two categories. One approach involves constructing several binary classifiers and combining their results, while the other approach considers all data in one optimisation formulation. This paper compares the performance of several multiclass approaches in terms of classification accuracy and computational cost for land cover classification using remote sensing data.
## 2 Support Vector Machines
SVMs are based on statistical learning theory and have the aim of determining the location of decision boundaries that produce the optimal separation of classes (Vapnik, 1995). In the case of a two-class pattern recognition problem in which the classes are linearly separable, the SVM selects from among the infinite number of linear decision boundaries the one that minimises the generalisation error. Thus, the selected decision boundary will be the one that leaves the greatest margin between the two classes, where margin is defined as the sum of the distances to the hyperplane from the closest points of the two classes (Vapnik, 1995). This problem of maximising the margin can be solved using standard Quadratic Programming (QP) optimisation techniques. The data points that are closest to the hyperplane are used to measure the margin; hence these data points are termed 'support vectors'. Consequently, the number of support vectors is small (Vapnik, 1995).
If the two classes are not linearly separable, the SVM tries to find the hyperplane that maximises the margin while, at the same time, minimising a quantity proportional to the number of misclassification errors. The trade-off between margin and misclassification error is controlled by a user-defined constant (Cortes and Vapnik, 1995). SVM can also be extended to handle non-linear decision surfaces. Boser et al. (1992) propose a method of projecting the input data onto a high-dimensional feature space using kernel functions (Vapnik, 1995) and formulating a linear classification problem in that feature space. A more detailed discussion of the computational aspects of SVM can be found in Vapnik (1995) and Cristianini and Shawe-Taylor (2000).
SVM were initially designed for binary (two-class) problems. When dealing with multiple classes, an appropriate multi-class method is needed. Vapnik (1995) suggested comparing one class with the others taken together. This strategy generates \\(n\\) classifiers, where \\(n\\) is the number of classes. The final output is the class that corresponds to the SVM with the largest margin, as defined above. For multi-class problems one has to determine \\(n\\) hyperplanes. Thus, this method requires the solution of \\(n\\) QP optimisation problems, each of which separates one class from the remaining classes. This strategy can be described as 'one against the rest'.
A second approach is to combine several classifiers ('one against one'). Knerr et al. (1990) perform pair-wise comparisons between all \\(n\\) classes. Thus, all possible two-class classifiers are evaluated from the training set of \\(n\\) classes, each classifier being trained on only two out of the \\(n\\) classes, giving a total of _n(n-1)/2_ classifiers. Applying each classifier to the test data vectors gives one vote to the winning class. The data is assigned the label of the class with the most votes. The results of a recent analysis of multi-class strategies are provided by Hsu and Lin (2002).
**3. SVM for Multiclass Classification**
Originally, SVMs were developed to perform binary classification. However, applications of binary classification are very limited especially in remote sensing land cover classification where most of the classification problems involve more than two classes. A number of methods to generate multiclass SVMs from binary SVMs have been proposed by researchers and is still a continuing research topic. This section provides a brief description of some methods implemented to solve multi-class classification problem with SVM in present study.
**3.1 One against the Rest approach**
This method is also called _winner-take-all_ classification. Suppose the dataset is to be classified into \\(M\\) classes. Therefore, \\(M\\) binary SVM classifiers may be created where each classifier is trained to distinguish one class from the remaining \\(M\\)-1 classes. For example, class one binary classifier is designed to discriminate between class one data vectors and the data vectors of the remaining classes. Other SVM classifiers are constructed in the same manner. During the testing or application phase, data vectors are classified by finding margin from the linear separating hyperplane. The final output is the class that corresponds to the SVM with the largest margin.
However, if the outputs corresponding to two or more classes are very close to each other, those points are labeled as _unclassified_, and a subjective decision may have to be made by the analyst. Otherwise, a reject decision (Scholkopf and Smola, 2002) may be applied using a threshold to decide the class label. This multiclass method has the advantage that the number of binary classifiers to construct equals the number of classes. However, there are some drawbacks. First, during the training phase, the memory requirement is very high, scaling with the square of the total number of training samples. This may cause problems for large training data sets and may lead to computer memory problems. Second, suppose there are \\(M\\) classes and each has an equal number of training samples. During the training phase, the ratio of training samples of one class to the rest of the classes will be \\(1\\!:\\!\\left(M-1\\right)\\). This ratio, therefore, shows that the training sample sizes will be unbalanced. Because of these limitations, the _one against one_ approach of multiclass classification has been proposed.
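As a concrete, minimal illustration of the winner-take-all scheme (a sketch using scikit-learn, not the implementation used in this study; the helper names are hypothetical):

```python
import numpy as np
from sklearn.svm import SVC

def train_one_vs_rest(X, y, classes, **svm_kw):
    """Train one binary SVM per class: that class (+1) vs. the rest (-1)."""
    return {c: SVC(kernel="rbf", **svm_kw).fit(X, np.where(y == c, 1, -1))
            for c in classes}

def predict_one_vs_rest(models, X):
    """Winner-take-all: pick the class whose SVM gives the largest margin."""
    labels = sorted(models)
    margins = np.column_stack([models[c].decision_function(X) for c in labels])
    return np.asarray(labels)[np.argmax(margins, axis=1)]
```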
### 3.2 One against One Approach
In this method, SVM classifiers for all possible pairs of classes are created (Knerr _et al._, 1990; Hastie and Tibshirani, 1998). Therefore, for \\(M\\) classes, there will be \\(M(M-1)/2\\) binary classifiers. The output from each classifier in the form of a class label is obtained. The class label that occurs the most is assigned to that data vector. In case of a tie, a tie-breaking strategy may be adopted. A common tie-breaking strategy is to randomly select one of the class labels that are tied.
The number of classifiers created by this method is generally much larger than in the previous method. However, the number of training data vectors required for each classifier is much smaller. The ratio of the training data size for one class against another is also \\(1\\!:\\!1\\). Therefore, this method is considered more symmetric than the one-against-the-rest method. Moreover, the memory required to create the kernel matrix is much smaller. However, the main disadvantage of this method is the increase in the number of classifiers as the number of classes increases. For example, for 7 classes of interest, 21 classifiers will be created.
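A minimal sketch of the pairwise training and majority voting described above (again with scikit-learn and hypothetical helper names):

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def train_one_vs_one(X, y, classes, **svm_kw):
    """Train M(M-1)/2 pairwise SVMs, each on data from two classes only."""
    models = {}
    for a, b in combinations(classes, 2):
        mask = (y == a) | (y == b)
        models[(a, b)] = SVC(kernel="rbf", **svm_kw).fit(X[mask], y[mask])
    return models

def predict_one_vs_one(models, X, classes):
    """Each pairwise classifier casts one vote; the majority class wins
    (ties broken by argmax, i.e. the first tied class)."""
    votes = np.zeros((len(X), len(classes)), dtype=int)
    index = {c: i for i, c in enumerate(classes)}
    for m in models.values():
        for row, label in enumerate(m.predict(X)):
            votes[row, index[label]] += 1
    return np.asarray(classes)[votes.argmax(axis=1)]
```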
**3.3 Decision Directed Acyclic Graph based Approach**
Platt _et al._ (2000) proposed a multiclass classification method called _Directed Acyclic Graph SVM_ (DAGSVM) based on the _Decision Directed Acyclic Graph_ (DDAG) structure that forms a tree-like structure. The DDAG method in essence is similar to pairwise classification such that, for an \\(M\\) class classification problem, the number of binary classifiers is equal to \\(\\dfrac{1}{2}\\,M\\left(M-1\\right)\\)and each classifier is trained to classify two classes of interest. Each classifier is treated as a node in the graph structure. Nodes in DDAG are organized in a triangle with the single root node at the top and increasing thereafter in an increment of one in each layer until the last layer that will have \\(M\\) nodes.
The DDAG evaluates an input vector \\(\\mathbf{x}\\) starting at the root node and moves to the next layer based on the output values. For instance, it exits to the left edge if the output from the binary classifier is negative, and it exits to the right edge if the output from the binary classifier is positive. The binary classifier of the next node is then evaluated. The path followed is called the _evaluation path_. The DDAG method basically eliminates one class from a list at each node. Initially the list contains all classes. Each node evaluates the first class against the last class in the list. For example, the root node evaluates class 1 against class \\(M\\). If the evaluation favours one of the two classes, the other is eliminated from the list. The process then tests the first and the last class in the new list, and it terminates when only one class remains in the list. The class label associated with the input data is the class label of the node in the final layer of the evaluation path, i.e. the class remaining in the list. Although the number of binary classifiers still equals that of the pairwise classification method, the inputs are evaluated only \\(M-1\\) times instead of \\(\\dfrac{1}{2}\\,M\\big{(}M-1\\big{)}\\) times as is the case with pairwise classification.
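The elimination logic can be sketched in a few lines, reusing the pairwise models from the one-against-one sketch above (`predict_ddag` is a hypothetical helper name):

```python
def predict_ddag(models, x, classes):
    """Classify one sample with a Decision DAG: each node tests the first
    class in the candidate list against the last and removes the loser,
    so only M-1 pairwise classifiers are evaluated."""
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        key = (a, b) if (a, b) in models else (b, a)
        winner = models[key].predict([x])[0]
        remaining.remove(a if winner == b else b)   # eliminate the loser
    return remaining[0]
```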
### Multiclass Objective Function
Instead of creating many binary classifiers to determine the class labels, this method attempts to directly solve a multiclass problem (Weston and Watkins, 1998, Lee _et al._, 2001; Crammer and Singer, 2001; Scholkopf and Smola, 2002). This is achieved by modifying the binary class objective function and adding a constraint to it for every class. The modified objective function allows simultaneous computation of multiclass classification and is given by (Weston and Watkins, 1998),
\\[\\min_{\\mathbf{w},b,\\xi}\\ \\frac{1}{2}\\sum_{m=1}^{M}\\left\\|\\mathbf{w}_{m}\\right\\|^{2}+C\\sum_{i=1}^{k}\\sum_{m\\neq y_{i}}\\xi_{i}^{m}\\]
subject to
\\[\\mathbf{w}_{y_{i}}\\cdot\\mathbf{x}_{i}+b_{y_{i}}\\geq\\mathbf{w}_{m}\\cdot\\mathbf{x}_{i}+b_{m}+2-\\xi_{i}^{m},\\qquad\\xi_{i}^{m}\\geq 0,\\]
for \\(i=1,\\ldots,k\\) and \\(m\\in\\{1,\\ldots,M\\}\\setminus\\{y_{i}\\}\\), where \\(k\\) is the number of training samples, \\(M\\) is the number of classes and \\(y_{i}\\) is the class label of training sample \\(\\mathbf{x}_{i}\\).
Therefore, it may be able to handle massive data sets, but the memory requirement, and thus the computational time, may be very high.
To summarize, it may be said that the choice of a multiclass method depends on the problem at hand. A user should consider the accuracy requirement, the computational time, the resources available and the nature of the problem. For example, the multiclass objective function approach may not be suitable for a problem that contains a large number of training samples and classes, due to the requirement of large memory and extremely long computational time.
### 3.5 Error-Correcting Output Code based approach
The concept of Error-Correcting Output Coding (ECOC) based multi-class methods is to apply binary (two-class) classifiers to solve multi-class classification problems. This approach works by converting an \\(M\\)-class classification problem into a large number \\(L\\) of two-class classification problems. ECOC assigns a unique code word to each class instead of assigning each class a label. An \\((L, M, d)\\) error-correcting code is \\(L\\) bits long and has \\(M\\) unique code words with a Hamming distance of \\(d\\). The Hamming distance between two code words is the number of bit positions in which they differ. In a classification problem, \\(M\\) is the number of classes and \\(L\\) is a number decided by the method used to generate the error-correcting codes. Several methods such as Hadamard-matrix codes, BCH codes (Bose and Ray-chauduri, 1960; Peterson and Weldon, 1972), random codes (James, 1998) and exhaustive codes (Dietterich and Bakiri, 1995) have been proposed to generate error-correcting codes. Dietterich and Bakiri (1995) proposed to use codes with maximum Hamming distance between each other and showed that \\(\\lfloor(d-1)/2\\rfloor\\) errors in the code words can be corrected for a Hamming distance \\(d\\) between the codes.
Decomposition of a \\(C\\)-class multi-class problem having \\(K_{1},\\ldots,K_{C}\\) as the class labels generates a set of \\(m\\) binary classifiers represented by \\(f_{1},\\ldots,f_{m}\\). A binary classifier subdivides the input patterns into two complementary super classes \\(K_{i}^{1}\\) and \\(K_{i}^{-1}\\), grouping together one or more classes of the multi-class problem. Let \\(M=\\left[b_{ij}\\right]\\) be a decomposition matrix of dimension \\(m\\times C\\), connecting classes \\(K_{1},\\ldots,K_{C}\\) to the super classes \\(K_{i}^{1}\\) and \\(K_{i}^{-1}\\), where an element of matrix \\(M\\) can be defined as:
\\[b_{ij}=\\left\\{\\begin{array}{ll}1&\\mbox{if }K_{j}\\subseteq K_{i}^{1}\\\\ -1&\\mbox{if }K_{j}\\subseteq K_{i}^{-1}\\end{array}\\right.\\]
Therefore, for \\(M\\) classes, a _coding matrix_\\(D\\in\\left\\{\\pm 1\\right\\}^{M\\times C}\\) is obtained.
When a new data vector is to be classified, the trained binary classifiers (or hypotheses) produce the estimated probability \\(e_{i}\\) that the test data comes from the \\(i\\)th super class one, thus producing a vector of probability estimates, \\(\\mathbf{e}=\\left(e_{1},e_{2},\\ldots,e_{m}\\right)^{T}\\), from all \\(m\\) binary classifiers. The test data vector is then assigned to the class whose code word is closest to \\(\\mathbf{e}\\), typically measured by the Hamming distance.
Allwein _et al._ (2000) proposed another scheme using a margin-based binary learning algorithm to replace the Hamming distance based decoding and proposed to use a _coding matrix_\\(D\\in\\left\\{+1,\\,0,-1\\right\\}^{M\\times C}\\) in place of \\(\\left\\{\\pm\\,1\\right\\}^{M\\times C}\\). This approach subdivides the input patterns into three complementary super classes \\(K_{i}^{\\ 1}\\), \\(K_{i}^{\\ -1}\\)and some classes with zero as class label. During training process, a classification algorithm is provided with training data set labeled as \\(K_{i}^{\\ 1}\\)and \\(K_{i}^{\\ -1}\\)while omitting all examples with class label zero.
Allwein _et al._ (2000) proposed two types of random codes. The first type, called dense codes, has \\(\\left\\lceil 10\\log_{2}C\\right\\rceil\\) columns for a problem with \\(C\\) classes. Dense random codes for each multiclass problem were chosen from \\(\\left\\{-1,1\\right\\}\\) by examining 10,000 random codes. These codes are chosen so as to have the largest Hamming distance and no identical columns. The second type, called sparse codes, was chosen randomly from \\(\\left\\{-1,\\,0,1\\right\\}\\) with \\(\\left\\lceil 15\\log_{2}C\\right\\rceil\\) columns. Codes were selected by examining 10,000 random codes, as in the case of dense coding, such that no code had a row or column containing only zeros and the Hamming distance was maximised. This study uses an exhaustive approach (Dietterich and Bakiri, 1995) as well as both approaches suggested by Allwein _et al._ (2000) to generate error-correcting output codes to solve a multiclass problem with support vector machines.
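To make the exhaustive variant concrete, a minimal sketch of code-word generation and Hamming decoding follows (scikit-learn based; a sketch under the construction described by Dietterich and Bakiri (1995), not the exact implementation used in this study):

```python
import numpy as np
from sklearn.svm import SVC

def exhaustive_code(M):
    """Exhaustive {+1,-1} code: 2^(M-1) - 1 columns for M classes;
    row 1 is all +1 and row i alternates runs of -1/+1 of length 2^(M-1-i)."""
    L = 2**(M - 1) - 1
    code = np.ones((M, L), dtype=int)
    for i in range(1, M):
        run = 2**(M - 1 - i)
        code[i, :] = np.tile(np.repeat([-1, 1], run), L // (2 * run) + 1)[:L]
    return code

def train_ecoc(X, y, classes, code, **svm_kw):
    """One binary SVM per column, with classes relabeled to super classes."""
    idx = {c: i for i, c in enumerate(classes)}
    rows = [idx[c] for c in y]
    return [SVC(kernel="rbf", **svm_kw).fit(X, code[rows, j])
            for j in range(code.shape[1])]

def predict_ecoc(models, X, classes, code):
    """Hamming decoding: pick the class whose code word is nearest."""
    preds = np.column_stack([m.predict(X) for m in models])       # (n, L)
    dists = (preds[:, None, :] != code[None, :, :]).sum(axis=2)   # (n, M)
    return np.asarray(classes)[dists.argmin(axis=1)]
```

For the seven land cover classes used below, `exhaustive_code(7)` gives \\(2^{6}-1=63\\) binary problems, consistent with the large training cost reported for this approach.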
## 4 Data and Analysis
For this study, Landsat-7 Enhanced Thematic Mapper (ETM+) data (19/06/2000) of an agricultural area near Littleport (Cambridgeshire), UK was used. An area of 307-pixel (columns) by 330-pixel (rows) covering the area of interest was used for this study.
The classification problem involved the identification of seven land cover types (wheat, potato, sugar beet, onion, peas, lettuce and beans). Field data printouts for the relevant crop seasons were collected from farmers and their representative agencies. The other areas were surveyed on the ground to prepare the ground reference image. A total of 4737 pixels were selected for all seven classes by using equalised random sampling. The pixels were then divided into two parts so as to remove any possible bias caused by using the same pixels for training and testing the classifiers. A total of 2700 training and 2037 test pixels were used. A radial basis kernel function with kernel width \\(\\gamma=2\\) and regularisation parameter C = 5000 was used. All the processing with support vector machines was done on a Windows-based Pentium IV machine with 256 MB of RAM.
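For orientation only, the quoted settings translate into a few lines of scikit-learn; the synthetic arrays merely stand in for the (non-public) training and test pixels, the choice of 6 band values per pixel is an assumption, and whether the paper's kernel width corresponds to scikit-learn's `gamma` in \\(\\exp(-\\gamma\\|x-x^{\\prime}\\|^{2})\\) is likewise an assumption.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)                      # placeholder data only
X_train, y_train = rng.normal(size=(2700, 6)), rng.integers(0, 7, 2700)
X_test, y_test = rng.normal(size=(2037, 6)), rng.integers(0, 7, 2037)

clf = SVC(kernel="rbf", gamma=2.0, C=5000.0)        # libsvm: one-against-one
clf.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.4f}")
```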
## 5 Result and Conclusions
Table 1 provides the classification accuracy and training time for the different multiclass approaches used in the present study. The results suggest that, except for the exhaustive-technique-based ECOC approach, all multiclass methods provide comparable results in terms of classification accuracy. An accuracy of 89% (kappa value = 0.87) is achieved with the exhaustive-technique-based ECOC approach, but at a large computational cost (806.6 minutes) in comparison with the _one vs. one_, _one vs. rest_, _Weston and Watkins_ and _DAG_ approaches. A classification accuracy of 87.9% (kappa value = 0.86) is achieved by _one vs. one_ with a very small training time of 6.4 seconds, while the _DAG_ approach requires a training time of 6.5 seconds and achieves a classification accuracy of 87.63%.
Further, the results using the exhaustive-technique-based ECOC approach are not significantly better than the _one vs. one_ approach in terms of classification accuracy. The approach suggested by Crammer and Singer (2001) requires a large training time (approx. 347 minutes) with no appreciable gain in classification accuracy in comparison with the other multiclass approaches. The sparse random coding approach provides a classification accuracy of 87.19%, in comparison with 85.32% for the dense coding approach, suggesting a performance comparable to the other multiclass approaches used in the present study.
Table 1. Classification accuracies achieved with the different multiclass approaches used in the present study.

| Multiclass approach | Classification accuracy (%) |
| --- | --- |
| one against one | 87.90 |
| one against rest | 86.55 |
| Directed Acyclic Graph | 87.63 |
| Bound constrained approach | 87.29 |
| Crammer and Singer approach | 87.43 |
| ECOC (exhaustive approach) | 89.00 |
| ECOC (dense coding approach) | 85.32 |
| ECOC (sparse coding approach) | 87.19 |
The present study examined six approaches for the solution of the multiclass classification problem using remote sensing data. The _one against one_ and _DAG_ approaches provide comparable accuracy and require almost the same computational resources. The training time taken by the _one against one_ and _DAG_ techniques is less than that of the _one against the rest_ strategy. This study also concludes that the highest classification accuracy is achieved with the exhaustive ECOC approach, but it requires a very large training time. A comparison of the accuracy achieved by the exhaustive ECOC approach suggests no significant improvement over the _one against one_ approach. The main problem with the _one against the rest_ strategy is that it may produce unclassified data, and hence lower classification accuracies. Finally, the results suggest the suitability of the _one against one_ approach for this type of data in terms of classification accuracy and computational cost. Further study is required to examine the usefulness of this approach with other types of remote sensing data as well as data with a large number of classes.
## References
* Allwein, E. L., Schapire, R. E., and Singer, Y., 2001, Reducing multiclass to binary: a unifying approach for margin classifiers. _Journal of Machine Learning Research_, 1:113-141.
* Bose and Ray-Chauduri (1960) Bose, R. C., and Ray-Chauduri, D. K., 1960, On a class of error-correcting binary group codes. _Information and Control_, 3, 68-79.
* Boser et al. (1992) Boser, H., Guyon, I. M., & Vapnik, V. N., 1992. A training algorithm for optimal margin classifiers. In Haussler, D., _Proceedings of the 5\\({}^{th}\\) Annual ACM Workshop on Computational Learning Theory_ 144-152. Pittsburgh, PA: ACM Press.
* Crammer and Singer (2001) Crammer, K. and Singer, Y.,2001, On the algorithmic implementation of multiclass kernel-based vector machines. _Journal of Machine Learning Research_, vol. 2, pp. 265-292
* Cristianini and Shawe-Taylor (2000) Cristianini, N. and Shawe-Taylor, J., 2000, _An Introduction to Support Vector Machines and other Kernel-based Learning Methods_. Cambridge, UK: Cambridge University Press.
* Dietterich and Bakiri (1995) Dietterich T. G. and Bakiri G., 1995, Solving multiclass learning problems via error correcting output codes. Journal of Artificial Intelligence Research, 2:263-286.
* Foody, G. M., and Mathur, A., 2004, A relative evaluation of multiclass image classification by support vector machines. _IEEE Transactions on Geoscience and Remote Sensing_, 42, 1335-1343.
* Gualtieri, J. A., and Cromp, R. F., 1998, Support vector machines for hyperspectral remote sensing classification. _Proceedings of the SPIE, 27th AIPR Workshop: Advances in Computer Assisted Recognition_, Washington, DC, October 14-16, 221-232.
* Hastie, T. J., and Tibshirani, R. J., 1998, Classification by pairwise coupling. _Advances in Neural Information Processing Systems_ (Jordan, M. I., Kearns, M. J., and Solla, S. A., eds.), vol. 10, The MIT Press.
* Huang, C., Davis, L. S., and Townshend, J. R. G., 2002, An assessment of support vector machines for land cover classification. _International Journal of Remote Sensing_, 23, 725-749.
* Hsu, C.-W., and Lin, C.-J., 2002, A comparison of methods for multi-class support vector machines. _IEEE Transactions on Neural Networks_, 13, 415-425.
* James, G., 1998, _Majority vote classifiers: Theory and Applications_. Ph.D. Thesis, Department of Statistics, Stanford University, Stanford, CA.
* Knerr, S., Personnaz, L., and Dreyfus, G., 1990, Single-layer learning revisited: A stepwise procedure for building and training a neural network. _Neurocomputing: Algorithms, Architectures and Applications_, NATO ASI, Berlin: Springer-Verlag.
* Lee, Y., Lin, Y., and Wahba, G., 2001, _Multicategory support vector machines_. Tech. Rep. 1043, Department of Statistics, University of Wisconsin, Madison, WI.
* Peterson, W. W., and Weldon, E. J. Jr., 1972, _Error correcting codes_. MIT Press, Cambridge, MA.
* Platt, J. C., Cristianini, N., and Shawe-Taylor, J., 2000, Large margin DAGs for multiclass classification. In S. A. Solla, T. K. Leen, and K.-R. Muller, editors, _Advances in Neural Information Processing Systems 12_, pp. 547-553, The MIT Press.
* Scholkopf, B., and Smola, A. J., 2002, _Learning with Kernels - Support Vector Machines, Regularization, Optimization and Beyond_. Cambridge, MA: The MIT Press.
* Vapnik, V. N.,1995, _The Nature of Statistical Learning Theory_. New York: Springer-Verlag.
* Weston, J. and Watkins, C., 1998, _Multi-class Support Vector Machines_. Royal Holloway, University of London, U. K., Technical Report CSD-TR-98-04.
* Zhu, G., and Blumberg, D. G., 2002, Classification using ASTER data and SVM algorithms; The case study of Beer Sheva, Israel. _Remote Sensing of Environment_, 80, 233-240.
# A Comprehensive View of the 2006 December 13 CME: From the Sun to Interplanetary Space
Y. Liu\\({}^{1,2}\\), J. G. Luhmann\\({}^{1}\\), R. Muller-Mellin\\({}^{3}\\), P. C. Schroeder\\({}^{1}\\), L. Wang\\({}^{1}\\), R. P. Lin\\({}^{1}\\), S. D. Bale\\({}^{1}\\), Y. Li\\({}^{1}\\), M. H. Acuna\\({}^{4}\\), and J.-A. Sauvaud\\({}^{5}\\)
\\({}^{1}\\) Space Sciences Laboratory, University of California, Berkeley, CA 94720, USA; [email protected]
\\({}^{2}\\) State Key Laboratory of Space Weather, Chinese Academy of Sciences, Beijing 100080, China
\\({}^{3}\\) Institut für Experimentelle und Angewandte Physik, Universität Kiel, Kiel, Germany
\\({}^{4}\\) NASA Goddard Space Flight Center, Greenbelt, Maryland, USA
\\({}^{5}\\) Centre d'Etude Spatiale des Rayonnements, Centre National de la Recherche Scientifique, Toulouse, France
Subject headings: shock waves -- solar-terrestrial relations -- solar wind -- Sun: coronal mass ejections -- Sun: radio radiation -- Sun: particle emission
## 1 Introduction
Coronal mass ejections (CMEs) are the most spectacular eruptions in the solar atmosphere and have been recognized as primary drivers of interplanetary disturbances. They are called interplanetary CMEs (ICMEs) when they move into the solar wind. Often associated with CMEs and ICMEs are radio bursts, shock waves, solar energetic particle (SEP) events, and prolonged southward magnetic field components. A southward field component can reconnect with geomagnetic fields and produce storms in the terrestrial environment (e.g., Dungey, 1961). Understanding CMEs and characterizing their interplanetary transport are crucial for space weather forecasting but require coordinated multi-wavelength observations in combination with in situ measurements.
The 2006 December 13 CME is the largest halo CME since the Halloween storm which occurred in October - November 2003 (e.g., Gopalswamy et al., 2005; Richardson et al., 2005; Lario et al., 2005), given the observed speeds of the CME and its forward shock, the time duration of the ICME at 1 AU, the SEP intensities and the angular extent of the shock (see §2 and §3). It is also the largest CME in the era of the Solar TErrestrial RElations Observatory (STEREO) up to the time of this writing. Different from the Halloween storm, this event is relatively isolated from other CMEs, so contamination by or mixing with other events is less pronounced; propagation into a solar wind environment near solar minimum would also make theoretical modeling easier. Accompanied by an X3.4 solar flare, the CME evolved into a magnetic cloud (MC) and produced significant space weather effects including SEP events, an interplanetary shock and radio bursts detected by various instruments aboard a fleet of spacecraft. Examining the evolution and propagation of this event through the heliosphere would provide benchmark studies for CMEs, associated phenomena and space weather.
The purpose of this work is to study the solar source and heliospheric consequences of this CME in the frame of the Sun-Earth connection. We combine EUV, coronagraph, radio, in situ particle, plasma and magnetic field measurements with modeling efforts in an attempt to give a comprehensive view of the event; particular attention is paid to tracking the CME/shock all the way from the Sun far into interplanetary space. We look at EUV and coronagraph images in §2. Evolution of the CME in the heliosphere and its effects on particle transport are investigated in §3. In §4, we combine different data and demonstrate how the CME/shock propagation can be tracked using coordinated observations and magnetohydrodynamic (MHD) modeling. The results are summarized and discussed in §5.
## 2 CME at the Sun
We look at CME observations from the Large Angle Spectroscopic Coronagraph (LASCO) and coronal observations from the Extreme-ultraviolet Imaging Telescope (EIT) aboard the SOlar and Heliospheric Observatory (SOHO). The Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) of STEREO was not turned on at the time of the CME. Figure 1 displays combined images as seen by EIT and LASCO/C2 during the CME times. The CME starts with a strong EUV brightening in the southwest quadrant at 02:30 UT. Around 02:54 UT, it forms a nearly complete halo; the EUV brightening is followed by a dimming which quickly spreads into a diffusive area of the solar disk. The CME moves further out around 03:06 UT and forms a spectacular ring of dense material. The good timing between the dimming and CME indicates that the reduced EUV brightness results from removal of the coronal plasma due to the lift-off of the CME (e.g., Thompson et al., 1998; Zarro et al., 1999). Depletion of the coronal material by CMEs is possible if the associated magnetic field is opened or stretched into interplanetary space as proposed by many CME models. Note that a small CME precedes the big event as can be seen in Figure 1 (left). Interactions of successive CMEs are thought to affect SEP production (e.g., Gopalswamy et al., 2004).
Faint diffuse EUV brightenings are also seen and appear to be propagation fronts of the dimming. These brightenings may represent coronal waves propagating away from the active region. They were first discovered by Neupert (1989) but popularized in EIT observations by Thompson et al. (1998); for that reason they have been referred to as "EIT waves". The low cadence of EIT observations (12 min for the 195 A band) does not allow an accurate determination of the speed of the waves. The brightenings moving toward the northeast hemisphere, however, seem to have a constant speed: they travel a distance of \\(\\sim\\)0.86 \\(R_{\\odot}\\) (solar radius) within 24 min from 02:24 UT to 02:48 UT and another 0.43 \\(R_{\\odot}\\) within 12 min from 02:48 UT to 03:00 UT (see Figure 1). The speed is estimated to be about 420 km s\\({}^{-1}\\). The CME speed projected on the sky is about 1774 km s\\({}^{-1}\\) as measured along a position angle of 193\\({}^{\\circ}\\) (counter clockwise from the north; see CME identification and parameters at the LASCO CME catalogue [http://cdaw.gsfc.nasa.gov](http://cdaw.gsfc.nasa.gov)), significantly larger than the EIT wave speed. The nature and origin of the coronal waves are not clear, although an unambiguous correlation between EIT waves and CMEs has been established (e.g., Biesecker et al., 2002). An observed metric type II burst starting at 02:27 UT, however, indicates that the present EIT wave is likely a shock wave.
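The quoted wave speed follows directly from the two distance-time pairs; a trivial arithmetic check (assuming \\(R_{\\odot}=6.96\\times 10^{5}\\) km):

```python
R_SUN_KM = 6.96e5  # solar radius in km

# Distance-time pairs read off the EIT frames quoted above.
for dist_rsun, minutes in [(0.86, 24.0), (0.43, 12.0)]:
    speed = dist_rsun * R_SUN_KM / (minutes * 60.0)
    print(f"{dist_rsun} R_sun in {minutes:.0f} min -> {speed:.0f} km/s")
# Both pairs give ~416 km/s, i.e. about 420 km/s as stated in the text.
```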
A closer look at the images also reveals a sharp edge all the way around the CME front (see middle and right panels of Figure 1), reminiscent of shock signature. Given the fast expansion of the CME and a density at 1 AU comparable to the ambient solar wind (seeFigure 3), the CME density near the Sun must be much larger than the ambient density; the density increase due to shock compression is at most a factor of 4 of the background medium. The sheath region (a transition layer between the CME front and shock) should thus have a brightness weaker than the CME, consistent with the coronagraph observations. Therefore, the sharp white-light feature is likely the CME-driven shock. It is very rare to see the shock in white light, especially for halo CMEs (e.g., Vourlidas et al., 2003). Relationship between the EIT waves, metric type II burst and white-light shock will be further investigated in a separate work.
The CME is accompanied by an X3.4 solar flare located in the active region NOAA 10930 (S06\\({}^{\\circ}\\)W23\\({}^{\\circ}\\)). The flare seems to be induced by a strong shear in the magnetic field associated with a filament eruption, leading to two large ribbons which are twisted but largely horizontal around the filament channel (e.g., Zhang et al., 2007; Kosovichev & Sekii, 2007). We will compare the orientation of the filament channel with reconstruction of the associated MC observed in situ (see SS3.1 and Li et al., 2007)
## 3 Interplanetary Consequences
After the abrupt formation in the solar corona, the CME propagates into the interplanetary medium and is observed in situ by STEREO, ACE and Ulysses. We infer the ICME structure from in situ measurements of plasma and magnetic field parameters combined with a flux-rope reconstruction model. Connectivity of the ICME back to the Sun is indicated by energetic particles which could be channeled, constrained and reaccelerated by the transient structure.
### ICME at 1 AU
STEREO observed the ICME after an exit from the terrestrial magnetosheath. Figure 2 shows STEREO in situ measurements across the event from the Solar Wind Electron Analyzer (SWEA; Sauvaud et al., 2007) and the magnetometer (MAG; Acuna et al., 2007) of the in situ measurements of particles and CME transients investigation (IMPACT; Luhmann et al., 2007). STEREO A and B were not well separated, so they observed essentially the same structure. The plasma parameters (e.g., density, velocity and temperature) are not available from STEREO for that time period. Bi-directional streaming electrons (BDEs) seem coincident with the strong magnetic fields, indicative of closed field lines within the event; rotation of the field (see the field elevation angle) indicates an MC. The MC in terval is determined from the BDEs but also consistent with the reduced field variance and the rotation of the field. The magnetic field has a significant negative (southward) component which caused a major geomagnetic storm with \\(D_{st}\\sim-190\\) nT. Interestingly, there is a current sheet (indicated by the peak of the field elevation angle) within the MC, which might be due to the passage of the comet McNaught through the event (C. T. Russell, private communication). The magnetic field trailing behind the MC has a roughly constant direction, presumably stretched by the MC because of its large speed. A preceding shock, as can be seen from simultaneous increases in the electron flux and magnetic field strength, passed the spacecraft at 14:38 UT on December 14. The transit time is about 36 hr from the Sun to the Earth (assuming a launch time 02:30 UT on December 13), suggesting an average speed of \\(\\sim\\)1160 km s\\({}^{-1}\\). This speed is significantly smaller than the white-light speed close to the Sun (1774 km s\\({}^{-1}\\)) but larger than the shock speed at 1 AU (1030 km s\\({}^{-1}\\); see below), so the shock must be decelerated as the high speed flow overtakes the preceding solar wind. Propagation of the shock as well as the deceleration is calculated in SS4.
Complementary plasma parameters from ACE are displayed in Figure 3; the magnetic field is almost the same as measured at STEREO. The ICME interval is identified by combining the enhanced helium/proton density ratio and depressed proton temperature (as compared with the normal temperature expected from the observed speed); the boundaries also agree with the discontinuities in the density, bulk speed and magnetic field. The resulting radial width (average speed times the duration) is about 0.67 AU; the MC indicated by STEREO BDEs is about the first half of the time interval. The preceding shock is also apparent from the plasma parameters. The shock speed is about 1030 km s\\({}^{-1}\\) as calculated from the conservation of mass across the shock, i.e., \\(v_{s}=(n_{2}v_{2}-n_{1}v_{1})/(n_{2}-n_{1})\\), where \\(n_{1}=1.8\\) cm\\({}^{-3}\\), \\(n_{2}=6.0\\) cm\\({}^{-3}\\), \\(v_{1}=573\\) km s\\({}^{-1}\\) and \\(v_{2}=896\\) km s\\({}^{-1}\\) are average densities and speeds upstream (1) and downstream (2) of the shock, respectively. As shown above, the shock has to be decelerated when propagating from the Sun to the Earth. A least squares fit of the plasma and magnetic field data across the shock to the Rankine-Hugoniot relations (Vinas & Scudder, 1986) gives a shock normal with elevation angle \\(\\theta\\simeq-12.7^{\\circ}\\) and azimuthal angle \\(\\phi\\simeq 308^{\\circ}\\) in RTN coordinates (with \\(\\mathbf{R}\\) pointing from the Sun to the spacecraft, \\(\\mathbf{T}\\) parallel to the solar equatorial plane and along the planet motion direction, and \\(\\mathbf{N}\\) completing the right-handed system). The shock normal makes an angle of about \\(56^{\\circ}\\) with the upstream magnetic field, so the shock may be quasi-perpendicular. Liu et al. (2006a) find that the sheath regions between fast ICMEs and their preceding shocks are analogous to planetary magnetosheaths and often characterized by plasma depletion layers and mirror-mode waves. The proton density in the sheath of the current event first increases and then decreases quickly close to the MC, very similar to the case shown in their Figure 3. Magnetic fluctuations in the sheath appear consistent with mirror-mode waves but note that large depressions are due to current sheet crossings; a quasi-perpendicular shock would heat the plasma preferentially in the direction perpendicular to the field, so the plasma downstream of the shock may be unstable to the mirror-mode instability (e.g., Liu et al., 2007). There seems another ICME on December 17 as can be seen from the low proton temperature and declining speed. Interactions between the two events are likely present.
We reconstruct the MC structure using the Grad-Shafranov (GS) technique which includes the thermal pressure and can give a cross section without prescribing the geometry (e.g., Hau & Sonnerup, 1999; Hu & Sonnerup, 2002). This method relies on the feature that the thermal pressure and the axial magnetic field depend on the vector magnetic potential only (Schindler et al., 1973; Sturrock, 1994), which has been validated by observations from STEREO and ACE/Wind when these spacecraft are well separated (Liu et al., 2008). We apply the method to the plasma and magnetic field data between 22:48 UT on December 14 (the MC leading edge) and 04:34 UT on December 15 (right before the current sheet within the MC) when the magnetic field has the clearest rotation. All the data used in the reconstruction are from the MC interior. The reconstruction results are illustrated in Figure 4. The recovered cross section (in a flux-rope frame with \\(\\mathbf{x}\\) along the spacecraft trajectory and \\(\\mathbf{z}\\) in the direction of the axial field) shows nested helical field lines, suggestive of a flux-rope structure. The spacecraft (ACE and STEREO) seem to cross the MC close to the axis with an impact parameter of 0.01 AU; the maximum axial field is also very close to the leading edge (\\(x=0\\)). The transverse fields along the spacecraft path indicate a left-handed chirality. The reconstruction gives an axis elevation angle \\(\\theta\\simeq-57^{\\circ}\\) and azimuthal angle \\(\\theta\\simeq 261^{\\circ}\\) in RTN coordinates, as shown in Figure 4 (right). Since the axial field points southward and the field configuration is left-handed, a spacecraft would see a field which is first most negative and then becomes less negative as the MC passes the spacecraft along the radial direction. These results are consistent with observations (see Figures 2 and 3). We also apply the method to a larger interval inside the MC (with the current sheet excluded) and obtain a similar cross section and axis orientation. Note that the axis orientation is oblique to the filament channel; similar results are obtained for other cases (e.g., Wang et al., 2006). The MC axis orientation depends on which part of the CME is observed in situ, or the CME possibly rotates during the propagation in the heliosphere.
### SEP Events
Major SEP events are observed at 1 AU during the ICME passage. The ICME structure as well as its effects on energetic particle transport can also be inferred from particle measurements. Figure 5 shows the particle intensities measured by the Electron, Proton,and Alpha Monitor (EPAM) of ACE and the Solar Electron and Proton Telescope (SEPT) of STEREO A. Four particle enhancements are evident. The timing with the flares and shocks indicates that the first particle enhancement is associated with the injection from the X3.4 flare (02:38 UT on December 13), the second one associated with the ICME forward shock (14:38 UT on December 14), the third one with an X1.5 flare (22:14 UT on December 14) which occurred in the same active region (NOAA 10930; S06\\({}^{\\circ}\\)W46\\({}^{\\circ}\\)), and the fourth one with the shock downstream of the ICME (17:23 UT on December 16; see Figures 2 and 3). The X3.4 flare produced an intense electron flux which declines during a long time period; velocity dispersion is not clear in the electrons but present in the protons. Note that the CME-driven shock should be producing energetic particles throughout the interplanetary transit. It continues to accelerate protons to \\(\\sim\\)MeV energies at 1 AU but appears to have a small effect on the electrons, probably because the electron enhancement at the shock is masked by the large preshock intensities. There is an apparent exclusion of the protons from the ICME interior (see second panel), presumably screened off by the strong fields within the ICME; the proton signature also seems consistent with the BDE interval. Interestingly, there is an intensity enhancement of the electrons within the ICME due to the X1.5 flare; it is likely that the electrons stream along the field line from the active region and are trapped inside the ICME (e.g., Kahler & Reames, 1991; Larson et al., 1997), so the ICME may still be magnetically connected to the Sun. These features are very similar to the observations during the Halloween storm (e.g., Malandraki et al., 2005; McKibben et al., 2005).
The two bottom panels in Figure 5 show anisotropy information of the particles provided by STEREO A/SEPT. SEPT has two separate telescopes, one looking in the ecliptic plane along the nominal Parker spiral field toward and away from the Sun and the other looking vertical to the plane toward the south and north, respectively (Muller-Mellin et al., 2007). The lines in the two panels represent the intensity differences between the two directions for each telescope (i.e., differences between south and north and between the two opposite directions along the Parker field); the intensity difference is normalized by 4000 cm\\({}^{-2}\\) s\\({}^{-1}\\) sr\\({}^{-1}\\) MeV\\({}^{-1}\\) for the electrons and 1000 cm\\({}^{-2}\\) s\\({}^{-1}\\) sr\\({}^{-1}\\) MeV\\({}^{-1}\\) for the protons. The data before 18:10 UT on December 14 are discarded, because the SEPT doors were closed since the launch of STEREO and were opened one by one from 17:32 UT to 18:10 UT on December 14. The anisotropy information of the particles is thus not available before 18:10 UT on December 14. The X1.5 flare produced electrons and protons moving largely anti-sunward at 1 AU, but some anti-sunward particles may be mirrored back by the enhanced magnetic fields in the ICME/sheath and then propagate in the sunward direction. A beam of particles accelerated at the second shock, which may also be deflected by the first ICME, move sunward and northward.
Further information about the electron behavior across the ICME is provided by SWEAand the suprathermal electron instrument (STE; Lin et al., 2008) on board STEREO as shown in Figure 6. The electron energies range from \\(\\sim\\)1 eV to 100 keV. The pitch angle distribution of 247 eV electrons displayed in Figure 2 is measured by SWEA in the spacecraft frame; BDE signatures are discernible from \\(\\sim\\)50 eV to \\(\\sim\\)1.7 keV for this event. There seems a flux decrease associated with the BDE interval; the high-energy electrons from STE recover before the trailing edge of the BDE interval, earlier than the low-energy electrons. The electron intensity profiles are quantitatively different from what is shown in Figure 5, which may be due to different looking directions of the detectors and different connectivity of the spacecraft via the magnetic field line to the Sun.
The above results suggest that the particle transport is largely governed by the large-scale transient structures. The particles can be deflected and constrained by ICMEs and reaccelerated by their associated shocks, so modeling of the particle transport is complicated by the presence of these structures. The energetic particles, however, could be used to trace the ICME structure, which can then be compared with plasma and magnetic field measurements. More information on ion intensities, spectra, and composition regarding the events can be found in Mewaldt et al. (2007) and Mulligan et al. (2007).
### ICME at Ulysses and Beyond
Ulysses was at a distance 2.73 AU, latitude \\(-\\)74.9\\({}^{\\circ}\\) and longitude 123.3\\({}^{\\circ}\\) in the heliographic inertial system when it observed a large shock at 17:02 UT on December 17. Ulysses was about 117\\({}^{\\circ}\\) east and 74\\({}^{\\circ}\\) south of the Earth. The large spacecraft separation provides a great opportunity to measure the spatial extent of the CME-driven shock. Figure 7 displays the Ulysses data for a 5-day interval. The shock is apparent from the sharp increases in the plasma and magnetic field parameters. During this time period, there are no clear ICME signatures such as enhanced helium abundance, depressed proton temperature, and smooth strong magnetic fields compared with the ambient solar wind upstream of the shock. It is likely that the ICME is missed at Ulysses whereas the shock is observed. The shock has a speed about 870 km s\\({}^{-1}\\), smaller than the 1 AU speed (1030 km s\\({}^{-1}\\)) but not significantly. Given a speed difference larger than 740 km s\\({}^{-1}\\) between at the Sun and 1 AU, the primary deceleration of the shock must occur within 1 AU, and further out the shock moves with a roughly constant speed. Propagation of the shock from the Sun to Ulysses is quantified in SS4 by combining the coronagraph, radio, and in situ data with an MHD model. The shock normal has an elevation angle \\(\\theta\\simeq-38.3^{\\circ}\\) and azimuthal angle \\(\\phi\\simeq 77.5^{\\circ}\\) (RTN), resulting from the Rankine-Hugoniot calculations (Vinas & Scudder, 1986). The angle between the shock normal and the upstream magnetic field is about 68\\({}^{\\circ}\\), so the shock is also quasi-perpendicularat Ulysses.
To show that Ulysses observed the same shock as ACE/STEREO, we propagate the solar wind data outward from 1 AU using an MHD model (Wang et al., 2000). The model has had success in connecting solar wind observations at different spacecraft (e.g., Wang et al., 2001; Richardson et al., 2002, 2005, 2006; Liu et al., 2006b). The model assumes spherical symmetry (1D) since we have solar wind measurements at only a single point. We use the solar wind parameters observed at 1 AU (50 days long around the ICME) as input to the model and propagate the solar wind outward. Figure 8 shows the observed speeds at 1 AU and Ulysses and the model-predicted speeds at certain distances. Small streams smooth out due to stream interactions as can be seen from the traces, but the large stream associated with the shock persists out to Ulysses. The predicted arrival time of the shock at Ulysses is only about 3.6 hr earlier than observed. The time difference is negligible compared with the propagation time of \\(\\sim\\)75.1 hr from ACE to Ulysses. The ambient solar wind predicted by the model is slower than observed at Ulysses, which is reasonable since Ulysses is at the south pole (74\\({}^{\\circ}\\) south of ACE). Given the good stream alignment, we think that Ulysses and the near-Earth spacecraft observed the same shock. Ulysses may be observing the shock flank if the nose of the ICME is close to the ecliptic plane. The successful model-data comparison also indicates that the global shock surface is nearly spherical and the shock speed variation from the shock nose to flank is small. It is surprising that even at the south pole the shock is still observed; spacecraft configured as above (i.e., one close to the solar equatorial plane and the other at the pole) are rare while they observe the same shock. Note that the longitudinal size of the shock is also large with a lower limit of 117\\({}^{\\circ}\\) (the longitudinal separation between the Earth and Ulysses).
The large size of the shock indicates that the global configuration of the solar wind can be altered as the CME sweeps through the heliosphere. We also propagate the solar wind to large distances using the MHD model. The peak solar wind speeds quickly decrease as the high-speed flow interacts with the ambient medium. They are reduced to 630 km s\\({}^{-1}\\) at 10 AU and to 490 km s\\({}^{-1}\\) (close to the ambient level) by 50 AU. Therefore, the high-speed streams would not produce significant effects at large distances.
## 4 CME/Shock Propagation
Of particular interest for space weather forecasting is the CME/shock propagation in the inner heliosphere. In the absence of observations of plasma features, propagation of shocks within 1 AU can be characterized by type II radio emissions (e.g., Reiner et al., 2007). Type II radio bursts, typically drifting downward in frequency, are remote signatures of a shock moving through the heliosphere and driving plasma radiation near the plasma frequency and/or its second harmonic (e.g., Nelson & Melrose, 1985; Cane et al., 1987). The frequency drift results from the decrease of the plasma density as the shock propagates away from the Sun. The plasma frequency, \\(f_{p}\\) (kHz) = \\(8.97\\sqrt{n\\ ({\\rm cm^{-3}})}\\), can be converted to a heliocentric distance \\(r\\) by assuming a density model \\(n=n_{0}/r^{2}\\)
\\[r\\ ({\\rm AU})=\\frac{8.97\\sqrt{n_{0}\\ ({\\rm cm^{-3}})}}{f_{p}\\ ({\\rm kHz})}, \\tag{1}\\]
where \\(n_{0}\\) is the plasma density at 1 AU. The height-time profile of shock propagation can then be obtained from the frequency drift.
Figure 9 displays the dynamic spectrum as well as soft X-ray flux associated with the CME. An intense type III radio burst occurred at about 02:25 UT on December 13 (Day, 347), almost coincident with the peak of the X-ray flux. Type III bursts are produced by near-relativistic electrons escaping from the flaring site (e.g., Lin et al., 1973), so they drift very rapidly in frequency and appear as almost vertical features in the dynamic spectrum. Such an intense type III burst often indicates a major CME (Reiner et al., 2001). Note that many short-lived type III-like bursts are also seen starting from 17:00 UT on December 13; they are known as type III storms and presumably associated with a series of small electron beams injected from the Sun. Diffuse type II emissions occur at the fundamental and harmonic plasma frequencies and appear as slowly drifting features. They start after the type III burst and seem disrupted during the small flares around 13:00 UT on December 13 (see the X-ray flux). It is not clear whether the type II emissions after 16:00 UT on December 13 are at the fundamental or harmonic of the plasma frequency; the broad band may result from merging of the two branches. Apparently it is difficult to measure the frequency drift from individual frequencies associated with the type II bursts. An overall fit combined with in situ measurements at 1 AU would give a more accurate estimate for the height-time profile.
We employ a kinematic model to characterize the CME/shock propagation, similar to the approach of Gopalswamy et al. (2001) and Reiner et al. (2007). The shock is assumed to start with an initial speed \\(v_{0}\\) and a constant deceleration \\(a\\) lasting for a time period \\(t_{1}\\), and thereafter it moves with a constant speed \\(v_{s}\\). The shock speed \\(v_{s}\\) and transit time \\(t_{T}\\) are known from 1 AU measurements, leaving only two free parameters in the model (\\(a\\) and \\(t_{1}\\)). At a time \\(t\\), the distance of the shock can be expressed as
\\[r=\\left\\{\\begin{array}{ll}d+v_{s}(t-t_{T})+a(\\frac{1}{2}t^{2}+\\frac{1}{2}t_ {1}^{2}-t_{1}t)&\\quad t<t_{1}\\\\ d+v_{s}(t-t_{T})&\\quad t\\geq t_{1}\\end{array}\\right. \\tag{2}\\]
where \\(d=1\\) AU and \\(v_{0}=v_{s}-at_{1}\\). The trace of the fundamental branch of the type II bursts is singled out using an interactive program and shown in Figure 10; the selected frequenciesare converted to heliocentric distances using equation (1). We adjust the density scale factor \\(n_{0}\\) to obtain a best fit of the frequency drift; a value of \\(n_{0}\\simeq 13\\) cm\\({}^{-3}\\) gives a height-time profile that simultaneously matches the radio data and the shock parameters at 1 AU. The density model describes the average radial variation of the ambient density, so the scale factor is not necessarily the observed plasma density upstream of the shock at 1 AU. Two curves corresponding to the emissions at the fundamental and harmonic plasma frequencies are obtained from the best fit, as shown in Figure 9. The fit is forced to be consistent with the overall trend of the frequency-drifting bands; discrepancies are seen at some times due to irregularities of the type II emissions. Note that the best fit yields the radial kinematic parameters of the CME/shock propagation with projection effects minimized. The radial velocity of the shock near the Sun given by the best fit is \\(v_{0}\\simeq 2212\\) km s\\({}^{-1}\\), larger than the measured CME speed projected onto the sky (1774 km s\\({}^{-1}\\)). The deceleration is about \\(-34.7\\) m s\\({}^{-2}\\), lasting for \\(\\sim\\)9.5 hr which corresponds to a distance of about 0.36 AU. Thereafter the shock moves with a constant speed 1030 km s\\({}^{-1}\\) as measured at 1 AU.
In order to show how well the CME/shock is tracked by the fit, we extend the curve to the distance of Ulysses and plot in Figure 10 the CME locations measured by LASCO (see the LASCO CME catalogue at [http://cdaw.gsfc.nasa.gov](http://cdaw.gsfc.nasa.gov)), the MHD model output of the shock arrival times every 0.2 AU between 1 - 2.6 AU, and the shock arrival time at Ulysses. The fit agrees with the LASCO data, the MHD model output at different distances and finally the Ulysses measurement. Note that we only use the type II frequency drift and the 1 AU shock parameters to obtain the height-time profile. Even at large distances the shock is still tracked remarkably well by the fit. The agreement verifies the kinematic model for the CME/shock propagation; a value of 2212 km s\\({}^{-1}\\) should be a good estimate of the CME radial velocity near the Sun. The separation between the shock and CME should be very small near the Sun but is not negligible at 1 AU (see Figures 1 and 3).
These results present an important technique for space weather forecasting, especially when in situ measurements closer to the Sun are available (say, from the Solar Orbiter and Sentinels). In situ data closer to the Sun can be propagated to 1 AU by an MHD model; further constraints on the height-time profile are provided by the frequency drift of type II emissions. The advantage of this method is that the shock can be tracked continuously from the Sun all the way to 1 AU; the arrival time of CME-driven shocks at the Earth can be predicted with an accuracy less than a few hours \\(\\sim\\)days before they reach the Earth. Implementation of the method, specifically combining MHD propagation of the solar wind with type II frequency drift, is expected to be a routine possibility in the future when in situ data are available from the Solar Orbiter and Sentinels.
Summary and Discussion
We have investigated the evolution and propagation of the 2006 December 13 CME combining remote sensing and in situ measurements with modeling efforts. A comprehensive view of the CME is made possible by coordinated EUV, coronagraph, radio, particle and in situ plasma and magnetic field observations provided by a fleet of spacecraft including SOHO, STEREO, ACE, Wind and Ulysses.
The CME is accompanied by an X3.4 solar flare, EUV dimmings and EIT waves. It had a speed about 1774 km s\\({}^{-1}\\) near the Sun and produced SEP events, radio bursts, an interplanetary shock, and a large ICME embedded with an MC which gave rise to a major geomagnetic storm. The speed of the CME-driven shock is about 1030 km s\\({}^{-1}\\) at 1 AU, suggestive of a significant deceleration between the Sun and 1 AU. Reconstruction of the MC with the GS method indicates a flux-rope structure with an axis orientation oblique to the flare ribbons. We observe major SEP events at 1 AU, whose intensities and anisotropies are used to investigate the ICME structure. The ICME is still magnetically connected to the Sun, as indicated by the electron enhancement due to the X1.5 flare within the ICME. Particle deflection and exclusion by the ICME suggest that the energetic particle transport is largely dominated by the transient structures.
The CME-driven shock is also observed at Ulysses while the ICME seems missed. Ulysses was 74\\({}^{\\circ}\\) south and 117\\({}^{\\circ}\\) east of the Earth, indicative of a surprisingly large angular extent of the shock. The shock speed is about 870 km s\\({}^{-1}\\), comparable to its 1 AU counterpart. An MHD model using the 1 AU data as input successfully predicts the shock arrival time at Ulysses with a deviation of only 3.6 hr, substantially smaller than the propagation time 75.1 hr from ACE to Ulysses. The model results also show that the peak solar wind speeds quickly decrease at large distances. Consequently, the CME/shock would not cause large effects in the outer heliosphere.
To the best of our knowledge, this may be the largest CME-driven shock ever detected in the space era. Ulysses, launched in 1991, is the only spacecraft that can explore the solar wind conditions at high latitudes. A survey of ICMEs from observations of near-Earth spacecraft and Ulysses shows that these spacecraft are generally separated within 40\\({}^{\\circ}\\) in latitude when they observe the same CME-driven shock (Liu et al., 2005, 2006b). Reisenfeld et al. (2003) report an ICME as well as a preceding shock observed at both ACE and Ulysses with a latitudinal separation 73\\({}^{\\circ}\\) (comparable to the present one), but the longitudinal separation of the spacecraft is only 64\\({}^{\\circ}\\), much smaller than the current case. At the time of the Bastille Day event in 2000, Ulysses was 65\\({}^{\\circ}\\) south and 116\\({}^{\\circ}\\) east of the Earth, comparable to but smaller than the present spacecraft separation; the shock as well as the ICME, however, did not reach Ulysses (Zhang et al., 2003). During the record-breaking Halloween storm in 2003,the spacecraft that observed the preceding shock were all at low latitudes (Richardson et al., 2005). Other documented events in the last 150 years either occurred before the space era or were associated with spacecraft separations smaller than the current one (Burlaga, 1995; Cliver & Svalgaard, 2004; Gopalswamy et al., 2005; Richardson et al., 2002, 2006).
Tracking the interplanetary transport of CME-driven shocks (as well as measuring their global scale) is of critical importance for solar, heliospheric, and magnetospheric studies and space weather forecasting. We draw particular attention to the CME/shock propagation combining coronagraph images, type II bursts, in situ measurements and the MHD model. The height-time profile is deduced from the frequency drift of the type II bands and the shock parameters measured at 1 AU assuming a kinematic model; uncertainties in the frequency drift are minimized by the constraints from 1 AU data. The shock is tracked remarkably well by the height-time curve, as cross verified by LASCO data, the MHD model output at different distances and Ulysses observations. The CME/shock has a radial speed of 2212 km s\\({}^{-1}\\) near the Sun; the effective deceleration is about \\(-34.7\\) m s\\({}^{-2}\\) and lasts for 9.5 hr corresponding to a transit distance of 0.36 AU. These results demonstrate that a shock can be tracked from the Sun all the way to 1 AU (and larger distances) by combining MHD propagation of the solar wind and type II emissions, a crucial technique to predict the shock arrival time at the Earth with small ambiguities especially when in situ measurements closer to the Sun are available from the Solar Orbiter and Sentinels.
The research was supported by the STEREO project under grant NAS5-03131. We acknowledge the use of SOHO, GOES, ACE, Wind and Ulysses data and CME parameters from the LASCO CME catalogue maintained by NASA and the Catholic University of America in cooperation with NRL. We thank C. T. Russell for helping maintain the STEREO/MAG data, and are grateful to the referee for his/her helpful suggestions. This work was also supported in part by grant NNSFC 40621003.
## References
* Acuna et al. (2007) Acuna, M. H., et al. 2007, Space Sci. Rev., doi: 10.1007/s11214-007-9259-2
* Biesecker et al. (2002) Biesecker, D. A., Myers, D. C., Thompson, B. J., Hammer, D. M., & Vourlidas, A. 2002, ApJ, 569, 1009
* Burlaga (1995) Burlaga, L. F. 1995, Interplanetary Magnetohydrodynamics (New York: Oxford Univ. Press)
* Cane et al. (1987) Cane, H. V., Sheeley, N. R., Jr., & Howard, R. A. 1987, J. Geophys. Res., 92, 9869
* Cane et al. (2007)Cliver, E. W., & Svalgaard, L. 2004, Solar Phys., 224, 407
* () Dungey, J. W. 1961, Phys. Rev. Lett., 6, 47
* () Gopalswamy, N., Lara, A., Yashiro, S., Kaiser, M. L. & Howard, R. A. 2001, J. Geophys. Res., 106, 29207
* () Gopalswamy, N., Yashiro, S., Krucker, S., Stenborg, G., & Howard, R. A. 2004, J. Geophys. Res., 109, A12105
* () Gopalswamy, N., et al. 2005, J. Geophys. Res., 110, A09S15
* () Hau, L.-N., & Sonnerup, B. U. O. 1999, J. Geophys. Res., 104, 6899
* () Hu, Q., & Sonnerup, B. U. O. 2002, J. Geophys. Res., 107, 1142
* () Kahler, S. W., & Reames, D. V. 1991, J. Geophys. Res., 96, 9419
* () Kosovichev, A. G., & Sekii, T. 2007, ApJ, 670, L147
* () Lario, D., et al. 2005, J. Geophys. Res., 110, A09S11
* () Larson, D. E., et al. 1997, Geophys. Res. Lett., 24, 1911
* () Li, Y., et al. 2007, AGU Fall Meeting, abstract SH32A-0773
* () Lin, R. P., Evans, L. G., & Fainberg, J. 1973, Astrophys. Lett., 14, 191
* () Lin, R. P., et al. 2008, Space Sci. Rev., in press
* () Liu, Y., Richardson, J. D., & Belcher, J. W. 2005, Plan. Space Sci., 53, 3
* () Liu, Y., Richardson, J. D., Belcher, J. W., Kasper, J. C., & Skoug, R. M. 2006a, J. Geophys. Res., 111, A09108, doi:10.1029/2006JA011723
* () Liu, Y., Richardson, J. D., Belcher, J. W., Wang, C., Hu, Q., & Kasper, J. C. 2006b, J. Geophys. Res., 111, A12S03, doi:10.1029/2006JA011890
* () Liu, Y., Richardson, J. D., Belcher, J. W., & Kasper, J. C. 2007, ApJ, 659, L65
* () Liu, Y., et al. 2008, ApJ, 677, L133
* () Luhmann, J. G., et al. 2007, Space Sci. Rev., doi: 10.1007/s11214-007-9170-x
* () Malandraki, O. E., Lario, D., Lanzerotti, L. J., Sarris, E. T., Geranios, A., & Tsiropoula, G. 2005, J. Geophys. Res., 110, A09S06McKibben, R. B., et al. 2005, J. Geophys. Res., 110, A09S19
* () Mewaldt, R. A., et al. 2007, ICRC, in press
* () Muller-Mellin, R., et al. 2007, Space Sci. Rev., doi:10.1007/s11214-007-9204-4
* () Mulligan, T., Blake, J. B., & Mewaldt, R. A. 2007, ICRC, in press
* () Nelson, G. J., & Melrose, D. B. 1985, in Solar Radiophysics: Studies of Emission from the Sun at Metre Wavelengths, ed. D. J. McLean & N. R. Labrum (Cambridge: Cambridge Univ. Press), 333
* () Neupert, W. M. 1989, ApJ, 344, 504
* () Reiner, M. J., Kaiser, M. L., & Bougeret, J.-L. 2001, J. Geophys. Res., 106, 29989
* () Reiner, M. J., Kaiser, M. L., & Bougeret, J.-L. 2007, ApJ, 663, 1369
* () Reisenfeld, D. B., Gosling, J. T., Forsyth, R. J., Riley, P., & St. Cyr, O. C. 2003, Geophys. Res. Lett., 30, 8031
* () Richardson, J. D., Paularena, K. I., Wang, C., & Burlaga, L. F. 2002, J. Geophys. Res., 107, 1041
* () Richardson, J. D., Wang, C., Kasper, J. C., & Liu, Y. 2005, Geophys. Res. Lett., 32, L03S03
* () Richardson, J. D., et al. 2006, Geophys. Res. Lett., 33, L23107
* () Sauvaud, J.-A., et al. 2007, Space Sci. Rev., doi: 10/1007/s11214-007-9174-6
* () Schindler, K., Pfirsch, D., & Wobig, H. 1973, Plasma Phys., 15, 1165
* () Sturrock, P. A. 1994, Plasma Physics: An Introduction to the Theory of Astrophysical, Geophysical and Laboratory Plasmas (New York: Cambridge Univ. Press), 209
* () Thompson, B. J., et al. 1998, Geophys. Res. Lett., 25, 2465
* () Vinas, A. F., & Scudder, J. D. 1986, J. Geophys. Res., 91, 39
* () Vourlidas, A., Wu, S. T., Wang, A. H., Subramanian, P., & Howard, R. A. 2003, ApJ, 598, 1392
* () Wang, C., Richardson, J. D., & Gosling, J. T. 2000, J. Geophys. Res., 105, 2337
* () Wang, C., Richardson, J. D., & Paularena, K. I. 2001, J. Geophys. Res., 106, 13007
* ()* () Wang, Y., Zhou, G., Ye, P., Wang, S., & Wang, J. 2006, ApJ, 651, 1245
* () Zarro, D. M., Sterling, A. C., Thompson, B. J., Hudson, H. S., & Nitta, N. 1999, ApJ, 520, L139
* () Zhang, J., Li, L., & Song, Q. 2007, ApJ, 662, L35
* () Zhang, M., et al. 2003, J. Geophys. Res., 108, 1154.
Figure 1: Difference images of the CME and source region at different times. Filled in the circle are EIT difference images at 195 Å. A transition layer is visible around the CME front, indicating the existence of a shock (middle and right panels). Adapted from the LASCO CME catalogue at [http://cdaw.gsfc.nasa.gov](http://cdaw.gsfc.nasa.gov).
Figure 2: Pitch angle (PA) distribution of 247 eV electrons measured by STEREO A, and magnetic field strength, field elevation and azimuthal angles in RTN coordinates measured by STEREO A (black) and B (red) across the MC (bracketed by the two vertical lines). The dashed line denotes the arrival time of the MC-driven shock. The color shading indicates values of the electron flux (descending from red to blue).
Figure 3: Solar wind plasma and magnetic field parameters across the ICME (shaded region) observed at ACE. From top to bottom, the panels show the alpha/proton density ratio, proton density, bulk speed, proton temperature, magnetic field strength and components in RTN coordinates. The dotted lines denote the 8% level of the density ratio (first panel) and the expected proton temperature (fourth panel), respectively. The arrival time of the shock is marked by the vertical dashed line.
Figure 4: Left: Reconstructed cross section of the MC. Black contours show the distribution of the vector potential and the color shading indicates the value of the axial field. The dashed line marks the trajectory of the spacecraft. The arrows denote the direction and magnitude of the observed magnetic fields projected onto the cross section. The location of the maximum axial field is indicated by the plus sign. Right: An idealized schematic diagram of the MC approximated as a cylindrical flux rope in RTN coordinates with the arrow and helical line indicating the field orientation.
Figure 5: Intensities of electrons (top panel) and protons (second panel) at different energy channels measured by ACE/EPAM and anisotropies of electrons (third panel) and protons (bottom panel) observed by STEREO A/SEPT. The times of the flares and shocks are marked by the vertical dashed lines. The ICME and BDE intervals are indicated by the horizontal bars. The solid lines in the two bottom panels denote the normalized intensity differences along the directions defined by the arrows (third panel).
Figure 6: Electron intensities across the BDE interval (between the solid lines) measured by SWEA and STE D2 (one of the detectors looking away from the Sun) aboard STEREO B. The color scale shows the electron energies. The shock arrival time is indicated by the dashed line.
Figure 7: Same format as Figure 3, but for the measurements at Ulysses.
Figure 8: Evolution of solar wind speeds from ACE to Ulysses via the MHD model. The shaded region represents the ICME interval at ACE. The upper and lower solid lines show the solar wind speeds observed at ACE and Ulysses. The dotted lines denote the predicted speeds at the distances (in AU) marked by the numbers; each curve is decreased by 160 km s\\({}^{-1}\\) with respect to the previous one so that the individual profiles are discernable. The speed profiles at Ulysses (both observed and predicted) are shifted downward by 1360 km s\\({}^{-1}\\) from the 1 AU speeds.
Figure 9: Dynamic spectrum (colors) from Wind/WAVES and X-ray flux (solid line) from GOES 12. The dashed vertical line indicates the arrival time of the preceding shock at 1 AU, and the dotted lines represent the best fits of the frequency drift of the fundamental (F) and harmonic (H) type II bursts.
Figure 10: Height-time profile (solid line) of shock propagation determined from the frequency drift of the type II bands (dots) and shock parameters measured at 1 AU (\\(R_{\\odot}\\) being the solar radius). Pluses denote the LASCO data. Diamonds indicate the shock arrival times at 1 AU and Ulysses. Between 1 AU and Ulysses are the shock arrival times (filled circles) at [1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6] AU predicted by the MHD model. | The biggest halo coronal mass ejection (CME) since the Halloween storm in 2003, which occurred on 2006 December 13, is studied in terms of its solar source and heliospheric consequences. The CME is accompanied by an X3.4 flare, EUV dimmings and coronal waves. It generated significant space weather effects such as an interplanetary shock, radio bursts, major solar energetic particle (SEP) events, and a magnetic cloud (MC) detected by a fleet of spacecraft including STEREO, ACE, Wind and Ulysses. Reconstruction of the MC with the Grad-Shafranov (GS) method yields an axis orientation oblique to the flare ribbons. Observations of the SEP intensities and anisotropies show that the particles can be trapped, deflected and reaccelerated by the large-scale transient structures. The CME-driven shock is observed at both the Earth and Ulysses when they are separated by \\(74^{\\circ}\\) in latitude and \\(117^{\\circ}\\) in longitude, the largest shock extent ever detected. The ejecta seems missed at Ulysses. The shock arrival time at Ulysses is well predicted by an MHD model which can propagate the 1 AU data outward. The CME/shock is tracked remarkably well from the Sun all the way to Ulysses by coronagraph images, type II frequency drift, in situ measurements and the MHD model. These results reveal a technique which combines MHD propagation of the solar wind and type II emissions to predict the shock arrival time at the Earth, a significant advance for space weather forecasting especially when in situ data are available from the Solar Orbiter and Sentinels. | Provide a brief summary of the text. |
arxiv-format/0802_3013v2.md | # Simulation of Free Surface Compressible Flows via a Two Fluid Model
Frederic Dias
Denys Dutykh
Jean-Michel Ghidaglia
Ecole Normale Superieure de Cachan
Centre de Mathematiques et de Leurs Applications
61, Avenue du President Wilson
94235 Cachan France
Email: [email protected], [email protected], [email protected]
## Introduction
One of the challenges in Computational Fluid Dynamics (CFD) is to determine efforts exerted by waves on coastal structures. Such flows can be quite complicated and in particular when the sea is rough, wave breaking can lead to flows that cannot be described by basic models like _e.g._ free surface Euler or Navier-Stokes equations. In a free surface model, the boundary between the gas (air) and the liquid (water) is a surface. The liquid flow is assumed to be incompressible, while the gas is represented by a media, above the liquid, where the pressure is constant (the atmospheric pressure in general). Such a description is known to be valid as far as propagation in the open sea of waves with moderate amplitude is concerned. Clearly it is not satisfactory when waves either break or hit coastal structures like offshore platforms, jetties, piers, breakwaters _etc._
In this work, our goal is to investigate a two-fluid model for this kind of problem. It belongs to the family of averaged models, that is although the two fluids considered are not miscible, there exists a length scale \\(\\epsilon\\) in order that each averaging volume (of size \\(\\epsilon^{3}\\)) contain representative samples of each of the fluids. Once the averaging process is done, it is assumed that the two fluids share, locally, the same pressure, temperature and velocity. Such models are called homogeneous models in the literature. They can be seen as limiting case of more general two-fluid models where the two fluids could have different temperatures and velocities [1].
The influence of the presence of air in wave impacts is a difficult topic. While it is usually thought that the presence of air softens the impact pressures, recent results by Bullock et al. [2] show that the cushioning effect due to aeration via the increased compressibility of the air-water mixture is not necessarily a dominant effect. First of all, air may become trapped or entrained in the water in different ways, for example as a single bubble trapped against a wall, or as a column or cloud of small bubbles. In addition, it is not clear which quantity is the most appropriate to measure impacts. For example some researchers pay more attention to the pressure impulse than to pressure peaks. The pressure impulse is defined as the integral of pressure over the short duration of impact. Bagnold [3], for example, noticed thatthe maximum pressure and impact duration differed from one identical wave impact to the next, even in carefully controlled laboratory experiments, while the pressure impulse appears to be more repeatable. For sure, the simple one-fluid models which are commonly used for examining the peak impacts are no longer appropriate in the presence of air. There are few studies dealing with two-fluid models. An exception is the work by Pereging and his collaborators. Wood, Peregrine & Bruce [4] used the pressure impulse approach to model a trapped air pocket. Peregrine & Thais [5] examined the effect of entrained air on a particular kind of violent water wave impact by considering a filling flow. Bullock et al. [6] found pressure reductions when comparing wave impact between fresh and salt water where, due to the different properties of the bubbles in the two fluids, the aeration levels are much higher in salt water than in fresh. H. Bredmose recently performed numerical experiments on a two-fluid system which is quite similar to the one we will use below. He developed a finite volume solver for aerated flows named Flair [7].
## Mathematical model
In this section we present the equations which govern the motion of two phase mixtures in a computational domain \\(\\Omega\\). First of all, we need to introduce the notation which will be used throughout this article. We use superscripts \\(\\pm\\) to denote any quantity which is related to liquid and gas respectively. For example, \\(\\alpha^{+}\\) and \\(\\alpha^{-}\\) denote the volume fraction of liquid and gas and obviously satisfy the condition \\(\\alpha^{+}+\\alpha^{-}=1\\). Then, we have the following classical quantities: \\(\\rho^{\\pm}\\), \\(\\vec{u}\\), \\(p\\), \\(e\\), \\(E\\), \\(\\vec{g}\\) which denote the density of each phase, the velocity field vector, the pressure, the internal & total energy and the acceleration due to gravity correspondingly.
Conservation of mass (one equation for each phase), momentum and energy lead to the four following equations:
\\[(\\alpha^{\\pm}\\rho^{\\pm})_{t}+\
abla\\cdot(\\alpha^{\\pm}\\rho^{\\pm} \\vec{u}) = 0, \\tag{1}\\] \\[(\\rho\\vec{u})_{t}+\
abla\\cdot\\left(\\rho\\vec{u}\\otimes\\vec{u}+p \\mathbb{I}\\right) = \\rho\\vec{g},\\] (2) \\[\\left(\\rho E\\right)_{t}+\
abla\\cdot\\left(\\rho H\\vec{u}\\right) = \\rho\\vec{g}\\cdot\\vec{u}, \\tag{3}\\]
where \\(\\rho:=\\alpha^{+}\\rho^{+}+\\alpha^{-}\\rho^{-}\\) (the total density), \\(H:=E+\\frac{p}{\\rho}\\) (the specific enthalpy), \\(E=e+\\frac{1}{2}|\\vec{u}|^{2}\\) (the total energy). This system can be seen as the single energy and infinite drag limit of the more conventional six equations model [1]. The above system contains five unknowns \\(\\alpha^{\\pm}\\rho^{\\pm}\\), \\(\\vec{u}\\), \\(p\\) and \\(E\\) and only four governing equations (1) - (3). In order to close the system, we need to provide the so-called equation of state (EOS) \\(p=p^{\\pm}(\\rho^{\\pm},e^{\\pm})\\). The construction of the EOS will be discussed below.
It is possible to rewrite these equations as a system of balance laws
\\[\\frac{\\partial\\mathbf{w}}{\\partial t}+\
abla\\cdot\\mathcal{F}(\\mathbf{w})=S( \\mathbf{w}), \\tag{4}\\]
where \\(\\mathbf{w}(x,t):\\mathbb{R}^{d}\\times\\mathbb{R}^{+}\\mapsto\\mathbb{R}^{m}\\) is the vector of conservative variables (in the present study \\(d=2\\) or \\(3\\) and \\(m=5\\)), \\(\\mathcal{F}(\\mathbf{w})\\) is the advective flux function and \\(S(\\mathbf{w})\\) the source term.
The conservative variables in the 2D case are defined as follows:
\\[\\mathbf{w}=(w_{i})_{i=1}^{5}:=(\\alpha^{+}\\rho^{+},\\alpha^{-}\\rho^{-},\\ \\rho u,\\ \\rho v,\\ \\rho E). \\tag{5}\\]
The flux projection on the normal direction \\(\\vec{n}=(n_{1},n_{2})\\) can be expressed in physical and conservative variables
\\[\\mathcal{F}\\cdot\\vec{n}=(\\alpha^{+}\\rho^{+}u_{n},\\alpha^{-}\\rho ^{-}u_{n},\\rho uu_{n}+pn_{1},\\rho vu_{n}+pn_{2},\\rho Hu_{n})=\\\\ \\left(w_{1}\\frac{w_{2}n_{1}+w_{4}n_{2}}{w_{1}+w_{2}},w_{2}\\frac{ w_{3}n_{1}+w_{4}n_{2}}{w_{1}+w_{2}},w_{3}\\frac{w_{2}n_{1}+w_{4}n_{2}}{w_{1}+w_{ 2}}+pn_{1},\\right.\\\\ \\left.w_{4}\\frac{w_{3}n_{1}+w_{4}n_{2}}{w_{1}+w_{2}}+pn_{2},(w_{5 }+p)\\frac{w_{3}n_{1}+w_{4}n_{2}}{w_{1}+w_{2}}\\right) \\tag{6}\\]
where \\(u_{n}:=\\vec{u}\\cdot\\vec{n}=un_{1}+wn_{2}\\) is the velocity projection on the normal direction \\(\\vec{n}\\). The jacobian matrix \\(\\mathbb{A}_{n}(\\mathbf{w}):=\\frac{\\partial(\\mathcal{F}\\cdot\\vec{n})(\\mathbf{ w})}{\\partial\\mathbf{w}}\\) can be easily computed. This matrix has three distinct eigenvalues:
\\[\\lambda_{1}=u_{n}-c_{s},\\quad\\lambda_{2,3,4}=u_{n},\\quad\\lambda_{5}=u_{n}+c_{s},\\]
where \\(c_{s}\\) is the sound speed in the mixture. Its expression can be found in [8]. One can conclude that the system (1) - (3) is hyperbolic. This hyperbolicity represents the major advantage of this model. The computation of the eigenvectors is trickier but can still be performed analytically. We do not give here the final expressions since they are cumbersome.
## Equation of state
In the present work we assume that the light fluid is described by an ideal gas type law
\\[p^{-}=(\\gamma-1)\\rho^{-}e^{-},\\qquad e^{-}=c_{v}^{-}T^{-}, \\tag{7}\\]
while the heavy fluid is modeled by Tait's law. In the literature Tait's law is sometimes called the stiffened gas law [9, 10]:
\\[p^{+}+\\pi_{0}=(\\mathcal{N}-1)\\rho^{+}e^{+},\\qquad e^{+}=c_{v}^{+}T^{+}+\\frac{ \\pi_{0}}{\\mathcal{N}\\rho^{+}}, \\tag{8}\\]where the quantities \\(\\gamma\\), \\(c_{v}^{\\pm}\\), \\(\\mathcal{N}\\), \\(\\pi_{0}\\) are constants. For example, pure water is well described when we take \\(\\mathcal{N}=7\\) and \\(\\pi_{0}=2.1\\times 10^{9}\\) Pa.
**Remark 1**.: _In practice, the constants \\(c_{v}^{\\pm}\\) can be calculated after simple algebraic manipulations of equations (7), (8) and matching with experimental values at normal conditions:_
\\[c_{v}^{-}\\equiv\\frac{p_{0}}{(\\gamma-1)\\rho_{0}^{-}T_{0}},\\quad c_{v}^{+}\\equiv \\frac{\\mathcal{N}\\,p_{0}+\\pi_{0}}{(\\mathcal{N}-1)\\mathcal{N}\\,\\rho_{0}^{+}\\,T _{0}}.\\]
The sound velocities in each phase are given by the following formulas:
\\[(c_{s}^{-})^{2}=\\frac{\\gamma p^{-}}{\\rho^{-}},\\qquad(c_{s}^{+})^{2}=\\frac{ \\mathcal{N}\\,p^{+}+\\pi_{0}}{\\rho^{+}}. \\tag{9}\\]
In order to construct an equation of state for the mixture, we make the additional assumption that the two phases are in thermodynamic equilibrium:
\\[p^{+}=p^{-},\\qquad T^{+}=T^{-}. \\tag{10}\\]
Below, values of the common pressure and common temperature will be denoted by \\(p\\) and \\(T\\) respectively. The technical details can be found in Chapter 3, [11].
### Finite volume scheme on unstructured meshes
Finite volume methods are a class of discretization schemes that have proven highly successful in solving numerically a wide class of conservation law systems. These systems often come from compressible fluid dynamics. When compared to other discretization methods such as finite elements or finite differences, the primary interests of finite volume methods are robustness, applicability on very general unstructured meshes, and the intrinsic local conservation properties. Hence, with this type of discretization, we conserve \"exactly\" the mass, momentum and total energy1.
Footnote 1: This statement is true in the absence of source terms and appropriate boundary conditions.
In order to solve numerically the system of balance laws (1) - (3) we use (4). The system (4) should be provided with initial condition
\\[\\mathbf{w}(x,0)=\\mathbf{w}_{0}(x) \\tag{11}\\]
and appropriate boundary conditions.
The computational domain \\(\\Omega\\subset\\mathbb{R}^{d}\\) is triangulated into a set of non overlapping control volumes that completely cover the domain. Let \\(\\mathcal{T}\\) denote a tesselation of the domain \\(\\Omega\\) with control volume \\(K\\) such that
\\[\\cup_{K\\in\\mathcal{T}}\\bar{K}=\\bar{\\Omega},\\quad\\bar{K}:=K\\cup\\partial K.\\]
For two distinct control volumes \\(K\\) and \\(L\\) in \\(\\mathcal{T}\\), the intersection is either an edge (2D) or face (3D) with oriented normal \\(\\vec{n}_{KL}\\) or else a set of measure at most \\(d-2\\) (in 2D it is just a vertex, in 3D it can also be a segment, for example). We need to introduce the following notation for the neighbourhood of \\(K\\):
\\[\\mathcal{N}(K):=\\left\\{L\\in\\mathcal{T}:\\text{area}(K\\cap L)\
eq 0\\right\\},\\]
a set of all control volumes \\(L\\) which share a face (or an edge in 2D) with the given volume \\(K\\). In this article, we denote by \\(\\text{vol}(\\cdot)\\) and \\(\\text{area}(\\cdot)\\) the \\(d\\) and \\(d-1\\) dimensional measures2 respectively.
Footnote 2: In other words, in 3D the notation \\(\\text{area}(\\cdot)\\) and \\(\\text{vol}(\\cdot)\\) are very natural and mean area and volume respectively, while in 2D they refer to the area and the length.
The choice of control volume tesselation is flexible in the finite volume method. In the present study we retained a so-called cell-centered approach, which means that degrees of freedom are associated to cell barycenters.
The first steps in Finite Volume (FV) methods are classical. We start by integrating equation (4) on the control volume \\(K\\) (see Figure 1 for illustration) and we apply Gauss-Ostrogradsky theorem for advective fluxes. Then, in each control volume, an integral conservation law statement is imposed:
\\[\\frac{d}{dt}\\int_{K}\\mathbf{w}\\ d\\Omega+\\int_{\\partial K}\\mathcal{F}(\\mathbf{ w})\\cdot\\vec{n}_{KL}\\ d\\sigma=\\int_{K}S(\\mathbf{w})\\ d\\Omega. \\tag{12}\\]
T
Figure 1: An example of control volume \\(K\\) with barycenter \\(O\\). The normal pointing from \\(K\\) to \\(L\\) is denoted by \\(\\vec{n}_{KL}\\).
Physically an integral conservation law asserts that the rate of change of the total amount of a quantity (for example: mass, momentum, total energy, etc) with density \\(\\mathbf{w}\\) in a fixed control volume \\(K\\) is balanced by the flux \\(\\mathcal{F}\\) of the quantity through the boundary \\(\\partial K\\) and the production of this quantity \\(\\mathcal{S}\\) inside the control volume.
The next step consists in introducing the so-called control volume cell average for each \\(K\\in\\mathcal{T}\\)
\\[\\mathbf{w}_{K}(t):=\\frac{1}{\\text{vol}(K)}\\int_{K}\\mathbf{w}(\\vec{x},t)\\;d \\Omega\\;.\\]
After the averaging step, the finite volume method can be interpreted as producing a system of evolution equations for cell averages, since
\\[\\frac{d}{dt}\\int_{K}\\mathbf{w}(\\vec{x},t)\\;d\\Omega=\\text{vol}(K)\\frac{d \\mathbf{w}_{K}}{dt}\\;.\\]
Godunov was the first [12] who pursued and applied these ideas to the discretization of the gas dynamics equations.
However, the averaging process implies piecewise constant solution representation in each control volume with value equal to the cell average. The use of such representation makes the numerical solution multivalued at control volume interfaces. Thereby the calculation of the fluxes \\(\\int_{\\partial K}\\mathcal{F}(\\mathbf{w})\\cdot\\vec{n}_{KL}\\;d\\sigma\\) at these interfaces is ambiguous. The next fundamental aspect of finite volume methods is the idea of substituting the true flux at interfaces by a numerical flux function
\\[\\big{(}\\mathcal{F}(\\mathbf{w})\\cdot\\vec{n}\\big{)}\\big{|}_{\\partial K\\cap \\partial L}\\longleftarrow\\Phi(\\mathbf{w}_{K},\\mathbf{w}_{L};\\vec{n}_{KL}): \\mathbb{R}^{m}\\times\\mathbb{R}^{m}\\mapsto\\mathbb{R}^{m}\\;,\\]
a Lipschitz continuous function of the two interface states \\(\\mathbf{w}_{K}\\) and \\(\\mathbf{w}_{L}\\). The heart of the matter in finite volume method consists in the choice of the numerical flux function \\(\\Phi\\). In general this function is calculated as an exact or even better approximate local solution of the Riemann problem posed at these interfaces. In the present study we decided to choose the numerical flux function according to FVCF scheme extensively described in [13].
The numerical flux is assumed to satisfy the properties:
**Conservation.**: This property ensures that fluxes from adjacent control volumes sharing an interface exactly cancel when summed. This is achieved if the numerical flux function satisfies the identity
\\[\\Phi(\\mathbf{w}_{K},\\mathbf{w}_{L};\\vec{n}_{KL})=-\\Phi(\\mathbf{w}_{L},\\mathbf{ w}_{K};\\vec{n}_{LK}).\\]
**Consistency.**: The consistency is obtained when the numerical flux with identical state arguments (in other words it means that the solution is continuous through an interface) reduces to the true flux of the same state, i.e.
\\[\\Phi(\\mathbf{w},\\mathbf{w};\\vec{n})=(\\mathcal{F}\\cdot\\vec{n})(\\mathbf{w}).\\]
After introducing the cell averages \\(\\mathbf{w}_{K}\\) and numerical fluxes into (12), the integral conservation law statement becomes
\\[\\frac{d\\mathbf{w}_{K}}{dt}+\\sum_{L\\in\\mathcal{T}(K)}\\frac{\\text{ area}(L\\cap K)}{\\text{vol}(K)}\\Phi(\\mathbf{w}_{K},\\mathbf{w}_{L};\\vec{n}_{KL})=\\] \\[=\\frac{1}{\\text{vol}(K)}\\int_{K}S(\\mathbf{w})\\;d\\Omega\\;.\\]
We denote by \\(S_{K}\\) the approximation of the following quantity \\(\\frac{1}{\\text{vol}(K)}\\int_{K}S(\\mathbf{w})\\;d\\Omega\\). Thus, the following system of ordinary differential equations (ODE) is called a semi-discrete finite volume method:
\\[\\frac{d\\mathbf{w}_{K}}{dt}+\\sum_{L\\in\\mathcal{T}(K)}\\frac{\\text{area}(L\\cap K )}{\\text{vol}(K)}\\Phi(\\mathbf{w}_{K},\\mathbf{w}_{L};\\vec{n}_{KL})=S_{K},\\quad \\forall K\\in\\mathcal{T}\\;. \\tag{13}\\]
The initial condition for this system is given by projecting (11) onto the space of piecewise constant functions
\\[\\mathbf{w}_{K}(0)=\\frac{1}{\\text{vol}\\left(K\\right)}\\int_{K}\\mathbf{w}_{0}(x) \\;d\\Omega\\;.\\]
This system of ODE (13) should also be discretized. There is a variety of explicit and implicit time integration methods. We chose the following third order four-stage SSP-RK(3,4) scheme [14, 15] with \\(\\text{CFL}=2\\):
\\[u^{(1)} =u^{(n)}+\\frac{1}{2}\\Delta t\\,\\mathcal{L}(u^{(n)}),\\] \\[u^{(2)} =u^{(1)}+\\frac{1}{2}\\Delta t\\,\\mathcal{L}(u^{(1)}),\\] \\[u^{(3)} =\\frac{2}{3}u^{(n)}+\\frac{1}{3}u^{(2)}+\\frac{1}{6}\\Delta t\\,\\mathcal{L}(u^{(2)}),\\] \\[u^{(n+1)} =u^{(3)}+\\frac{1}{2}\\Delta t\\,\\mathcal{L}(u^{(3)}).\\]
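A direct transcription of this time stepper, written against a generic semi-discrete right-hand side \\(\\mathcal{L}\\) such as the sketch above:

```python
# One SSP-RK(3,4) step for du/dt = L(u); L is, e.g., rhs() above
# with dx and source frozen in a closure.
def ssprk34_step(u, dt, L):
    u1 = u + 0.5 * dt * L(u)
    u2 = u1 + 0.5 * dt * L(u1)
    u3 = (2.0 / 3.0) * u + (1.0 / 3.0) * u2 + (dt / 6.0) * L(u2)
    return u3 + 0.5 * dt * L(u3)
```

Each step costs four evaluations of \\(\\mathcal{L}\\), but the scheme remains strongly stable up to \\(\\text{CFL}=2\\), i.e. twice the forward Euler time step.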
### Sign matrix computation
In the context of the FVCF scheme (see [13] for more details), we need to compute the so-called sign matrix which is defined in the following way
\\[U_{n}:=\\text{sign}(\\mathbb{A}_{n})=R\\,\\text{sign}(\\Lambda)L,\\]
where \\(R\\), \\(L\\) are the matrices composed of right and left eigenvectors, respectively, and \\(\\Lambda=\\text{diag}(\\lambda_{1},\\ldots,\\lambda_{m})\\) is the diagonal matrix of eigenvalues of the Jacobian.
This definition gives the first, "direct" method of sign matrix computation. Since the advection operator is relatively simple, after a few algebraic manipulations we can compute the matrices \\(R\\) and \\(L\\) analytically. For more complicated two-phase models it is almost impossible to perform this computation in closed analytical form. In that case, one has to apply numerical techniques for eigensystem computations, which turn out to be costly and not very accurate. In the present work we use physical information about the model in the numerical computations.
There is another way which is less expensive. The main idea is to construct a kind of interpolation polynomial which takes the following values
\\[P(u_{n}\\pm c_{s})=\\text{sign}(u_{n}\\pm c_{s}),\\quad P(u_{n})=\\text{sign}(u_{n}).\\]
These three conditions determine a second degree interpolation polynomial. When \\(P\\) is evaluated at the matrix \\(\\mathbb{A}_{n}\\), we obtain the sign matrix \\(U_{n}\\), since \\(P(\\mathbb{A}_{n})=R\\,P(\\Lambda)\\,L=R\\,\\text{sign}(\\Lambda)\\,L=U_{n}\\). The construction of the Lagrange interpolation polynomial \\(P(\\lambda)\\) is simple.
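A small sketch of this construction (our own code, for a square Jacobian with eigenvalues \\(u_{n}-c_{s}\\), \\(u_{n}\\), \\(u_{n}+c_{s}\\)); the polynomial coefficients are obtained here from an equivalent \\(3\\times 3\\) Vandermonde solve instead of the explicit Lagrange form:

```python
# Degree-2 polynomial P with P(lambda_i) = sign(lambda_i) at the three
# eigenvalues, then evaluated at the matrix A_n: P(A_n) = R sign(Lambda) L.
import numpy as np

def sign_matrix(A_n, u_n, c_s):
    lam = np.array([u_n - c_s, u_n, u_n + c_s])   # eigenvalues of A_n
    coeff = np.linalg.solve(np.vander(lam, increasing=True), np.sign(lam))
    I = np.eye(A_n.shape[0])
    return coeff[0] * I + coeff[1] * A_n + coeff[2] * (A_n @ A_n)
```

Note that the Vandermonde solve degenerates when \\(c_{s}\\to 0\\) and the three eigenvalues nearly coincide, which is consistent with the loss of robustness near pure phase states reported next.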
In our research code we have implemented both methods. Our experience shows that the interpolation method is quicker and gives correct results in most test cases. However, when we approach pure phase states, it shows rather bad numerical behaviour. It can lead to instabilities and diminish overall code robustness. Thus, whenever possible we suggest using the computation of the Jacobian eigenstructure.
### Second order scheme
If we analyze the above scheme, we see that we have only one degree of freedom per data storage location. Hence, we can expect at most first order accuracy. In the numerical community first order schemes are generally considered too inaccurate for most quantitative calculations. Of course, we can always make the mesh spacing extremely small, but this is not a solution since it makes the scheme inefficient. From the theoretical point of view the situation is even worse, since the \\(\\mathcal{O}(h^{\\frac{1}{2}})\\)\\(L_{1}\\)-norm error bound for monotone and E-flux schemes [16] is known to be sharp [17], although an \\(\\mathcal{O}(h)\\) solution error is routinely observed in numerical experiments. On the other hand, Godunov has shown [12] that all linear schemes that preserve solution monotonicity are at most first order accurate. This rather negative result suggests that a higher order accurate scheme has to be essentially nonlinear in order to attain simultaneously a monotone resolution of discontinuities and high order accuracy in continuous regions.
A significant breakthrough in the generalization of finite volume methods to higher order accuracy is due to Kolgan [18, 19] and van Leer [20]. They proposed a kind of post-treatment procedure currently known as solution _reconstruction_ or MUSCL3 scheme. In the above papers the authors used linear reconstruction (it will be retained in this study as well) but this method was already extended to quadratic approximations in each cell.
Footnote 3: MUSCL stands for Monotone Upstream-Centered Scheme for Conservation Laws.
In this paper we briefly describe the construction and practical implementation of a second-order nonlinear scheme on unstructured (possibly highly distorted) meshes. The main idea is to represent the solution as a piecewise affine function on each cell. Linear reconstruction operators on simplicial control volumes often exploit the fact that the cell average is also a pointwise value of any valid (conservative) linear reconstruction evaluated at the gravity center of a simplex. This reduces the linear reconstruction problem to gradient estimation at cell centers, given cell averaged data. In this case, we express the reconstruction in the form
\\[\\mathbf{w}_{K}(\\vec{x})=\\bar{\\mathbf{w}}_{K}+(\\nabla\\mathbf{w})_{K}\\cdot(\\vec{x}-\\vec{x}_{0}),\\quad K\\in\\mathcal{T}\\;, \\tag{14}\\]
where \\(\\bar{\\mathbf{w}}_{K}\\) is the cell averaged value given by the finite volume method, \\((\\nabla\\mathbf{w})_{K}\\) is the solution gradient estimate (to be determined) on the cell \\(K\\), \\(\\vec{x}\\in K\\) and the point \\(\\vec{x}_{0}\\) is chosen to be the gravity center of the simplex \\(K\\).
It is very important to note that with this type of representation (14) we remain absolutely conservative, i.e.
\\[\\frac{1}{\\text{vol}(K)}\\int_{K}\\mathbf{w}_{K}(\\vec{x})\\;d\\Omega\\equiv\\bar{ \\mathbf{w}}_{K}\\]
due to the choice of the point \\(\\vec{x}_{0}\\). This point is crucial for finite volumes because of the intrinsic conservation properties of the method.
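This can be verified in a few lines: for a linear function, the exact mean over a triangle equals its value at the barycenter, so the gradient term of (14) averages to zero. A tiny numerical illustration of our own, with arbitrary numbers:

```python
# The linear term of (14) integrates to zero over K when x0 is the
# barycenter: the mean of a linear function over a triangle equals its
# value at the barycenter, i.e. the average of the three vertex values.
import numpy as np

V  = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])  # triangle vertices
x0 = V.mean(axis=0)                                  # barycenter
g  = np.array([1.3, -0.7])                           # arbitrary gradient
print(np.mean((V - x0) @ g))                         # 0.0 up to round-off
```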
In our numerical code we implemented two common techniques of gradient reconstruction: Green-Gauss integration and least squares methods. In this paper we describe only the least squares reconstruction method. The Barth-Jespersen limiter [21] is incorporated to obtain non-oscillatory resolution of discontinuities and steep gradients. We refer to [11] for more details.
### Least-squares gradient reconstruction method
In this section we consider a triangle control volume \\(K\\) with three adjacent neighbors \\(T_{1}\\), \\(T_{2}\\) and \\(T_{3}\\). Their barycenters are denoted by \\(O(\\vec{x}_{0})\\), \\(O_{1}(\\vec{x}_{1})\\), \\(O_{2}(\\vec{x}_{2})\\) and \\(O_{3}(\\vec{x}_{3})\\) respectively. In the following we denote by \\(\\mathbf{w}_{i}\\) the solution value at gravity centers \\(O_{i}\\):
\\[\\mathbf{w}_{i}:=\\mathbf{w}(\\vec{x}_{i}),\\quad\\mathbf{w}_{0}:=\\mathbf{w}(\\vec{x}_{0}).\\]
Our purpose here is to estimate \\(\\nabla\\textbf{w}=(\\partial_{x}\\textbf{w},\\partial_{y}\\textbf{w})\\) on the cell \\(K\\). Using the Taylor formula, we can write down the three following relations:
\\[\\textbf{w}_{1}-\\textbf{w}_{0} =(\\nabla\\textbf{w})_{K}\\cdot(\\vec{x}_{1}-\\vec{x}_{0})+\\mathcal{O}(h^{2}), \\tag{15}\\] \\[\\textbf{w}_{2}-\\textbf{w}_{0} =(\\nabla\\textbf{w})_{K}\\cdot(\\vec{x}_{2}-\\vec{x}_{0})+\\mathcal{O}(h^{2}), \\tag{16}\\] \\[\\textbf{w}_{3}-\\textbf{w}_{0} =(\\nabla\\textbf{w})_{K}\\cdot(\\vec{x}_{3}-\\vec{x}_{0})+\\mathcal{O}(h^{2}). \\tag{17}\\]
If we drop the higher order terms \\(\\mathcal{O}(h^{2})\\), these relations can be viewed as a linear system of three equations for two unknowns4 \\((\\partial_{x}\\textbf{w},\\partial_{y}\\textbf{w})\\). This situation is due to the fact that the number of edges incident to a simplex in an \\(\\mathbb{R}^{d}\\) mesh is greater than or equal to \\(d\\), thereby producing the linear constraint equations (15) - (17), which will be solved analytically here in a least squares sense.
Footnote 4: This simple estimation is done for scalar case only \\(\\textbf{w}=(w)\\). For more general vector problems the numbers of equations and unknowns have to be changed depending on the dimension of vector **w**.
First of all, each constraint (15) - (17) is multiplied by a weight \\(\\omega_{i}\\in(0,1)\\) which will be chosen below to account for distorted meshes. In matrix form our non-square system becomes
\\[\\begin{pmatrix}\\omega_{1}\\Delta x_{1}&\\omega_{1}\\Delta y_{1}\\\\ \\omega_{2}\\Delta x_{2}&\\omega_{2}\\Delta y_{2}\\\\ \\omega_{3}\\Delta x_{3}&\\omega_{3}\\Delta y_{3}\\end{pmatrix}(\\nabla\\textbf{w})_{K}=\\begin{pmatrix}\\omega_{1}(\\textbf{w}_{1}-\\textbf{w}_{0})\\\\ \\omega_{2}(\\textbf{w}_{2}-\\textbf{w}_{0})\\\\ \\omega_{3}(\\textbf{w}_{3}-\\textbf{w}_{0})\\end{pmatrix},\\]
where \\(\\Delta x_{i}=x_{i}-x_{0}\\), \\(\\Delta y_{i}=y_{i}-y_{0}\\). For further developments it is convenient to rewrite our constraints in abstract form
\\[[\\vec{L}_{1},\\,\\vec{L}_{2}]\\cdot(\\nabla\\textbf{w})_{K}=\\vec{f}. \\tag{18}\\]
We use the normal equation technique in order to solve this abstract form symbolically in a least squares sense. Multiplying both sides of (18) on the left by \\([\\vec{L}_{1}\\vec{L}_{2}]^{\\prime}\\) yields
\\[G(\
abla\\textbf{w})_{K}=\\vec{b},\\quad G=(l_{i})_{1\\leq i,j\\leq 2}=\\begin{pmatrix}( \\vec{L}_{1}\\cdot\\vec{L}_{1})&(\\vec{L}_{1}\\cdot\\vec{L}_{2})\\\\ (\\vec{L}_{2}\\cdot\\vec{L}_{1})&(\\vec{L}_{2}\\cdot\\vec{L}_{2})\\end{pmatrix} \\tag{19}\\]
where \\(G\\) is the Gram matrix of vectors \\(\\left\\{\\vec{L}_{1},\\vec{L}_{2}\\right\\}\\) and \\(\\vec{b}=\\begin{pmatrix}(\\vec{L}_{1}\\cdot\\vec{f})\\\\ (\\vec{L}_{2}\\cdot\\vec{f})\\end{pmatrix}.\\) The so-called normal equation (19) is easily solved by Cramer's rule to give the following result
\\[(\
abla\\textbf{w})_{K}=\\frac{1}{l_{11}l_{22}-l_{12}^{2}}\\begin{pmatrix}l_{22}( \\vec{L}_{1}\\cdot\\vec{f})-l_{12}(\\vec{L}_{2}\\cdot\\vec{f})\\\\ l_{11}(\\vec{L}_{2}\\cdot\\vec{f})-l_{12}(\\vec{L}_{1}\\cdot\\vec{f})\\end{pmatrix}.\\]
The form of this solution suggests that the least squares linear reconstruction can be efficiently computed without the need for storing a non-square matrix.
Now we have to discuss the choice of the weight coefficients \\(\\left\\{\\omega_{i}\\right\\}_{i=1}^{3}\\). The basic idea is to attribute bigger weights to cell barycenters closer to the node \\(N\\) under consideration. One possible choice consists in taking a harmonic mean of the respective distances \\(r_{i}=||\\vec{x}_{i}-\\vec{x}_{N}||\\). This purely metric argument takes the following mathematical form:
\\[\\omega_{i}=\\frac{||\\vec{x}_{i}-\\vec{x}_{N}||^{-k}}{\\sum_{j=1}^{3}||\\vec{x}_{j} -\\vec{x}_{N}||^{-k}},\\]
where \\(k\\) in practice is taken to be one or two (in our numerical code we choose \\(k=1\\)).
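Putting (15) - (19) and the weights together, the whole reconstruction step fits in a few lines. The sketch below (our own code and variable names) treats a scalar field on one triangle:

```python
# Weighted least-squares gradient on cell K, eqs. (15)-(19), solved by
# Cramer's rule. xs, ws: the three neighbour barycenters and their cell
# averages; x0, w0: barycenter and average of K; xN: reference node.
import numpy as np

def lsq_gradient(x0, w0, xs, ws, xN, k=1):
    r  = np.linalg.norm(xs - xN, axis=1) ** (-k)
    om = r / r.sum()                        # weights omega_i
    L1 = om * (xs[:, 0] - x0[0])            # omega_i * Delta x_i
    L2 = om * (xs[:, 1] - x0[1])            # omega_i * Delta y_i
    f  = om * (ws - w0)
    l11, l12, l22 = L1 @ L1, L1 @ L2, L2 @ L2   # Gram matrix entries
    det = l11 * l22 - l12**2
    return np.array([(l22 * (L1 @ f) - l12 * (L2 @ f)) / det,
                     (l11 * (L2 @ f) - l12 * (L1 @ f)) / det])
```

For an exactly linear field the three constraints are compatible, and the routine returns the exact gradient independently of the weights.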
### Numerical results: falling water column
A classical test for violent flows is the dam break problem. This problem can be simplified as follows: a water column is released at time \\(t=0\\) and falls under gravity. In addition, there is a step in the bottom. During its fall, the liquid hits this step and recirculation is generated behind the step. Then the liquid hits the vertical wall and climbs along this wall. The geometry and initial condition for this test case are depicted in Figure 3. Initially the velocity field is taken to be zero. The unstructured triangular grid used in this computation contained about 108000 control volumes (which in this case are triangles). The results of this simulation are presented in Figures 4 - 8. We emphasize here that there is no interface between the liquid and the gas. The dark mixture contains mostly liquid (90%) and the light mixture contains mostly gas (90%). An interesting quantity is the impact pressure along the wall. It is shown in Figure 9, where the maximal pressure on the right wall is plotted as a function of time, \\(t\\mapsto\\max_{(x,y)\\in\\{1\\}\\times[0,1]}p(x,y,t)\\).
Figure 2: Illustration for least-squares gradient reconstruction. We depict a triangle control volume with three adjacent neighbors.
## References
* [1] Ishii, M., 1975. _Thermo-Fluid Dynamic Theory of Two-Phase Flow_. Eyrolles, Paris.
* [2] Bullock, G. N., Obhrai, C., Peregrine, D. H., and Bredmose, H., 2007. \"Violent breaking wave impacts. part 1: Results from large-scale regular wave tests on vertical and sloping walls\". _Coastal Engineering,_**54**, pp. 602-617.
* [3] Bagnold, R., 1939. \"Interim report on wave pressure research\". _Proc. Inst. Civil Eng.,_**12**, pp. 201-26.
* [4] Wood, D., Peregrine, D., and Bruce, T., 2000. \"Wave impact on wall using pressure-impulse theory. i. trapped air\". _Journal of Waterway, Port, Coastal and Ocean Engineering,_**126**(4), pp. 182-190.
* [5] Peregrine, D., and Thais, L., 1996. \"The effect of entrained air in violent water impacts\". _J. Fluid Mech.,_**325**, pp. 377-97.
* [6] Bullock, G., Crawford, A., Hewson, P., Walkden, M., and Bird, P., 2001. \"The influence of air and scale on wave impact pressures\". _Coastal Engineering,_**42**, pp. 291-312.
* [7] Peregrine, D., Obhrai, C., Bullock, G., Muller, G., Wolters, G., and Bredmose, H., 2004. \"Violent water wave impacts and the role of air\". In ICCE 2004.
* [8] Dias, F., Dutykh, D., and Ghidaglia, J.-M., 2008. \"A two-fluid model for violent aerated flows\". _In preparation_.
* [9] Godunov, S., Zabrodine, A., Ivanov, M., Kraiko, A., and Prokopov, G., 1979. _Resolution numerique des problemes multidimensionnels de la dynamique des gaz._ Editions Mir, Moscow.
* [10] Harlow, F., and Amsden, A., 1971. _Fluid dynamics_. LANL Monograph LA-4700.
* [11] Dutykh, D., 2007. \"Mathematical modelling of tsunami waves\". PhD thesis, Ecole Normale Superieure de Cachan.
* [12] Godunov, S., 1959. \"A finite difference method for the numerical computation of discontinuous solutions of the equations of fluid dynamics\". _Mat. Sb.,_**47**, pp. 271-290.
* [13] Ghidaglia, J.-M., Kumbaro, A., and Coq, G. L., 2001. \"On the numerical solution to two fluid models via cell centered finite volume method\". _Eur. J. Mech. B/Fluids,_**20**, pp. 841-867.
* [14] Shu, C.-W., 1988. \"Total-variation-diminishing time discretizations\". _SIAM J. Sci. Statist. Comput.,_**9**, pp. 1073-1084.
* [15] Gottlieb, S., Shu, C.-W., and Tadmor, E., 2001. \"Strong stability-preserving high-order time discretization methods\". _SIAM Review,_**43**, pp. 89-112.
* [16] Osher, S., 1984. \"Riemann solvers, the entropy condition, and difference approximations\". _SIAM J. Numer. Anal.,_**21(2)**, pp. 217-235.
* [17] Peterson, T., 1991. \"A note on the convergence of the discontinuous Galerkin method for a scalar hyperbolic equation\". _SIAM J. Numer. Anal.,_**28(1)**, pp. 133-140.
* [18] Kolgan, N., 1972. \"Application of the minimum-derivative principle in the construction of finite-difference schemes for numerical analysis of discontinuous solutions in gas dynamics\". _Uchenye Zapiski TsaGI [Sci. Notes Central Inst. Aerodyn],_**3(6)**, pp. 68-77.
* [19] Kolgan, N., 1975. \"Finite-difference schemes for computation of three dimensional solutions of gas dynamics and calculation of a flow over a body under an angle of attack\". _Uchenye Zapiski TsaGI [Sci. Notes Central Inst. Aerodyn],_**6(2)**, pp. 1-6.
* [20] van Leer, B., 1979. "Towards the ultimate conservative difference scheme V: a second order sequel to Godunov's method". _J. Comput. Phys.,_**32**, pp. 101-136.
* [21] Barth, T., and Jespersen, D., 1989. "The design and application of upwind schemes on unstructured meshes". _AIAA Paper_**89-0366**.
Figure 10: Maximal pressure on the right wall as a function of time. Light gas.
Figure 9: Maximal pressure on the right wall. Heavy gas case. | _The purpose of this communication is to discuss the simulation of a free surface compressible flow between two fluids, typically air and water. We use a two fluid model with the same velocity, pressure and temperature for both phases. In such a numerical model, the free surface becomes a thin three dimensional zone. The present method has at least three advantages: (i) the free-surface treatment is completely implicit; (ii) it can naturally handle wave breaking and other topological changes in the flow; (iii) one can easily vary the Equations of State (EOS) of each fluid (in principle, one can even consider tabulated EOS). Moreover, our model is unconditionally hyperbolic for reasonable EOS._
| Give a concise overview of the text below.
arxiv-format/0802_3411v1.md | # Detecting Planets around Compact Binaries with Gravitational Wave Detectors in Space
Naoki Seto
National Astronomical Observatory, 2-21-1 Osawa, Mitaka, Tokyo, 181-8588, Japan
## 1. Introduction
Since gravitational wave (GW) detectors have omnidirectional sensitivity, many sources can be simultaneously observed without adjusting detectors for individual ones. While this might look advantageous for astrophysical studies, it also has downsides. Depending on the number of GW sources, overlaps of signals in data streams of detectors become a significant problem, especially in the low-frequency regime probed by space GW detectors. For example, LISA (Bender et al. 1998) will detect \\(\\sim 3000\\) double white dwarf binaries above \\(\\sim 3\\)mHz (see _e.g._ Nelemans 2006, Ruiter et al. 2007). Without removing the foreground GWs made by these numerous binaries, it might be difficult to observe weak interesting signals, such as extreme-mass-ratio-inspiral (EMRI) events. In this respect, extensive efforts are being paid to numerically demonstrate how well both strong and weak signals can be analyzed, using mock LISA data (Arnaud et al. 2007).
In this paper, I study a method to search for planets (more generally sub-stellar companions) orbiting around ultra-compact binaries. The proposed approach is to observe binaries' wobble motions caused by the planets and imprinted as phase modulations of GW from the binaries. This approach is close to the eclipse timing method (see _e.g._ Deeg et al. 2008) to detect planets around binaries, and the underlying technique is similar to the planet search around pulsars with radio telescopes (Wolszczan & Frail 1992, see also Dhurandhar & Vecchio 2001). As the expected modulations due to planets are small, the ongoing numerical efforts for LISA have direct relevance to the prospects of the detection method proposed in this paper.
Here, I briefly discuss the significance of this method on extra-solar planetary science. In the last 15 years, its rapid progress has largely been led by theoretically unanticipated discoveries, such as those of the hot Jupiters (Mayor & Queloz 1995) or the pulsar planets (Wolszczan & Frail 1992). However, at present, observational studies for circum-binary planets are in a very preliminary stage (Udry et al. 2002, Muterspaugh et al. 2007, Deeg et al. 2008). In addition, impacts of stellar evolution processes including giant star phases or supernova explosions are still highly uncertain (see _e.g._ Villaver & Livio 2007, Silvotti et al. 2007 for recent studies). Since ultra-compact binaries such as double white dwarfs are end products of stellar evolution, the proposed method to search for planets around them would provide us with important clues to these unclear issues. While the probability of finding a planet around a compact binary is uncertain, the large numbers of available binaries (_e.g._ with LISA) are advantageous for various statistical analyses, such as estimation of mass distribution of planets by separating information of orbital inclination \\(\\sin i\\).
## 2. Phase Modulation by a planet
To begin with, I discuss GWs from a detached double white dwarf binary on a circular orbit without a planet (Takahashi & Seto 2002). I write its almost monochromatic waves around frequency \\(f_{gw}\\) as
\\[h_{0}(t)\\!=\\!A\\cos[2\\pi f_{gw}t\\!+\\!\\pi\\dot{f}_{gw}t^{2}+\\varphi_{0}+D_{E}(t)] \\equiv\\!A\\cos[\\varphi(t)], \\tag{1}\\]
where the second term (\\(\\propto\\dot{f}_{gw}\\)) represents the intrinsic frequency evolution with \\(\\dot{f}_{gw}T_{obs}\\ll f_{gw}\\) (\\(T_{obs}\\): observational time \\(\\lesssim 10\\)yr). The term \\(D_{E}(t)\\equiv 2\\pi f_{gw}R_{E}c^{-1}\\sin\\theta_{s}\\cos[\\phi(t)-\\phi_{s}]\\) represents the Doppler phase modulation due to the revolution of the detector around the Sun (\\(R_{E}=\\)1AU), with orbital phase \\(\\phi(t)=2\\pi(t/1\\)yr)\\(+\\,const\\). The angular parameters (\\(\\theta_{s},\\phi_{s}\\)) give the direction of the binary on the sky in the ecliptic coordinates. In eq.(1) I have neglected the amplitude modulation by rotation of the detector. To determine the direction of the binary, this effect is less important than the Doppler modulation \\(D_{E}(t)\\) at \\(f_{gw}\\gtrsim c/R_{E}\\sim 1\\)mHz (Takahashi & Seto 2002). In relation to this, I do not explicitly deal with the orientation parameters of binaries. This is just for simplicity. These parameters determine the polarization states of the waves.
The orientation-averaged amplitude of the waves is given as
\\[A = \\frac{8}{\\sqrt{5}}\\frac{G^{5/3}{\\cal M}^{5/3}\\pi^{2/3}f_{gw}^{2/3 }}{rc^{4}} \\tag{2}\\] \\[= 6.6\\times 10^{-23}\\left(\\frac{{\\cal M}}{0.45M_{\\odot}}\\right)^{5/3 }\\left(\\frac{f_{gw}}{3{\\rm mHz}}\\right)^{2/3}\\left(\\frac{r}{8.5{\\rm kpc}}\\right) ^{-1} \\tag{3}\\]
with the chirp mass \\({\\cal M}=M_{1}^{3/5}M_{2}^{3/5}(M_{1}+M_{2})^{-1/5}\\) (\\(M_{1}\\) and \\(M_{2}\\): two masses of the binary). In this equation, I put the chirp mass at \\({\\cal M}=0.45M_{\\odot}\\) (Farmer & Phinney 2003) and used the distance to the Galactic center \\(r=8.5\\)kpc as the typical distance to Galactic binaries. The matched filtering technique is an advantageous method for GW observation, and the signal-to-noise ratio of the binary is evaluated in the standard manner as
\\[SNR_{0} = \\frac{A\\sqrt{2T_{obs}}}{h_{f}} \\tag{4}\\] \\[= 138\\left(\\frac{A}{6.6\\times 10^{-23}}\\right)\\left(\\frac{h_{f}}{1. 2\\times 10^{-20}\\mbox{Hz}^{-1/2}}\\right)^{-1}\\left(\\frac{T_{obs}}{10\\mbox{yr}} \\right)^{1/2}\\]
with the LISA detector noise level \\(h_{f}\\), which stays within 15% of \\(1.2\\times 10^{-20}\\) Hz\\({}^{-1/2}\\) in the frequency regime 3mHz-10mHz relevant for the present analysis (Bender et al. 1998). Here I assumed that LISA has two independent data streams with identical noise spectra.
When the binary has a circum-binary planet with mass \\(M_{p}\\) and orbital frequency \\(f_{p}\\), the observed waveform \\(h_{M}(t)\\) has an additional phase shift \\(D_{p}(t)\\) due to the binary's wobble induced by the planet, and I put the waveform as \\(h_{M}(t)=A\\cos[\\varphi(t)+D_{p}(t)]\\). For a planet on a circular orbit, the phase shift is given by \\(D_{p}(t)=\\Psi_{p}\\cos\\varphi_{p}(t)\\) with the orbital phase \\(\\varphi_{p}(t)=2\\pi f_{p}t+\\varphi_{c0}\\) (\\(\\varphi_{c0}\\): phase constant) and the amplitude \\(\\Psi_{p}=(2\\pi G)^{1/3}c^{-1}f_{gw}f_{p}^{-2/3}M_{T}^{-2/3}M_{p}\\sin i\\) (\\(M_{T}=M_{1}+M_{2}\\): total mass of the binary, \\(i\\): inclination of the planet's orbit) or explicitly
\\[\\Psi_{p} = 0.054\\left(\\frac{M_{p}\\sin i}{3M_{J}}\\right)\\left(\\frac{M_{T}}{ 1.04M_{\\odot}}\\right)^{-2/3} \\tag{5}\\] \\[\\times\\left(\\frac{f_{gw}}{3\\mbox{mHz}}\\right)\\left(\\frac{f_{p}}{ 0.33\\mbox{yr}^{-1}}\\right)^{-2/3}.\\]
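As a quick arithmetic check of eq. (5), the following snippet (physical constants in SI units, our own code) reproduces the quoted amplitude:

```python
# Numerical check of the modulation amplitude Psi_p, eq. (5).
import numpy as np

G, c = 6.674e-11, 2.998e8               # SI units
Msun, MJ, yr = 1.989e30, 1.898e27, 3.156e7

f_gw, f_p = 3e-3, 0.33 / yr             # Hz
M_T, Mp_sini = 1.04 * Msun, 3 * MJ

Psi_p = (2*np.pi*G)**(1/3) / c * f_gw * f_p**(-2/3) * M_T**(-2/3) * Mp_sini
print(Psi_p)                            # ~0.055, matching eq. (5) to rounding
```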
For a system at a cosmological distance with redshift \\(z\\), the amplitude \\(\\Psi_{p}\\) is given by multiplying a factor \\((1+z)\\) to eq.(5) with the intrinsic (not redshifted) orbital frequency \\(f_{p}\\) and the observed (redshifted) GW frequency \\(f_{gw}\\). Note that the redshift \\(z\\) can be estimated from the observed luminosity distance (Schutz 1986). As I want to know the smallest mass \\(M_{p}\\sin i\\) detectable with GW observation and it is easier to find a planet with a larger amplitude \\(\\Psi_{p}\\), I hereafter assume \\(\\Psi_{p}\\ll 1\\). Then the modulated signal \\(h_{M}(t)\\) is expressed as
\\[h_{M}(t)=h_{0}(t)+h_{p+}(t)+h_{p-}(t)+O(\\Psi_{p}^{2}) \\tag{6}\\]
with two new components
\\[h_{p\\pm}(t) = -A\\Psi_{p}(\\sin[\\varphi(t)\\pm\\varphi_{p}(t)])/2 \\tag{7}\\] \\[= -A\\Psi_{p}(\\sin[2\\pi(f_{gw}\\pm f_{p})t+\\pi\\dot{f}_{gw}t^{2}+\\varphi_{0}\\pm\\varphi_{c0}])/2.\\]
A simple interpretation can be made for eq.(6). In addition to the original signal \\(h_{0}(t)\\) given in eq.(1), the motion of the planet produces two replicas \\(h_{p\\pm}\\) (smaller by a factor of \\(\\Psi_{p}/2\\) than \\(h_{0}(t)\\)) at the nearby frequencies \\(f_{gw}\\pm f_{p}\\gg f_{p}\\). Because of the coupling with the binary's rotation, the orbital frequency \\(f_{p}\\) of the planet is now up-converted into a band that might be observed with GW detectors. Here, it is important to note that the gravitational wave signal of each replica \\(h_{p+}\\) or \\(h_{p-}\\) itself is described by a nearly monochromatic waveform of a standard Galactic binary (including dependencies on angular parameters). This fact is important for data analysis, as seen later. In this paper I only study a planet on a circular orbit, but this analysis can be straightforwardly extended to multiple planets or eccentric orbits that produce other small replicas at frequencies \\(f_{gw}\\pm nf_{p}\\) (\\(n=2,3,\\cdots\\)) in addition to \\(n=1\\) (Dhurandhar & Vecchio 2001).
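The triplet structure of eq. (6) is easy to verify numerically: a small phase modulation of a carrier produces two sidebands of relative amplitude \\(\\Psi_{p}/2\\). In the toy example below the frequencies are rescaled so that a short FFT resolves the lines; they are not the physical values.

```python
# The phase-modulated signal splits into a carrier at f_gw and two
# replicas at f_gw +/- f_p with amplitude Psi_p/2, cf. eqs. (6)-(7).
import numpy as np

f_gw, f_p, Psi_p = 50.0, 3.0, 0.05          # toy units, Psi_p << 1
t = np.arange(0, 20, 1e-3)
h = np.cos(2*np.pi*f_gw*t + Psi_p*np.cos(2*np.pi*f_p*t))

spec = 2 * np.abs(np.fft.rfft(h)) / len(t)
freq = np.fft.rfftfreq(len(t), 1e-3)
for f0 in (f_gw - f_p, f_gw, f_gw + f_p):
    print(f0, spec[np.argmin(np.abs(freq - f0))])   # ~0.025, ~1.0, ~0.025
```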
Based on this simple interpretation of the modulated signal \\(h_{M}(t)\\), I can naively define the signal-to-noise ratio for the two small replicas \\(h_{p\\pm}\\) by
\\[X_{p}=\\frac{A\\Psi_{p}}{h_{f}}\\sqrt{T_{obs}}=5.3\\left(\\frac{\\Psi_{p}}{0.054} \\right)\\left(\\frac{SNR_{0}}{138}\\right) \\tag{8}\\]
as for the original one \\(h_{0}\\) given in eq.(3). If the parameters \\(\\mathbf{\\alpha}_{O}=(A,f_{gw},\\dot{f_{gw}},\\varphi_{0},\\theta_{s}, \\phi_{s})\\) are well determined with the strong original one \\(h_{0}\\), they can be used to estimate the three additional parameters \\(\\mathbf{\\alpha}_{N}=(f_{p},\\Psi_{p},\\varphi_{c0})\\) for the small replicas \\(h_{p\\pm}\\). The expected observational errors for the three new parameters \\(\\mathbf{\\alpha}_{N}\\) are evaluated by a \\(3\\times 3\\) Fisher matrix (see _e.g._ Takahashi & Seto 2002), and I obtain the asymptotic results at \\(T_{obs}f_{p}\\gg 1\\) as
\\[\\left(\\frac{\\Delta\\Psi_{p}}{\\Psi_{p}}\\right)_{3}\\equiv X_{p}^{-1},\\;\\;\\;( \\Delta f_{p})_{3}\\equiv\\sqrt{3}\\pi^{-1}T_{obs}^{-1}X_{p}^{-1}, \\tag{9}\\]
where the suffix \"3\" represents fitting only three new parameters.
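For the fiducial system, eqs. (8) and (9) evaluate as follows (our own arithmetic):

```python
# Naive SNR of the replicas and the three-parameter errors, eqs. (8)-(9).
import numpy as np

A, h_f, Psi_p = 6.6e-23, 1.2e-20, 0.054
yr = 3.156e7
T_obs = 10 * yr                                 # 10 yr in s

X_p = A * Psi_p * np.sqrt(T_obs) / h_f
print(X_p)                                      # ~5.3
print(1 / X_p)                                  # (dPsi_p/Psi_p)_3 ~ 0.19
print(np.sqrt(3) / (np.pi * T_obs * X_p) * yr)  # (df_p)_3 ~ 0.01 yr^-1
```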
The actual observational situation is more complicated. For example, the frequency resolution is given by \\(\\sim T_{obs}^{-1}\\), and with a short observational period \\(T_{obs}\\), there must be a significant interference between the weak signals \\(h_{p\\pm}(t)\\) and the strong one \\(h_{0}(t)\\). Furthermore, the triplet \\((h_{0},h_{p\\pm})\\) needs to be identified in the presence of thousands of other binaries. To study the interference within the triplet, I first discuss signal analysis with only a single binary-planet system and detector noises. I numerically evaluated the observational errors expected for simultaneous fitting of all the nine parameters \\(\\mathbf{\\alpha}_{O}\\) and \\(\\mathbf{\\alpha}_{N}\\) listed above, and obtained the magnitudes of the errors \\(\\left(\\Delta\\Psi_{p}/\\Psi_{p}\\right)_{S}\\) and \\((\\Delta f_{p})_{S}\\) for various sets of input parameters (the suffix "S" represents simultaneous fitting). I found that, for a given observational time \\(T_{obs}\\gtrsim 1\\)yr, these results depend strongly on the orbital frequency \\(f_{p}\\), weakly on the phase \\(\\varphi_{c0}\\), and negligibly on other parameters. As shown in figure 1, the errors \\(\\left(\\Delta\\Psi_{p}/\\Psi_{p}\\right)_{S}\\) and \\((\\Delta f_{p})_{S}\\) become much larger than the previous simple estimations \\(\\left(\\Delta\\Psi_{p}/\\Psi_{p}\\right)_{3}\\) and \\((\\Delta f_{p})_{3}\\) for frequencies
\\[f_{p}T_{obs}\\lesssim 2\\;\\;\\;\\mbox{or}\\;\\;\\;|f_{p}-1\\,{\\rm yr^{-1}}|T_{obs}\\lesssim 1. \\tag{10}\\]
In the latter regime, the two phase modulations \\(D_{E}(t)\\) and \\(D_{p}(t)\\) become highly degenerate. Outside these two bands, the replicas \\(h_{p\\pm}\\) are well separated from the original one \\(h_{0}\\) in frequency space, and the simple estimation in eq.(9) becomes reliable. In these preferable frequency regimes, the mass \\(M_{p}\\sin i\\) can be estimated within 10% error (at the same time, the naive SNR \\(X_{p}\\gtrsim 10\\)) for a planet with \\(M_{p}\\sin i\\geq 5.4M_{J}(f_{gw}/3\\mbox{mHz})^{-5/3}(f_{p}/0.33\\mbox{yr}^{-1})^{2/3}\\). Here I used eqs. (5), (8) and (9), and the following typical parameters: \\(r=8.5\\)kpc, \\(M_{1}=M_{2}=0.52M_{\\odot}\\), \\(T_{obs}=10\\)yr and \\(h_{f}=1.2\\times 10^{-20}\\mbox{Hz}^{-1/2}\\).
Now I study circum-binary planet searches among gravitational waves from other binaries. For simplicity, I pick up a binary-planet system at \\(f_{gw}\\geq 3\\)mHz and outside the interfering frequency regimes (10). I consider the following two-step data analysis; (i) detecting the individual signals \\(h_{0}\\), \\(h_{p+}\\) and \\(h_{p-}\\), and (ii) identifying a triplet combination caused by a planet. The frequency distribution of Galactic white dwarf binaries is modeled as \\(dN/df=0.08(N_{B}/3000)(f_{gw}/3\\mbox{mHz})^{-11/3}\\) [yr] with the total number \\(N_{B}\\sim 3000\\) at \\(f_{gw}\\geq 3\\)mHz (see _e.g._ Bender et al. 1998). A similar density is expected for Galactic AM CVn stars (Nelemans 2006). For an observational period \\(T_{obs}\\sim 10\\)yr, the occupation number \\(T_{obs}^{-1}\\)\\(dN/df\\) of binaries per frequency bin will be much smaller than 1. For a planet search, it is crucial to detect the replicas \\(h_{p\\pm}\\), whose signals are weak but individually fitted with standard Galactic binary waveforms. Identification of weak binary signals is currently one of the most important topics in LISA data analysis. While the situation is somewhat different, Crowder & Cornish (2007) demonstrated that many (but not all) binaries can be detected down to \\(SNR\\sim 7\\) (corresponding to \\(X_{p}/\\sqrt{2}\\sim 7\\) for each replica) even under a more crowded condition _i.e._ a larger occupation number \\(T_{obs}^{-1}\\,dN/df\\) (see their §4.2 and §4.3). They also showed that the Fisher matrix analysis provides a reasonable prediction for parameter estimation errors. These results are very encouraging for a planet search that might in turn provide another motivation for ongoing activities in LISA data analysis.
Next I discuss an outline for identifying a triplet signal from a binary-planet system. The first task is to search for a potential pair \\(h_{0}\\)-\\(h_{p+}\\), from a list of resolved binaries, using the fact that the pair should have the same direction (and orientation) parameters with similar frequencies. The second task is to confirm the existence of the other replica \\(h_{p-}\\), whose parameters can be estimated solely from the \\(h_{0}\\)-\\(h_{p+}\\) pair. Considering the expected binary density \\(dN/df\\), this discrimination method will work well. In this manner the triplet can be identified among other binaries with a small extension of the standard Galactic binary search. Then a coherent analysis can be performed for the modulated signal \\(h_{M}\\) to improve the quality of parameter estimation.
For unambiguous detections of planets, other effects that produce similar waveforms should be closely examined. From the arguments about the triplet structure, it is expected that the phase modulation \\(D_{p}(t)\\) can be easily separated from other small modulations at higher frequencies \\(\\gg f_{p}\\), which also generate small replicas but with larger frequency differences. Meanwhile, because of the geometrical nature of gravitational wave generation, an observed waveform depends on the angular parameters describing the configuration of a binary. For example, it is shown that, for an eccentric binary in the LISA band, a triplet waveform can be produced by the periastron advance with a frequency difference \\(O(1{\\rm yr}^{-1})\\) (Seto 2001, Willems et al. 2007). But that triplet structure is different from the planet case. Precession of the orbital plane of a binary (by the spin-orbit coupling) can also generate a triplet waveform, and might be important for double neutron stars with BBO/DECIGO. But it has different amplitude patterns (or equivalently polarization states), and has a larger frequency difference.
In figure 2, I plot the ranges of detectable planets in the semimajor axis-mass plane. Here I used the relation \\(a=1(f_{p}/1{\\rm yr}^{-1})^{-2/3}(M_{T}/1M_{\\odot})^{1/3}{\\rm AU}\\) between the orbital frequency \\(f_{p}\\) and the semimajor axis \\(a\\). The planets around \\(f_{p}=1{\\rm yr}^{-1}\\) (corresponding to 1.01AU for \\(M_{T}=1.04M_{\\odot}\\)) are excluded due to the degeneracy discussed before.
## 3. Discussions
The follow-on missions to LISA, such as the Big Bang Observer (BBO) (Phinney 2003) or the Decihertz Interferometer Gravitational Wave Observatory (DECIGO) (Seto et al. 2001, Kawamura et al. 2006), were proposed primarily to detect the stochastic GW background from inflation in the band \\(f_{min}\\lesssim f\\lesssim f_{max}\\) with \\(f_{min}\\sim 0.2{\\rm Hz}\\) and \\(f_{max}\\sim 1{\\rm Hz}\\). In the lower frequency regime \\(f\\lesssim f_{min}\\), the foreground GWs from extragalactic white dwarf binaries would fundamentally limit the sensitivity of GW observation (Farmer & Phinney 2003). In contrast, at \\(f\\gtrsim f_{min}\\), a deep window for GW observation is expected to open. To this end, it is crucial to resolve and remove the foreground GWs generated by cosmological double neutron star binaries (NS+NSs) whose estimated merger rate
Figure 1.— Planet search sensitivity around white dwarf binaries with LISA. Estimated observational errors are presented for the orbital frequency of the planet (\\((\\Delta f_{p})_{S}\\): long-dashed curves) and for the amplitude of the GW phase modulation induced by the planet (\\((\\Delta\\Psi_{p}/\\Psi_{p})_{S}\\): solid curves). These errors are normalized by their asymptotic values \\(((\\Delta\\Psi_{p}/\\Psi_{p})_{3},(\\Delta f_{p})_{3})\\) derived with the simple interpretation of the signal modulation (see eq.(9)). Thick curves are for integration period \\(T_{obs}=3{\\rm yr}\\) and thin ones for \\(T_{obs}=10{\\rm yr}\\). It is difficult to find a planet with a low orbital frequency \\(f_{p}T_{obs}\\lesssim 2\\) due to the poor frequency resolution. The two phase modulations \\(D_{E}(t)\\) and \\(D_{p}(t)\\), induced by the motions of LISA and the planet, degenerate at \\(|f_{p}-1\\,{\\rm yr^{-1}}|T_{obs}\\lesssim 1\\). Outside these two bands the new signals \\(h_{p\\pm}(t)\\) from the planet can be well separated from the strong original one \\(h_{0}(t)\\), and the planet search works efficiently. These results depend very weakly or negligibly on source parameters other than \\(f_{p}\\).
Figure 2.— The ranges of detectable planets for LISA and DECIGO/BBO for typical sets of parameters. In the shaded regions, the mass \\(M_{p}\\sin i\\) can be estimated within 10% error. Around 1AU (corresponding to \\(f_{p}=1{\\rm yr}^{-1}\\)), the performance of LISA is degraded due to the degeneracy of the two phase shifts induced by the orbital motions of the planet and LISA.
is \\(\\sim 3\\times 10^{5}\\)yr\\({}^{-1}\\). In addition to NS+NSs, there might be double black hole binaries or black hole-neutron star binaries, though their merger rates are highly uncertain. Here I provide a brief sketch of planet searches around cosmological NS+NSs with the follow-on missions. I fix the masses of NS+NSs at \\(M_{1}=M_{2}=1.4M_{\\odot}\\).
In the observational band [\\(f_{min},f_{max}\\)], a NS+NS is in its final stage before merger. The time left before the merger is \\(1(f_{gw}/0.2{\\rm Hz})^{-8/3}(1+z)^{-5/3}\\) yr, which severely limits the observable orbital frequency \\(f_{p}\\) of a planet. Using the restricted 1.5-order post-Newtonian waveform (Cutler & Harms 2006), I evaluated the expected observational errors in the scenario that all the parameters are simultaneously fitted, including the two phase shifts \\(D_{E}(t)\\) and \\(D_{p}(t)\\). For various sets of input parameters, I examined the observational error of the amplitude \\(\\Psi_{p1}\\equiv\\Psi_{p}|_{f_{gw}=1{\\rm Hz}}\\), and found that by observing at least three orbital cycles (namely \\(f_{p}\\gtrsim f_{th}\\equiv 3(f_{min}/0.2\\,{\\rm Hz})^{8/3}(1+z)^{8/3}\\)yr\\({}^{-1}\\)) the relative error is given as
\\[\\left(\\frac{\\Delta\\Psi_{p1}}{\\Psi_{p1}}\\right) \\sim \\frac{\\Delta(M_{p}\\sin i)}{M_{p}\\sin i} \\tag{11}\\] \\[\\sim (1+z)^{-1}\\left(\\frac{2.3}{SNR_{0}}\\right)\\left(\\frac{M_{p}\\sin i }{3M_{J}}\\right)^{-1}\\left(\\frac{f_{p}}{3{\\rm yr}^{-1}}\\right)^{2/3}\\]
with the signal-to-noise ratio \\(SNR_{0}\\) for the observed NS+NS.
Here I assumed a nearly flat noise spectrum (in units of Hz\\({}^{-1/2}\\)) in the band [\\(f_{min},f_{max}\\)] (Phinney 2003, Kawamura et al. 2006). For a given orbital frequency \\(f_{p}\\) and signal-to-noise ratio \\(SNR_{0}\\), the mass resolution is better than the previous results for LISA. This is because of the higher frequencies \\(f_{gw}\\) used in the present case. For \\(f_{p}\\lesssim f_{th}\\) (less than three orbital cycles in the observational band), the observational error \\(\\Delta(M_{p}\\sin i)/(M_{p}\\sin i)\\) becomes significantly larger than eq.(11).
Due to the limitation of the computational power estimated to be available at the time of the follow-on missions \\(\\sim\\)2025, the minimal noise level of detectors required to remove NS+NSs corresponds to \\(SNR_{0}\\sim 100\\) for NS+NSs at \\(z=1\\) (Cutler & Harms 2006). For \\(z=1\\) the critical orbital frequency becomes \\(f_{th}=19{\\rm yr}^{-1}\\) (semimajor axis \\(\\sim 0.23{\\rm AU}\\) for \\(M_{T}=2.8M_{\\odot}\\)), and the mass \\(M_{p}\\sin i\\) can be measured within 10% error for a planet with \\(M_{p}\\sin i>1.2(f_{p}/19{\\rm yr}^{-1})^{2/3}(SNR_{0}/100)^{-1}M_{J}\\). The range of detectable planets is shown in figure 2. If detected at \\(z\\sim 1\\), the planet is \\(\\sim 10^{6}\\) times as distant as those currently found in our Galaxy. Note that the estimated merger rate of NS+NSs around \\(z\\sim 1\\) is \\(\\sim 10^{5}\\)yr\\({}^{-1}\\). The bottom edge of the shaded region moves to (0.39AU, 0.52\\(M_{J}\\)) for NS+NSs at \\(z=0.5\\).
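The quoted numbers follow directly from the expressions above; a minimal check of our own:

```python
# Critical planet frequency and 10%-error mass limit, NS+NS at z = 1.
z, f_min_Hz, SNR0 = 1.0, 0.2, 100.0

f_th = 3 * (f_min_Hz / 0.2)**(8/3) * (1 + z)**(8/3)   # yr^-1
print(f_th)                                           # ~19 yr^-1

# Invert eq. (11) for the mass giving a 10% relative error at f_p = f_th:
Mp_sini = 3 * (2.3 / SNR0) / (0.10 * (1 + z)) * (f_th / 3)**(2/3)
print(Mp_sini)                                        # ~1.2, in units of M_J
```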
I would like to thank an anonymous referee for helpful comments to improve the draft.
## References
* Arnaud et al. (2007) Arnaud, K. A., et al. 2007, Classical and Quantum Gravity, 24, 529
* Bender et al. (1998) Bender P. L., et al. 1998, LISA prephase A report
* Cutler & Harms (2006) Cutler, C., & Harms, J. 2006, Phys. Rev. D, 73, 042001
* Deeg et al. (2008) Deeg, H. J., et al. 2008, ArXiv e-prints, 801, arXiv:0801.2186
* Farmer & Phinney (2003) Farmer, A. J., & Phinney, E. S. 2003, MNRAS, 346, 1197
* Dhurandhar & Vecchio (2001) Dhurandhar, S. V., & Vecchio, A. 2001, Phys. Rev. D, 63, 122001
* Kawamura et al. (2006) Kawamura, S., et al. 2006, Classical and Quantum Gravity, 23, 125
* Mayor & Queloz (1995) Mayor, M., & Queloz, D. 1995, Nature, 378, 355
* Muterspaugh et al. (2007) Muterspaugh, M. W., et al. 2007, ArXiv e-prints, 705, arXiv:0705.3072
* Nelemans (2006) Nelemans, G. 2006, _AIP Conference Proceedings_, **873**, 397
* Phinney et al. (2003) Phinney E. S., et al. 2003, The Big Bang Observer, NASA Mission Concept Study
* Ruiter et al. (2007) Ruiter, A. J., et al. 2007, ArXiv e-prints, 705, arXiv:0705.3272
* Schutz (1986) Schutz, B. F. 1986, Nature, 323, 310
* Seto et al. (2001) Seto, N., Kawamura, S., & Nakamura, T. 2001, Physical Review Letters, 87, 221103
* Seto (2001) Seto, N. 2001, Physical Review Letters, 87, 251101
* Takahashi & Seto (2002) Takahashi, R., & Seto, N. 2002, ApJ, 575, 1030
* Udry et al. (2002) Udry, S., et al. 2002, A&A, 390, 267
* Villaver & Livio (2007) Villaver, E., & Livio, M. 2007, ApJ, 661, 1192
* Willems et al. (2007) Willems, B., Vecchio, A., & Kalogera, V. 2007, ArXiv e-prints, 706, arXiv:0706.3700
* Wolszczan & Frail (1992) Wolszczan, A., & Frail, D. A. 1992, Nature, 355, 145 | I propose a method to detect planets around compact binaries that are strong sources of gravitational radiation. This approach is to measure gravitational-wave phase modulations induced by the planets, and its prospect is studied with a Fisher matrix analysis. I find that, using the Laser Interferometer Space Antenna (LISA), planets can be searched for around \\(\\sim 3000\\) Galactic double white dwarfs with detection limit \\(\\gtrsim 4M_{J}\\) (\\(M_{J}\\sim 2\\times 10^{30}\\)g: the Jupiter mass). With its follow-on missions, planets with mass \\(\\gtrsim 1M_{J}\\) might be detected around double neutron stars even at cosmological distances \\(z\\sim 1\\). In this manner, gravitational wave observation has potential to make interesting contributions to extra-solar planetary science.
Subject headings: gravitational waves -- binaries: close -- planetary systems | Give a concise overview of the text below.
arxiv-format/0803_1571v1.md | Equation of state for strongly interacting matter: collective effects, Landau damping and predictions for LHC
R. Schulze, M. Bluhm, B. Kampfer
Forschungszentrum Dresden-Rossendorf, PF 510119, 01314 Dresden, Germany
Institut fur Theoretische Physik, TU Dresden, 01062 Dresden, Germany
## 1 Introduction
The equation of state (EOS) for strongly interacting matter is needed as input for hydrodynamical calculations of the expanding fireball created in relativistic heavy-ion collisions (HIC). Theoretical predictions (cf. [1] for a survey) and recent experimental results [2, 3, 4, 5] point to a transition from confined hadronic matter to the quark-gluon plasma (QGP), a new deconfined state which is governed by the fundamental quark and gluon degrees of freedom. That means a usable EOS has to cover both states.
Upcoming HIC experiments at LHC, mainly to be investigated by ALICE, will probe the high-temperature region at small net baryon densities, while the future HIC experiments at FAIR, to be addressed by CBM, are aimed at exploring the region of high net baryon densities. Therefore, the EOS in a wide region of the phase diagram is needed. Numerical simulations of QCD on the lattice are still constrained to small net baryon densities. Consequently, there is a need for phenomenological models which allow predictions in regions of the phase diagram not yet accessible by lattice QCD calculations. Here we discuss a phenomenological model which relies on the picture of quarks and gluons as non-interacting quasiparticle excitations. The employed quasiparticle model (QPM) goes back to [6, 7, 8, 9, 10], while recent work has been presented in [11, 12, 13, 14, 15, 16]. Alternative formulations have been given, e.g., in [17, 18].
## 2 The quasiparticle model
The description of strongly interacting matter is governed by QCD. Thus the foundation of any model has to be the quantized Lagrangian \\(\\mathcal{L}_{\\mathrm{QCD}}\\) and the dressed propagators and full self-energies obtained from it. In the framework of finite-temperature field theory a thermodynamical potential \\(\\Omega\\) can be derived using the Cornwall-Jackiw-Tomboulis (CJT) formalism [19]. It employs the effective action
\\[\\Gamma[D,S] = I\\,-\\,\\frac{1}{2}\\left\\{\\mathrm{Tr}\\left[\\ln D^{-1}\\right]+ \\mathrm{Tr}\\left[D_{0}^{-1}D-1\\right]\\right\\} \\tag{1}\\] \\[+\\,\\left\\{\\mathrm{Tr}\\left[\\ln S^{-1}\\right]+\\mathrm{Tr}\\left[S_ {0}^{-1}S-1\\right]\\right\\}\\,+\\,\\Gamma_{2}[D,S],\\]
where \\(I\\) is the classical action containing \\(\\mathcal{L}_{\\mathrm{QCD}}\\), and \\(D\\) and \\(S\\) are the dressed gluon and quark propagators (the subscript 0 denotes the respective free equivalents). The functional \\(\\Gamma_{2}\\) represents the sum over all two-particle irreducible skeleton graphs of the theory, i.e. all those graphs without external lines that do not fall apart upon cutting of two propagators.
For translationally invariant systems without broken symmetries the expression (1) simplifies and gives the thermodynamic potential at finite temperature \\(T\\)[16]
\\[\\frac{\\Omega}{V} = \\mathrm{tr}\\!\\!\\int\\!\\!\\frac{\\mathrm{d}^{4}k}{(2\\pi)^{4}}n_{\\mathrm {B}}(\\omega)\\,\\mathrm{Im}\\!\\left(\\ln D^{-1}-\\Pi D\\right)\\] \\[+\\,2\\,\\mathrm{tr}\\!\\!\\int\\!\\!\\frac{\\mathrm{d}^{4}k}{(2\\pi)^{4}}n _{\\mathrm{F}}(\\omega)\\,\\mathrm{Im}\\!\\left(\\ln S^{-1}-\\Sigma S\\right)-\\frac{T} {V}\\Gamma_{2},\\]
where \\(\\Pi\\) and \\(\\Sigma\\) are the full self-energies of gluons and quarks respectively. Truncating \\(\\Gamma_{2}\\) at 2-loop order leaving
\\[\\Gamma_{2}=\\frac{1}{12}\\]
directly leads to the well-known 1-loop quark and gluon self-energies [20]. Assuming additionally small external momenta or, equivalently, hard thermal loops ensures gauge invariance. These approximations are used in what follows.
An important quantity of the strong interaction and consequently also of our model is the running coupling \\(g^{2}\\) which depends on the ratio of renormalization scale and QCD scale parameters. In order to phenomenologically accommodate higher-order and even non-perturbative effects of QCD we replace the former at \\(\\mu=0\\) (\\(\\mu\\) is the quark chemical potential) by the first Matsubara frequency \\(i\\pi T\\) and shift the temperature \\(T\\) by a parameter \\(T_{s}\\). The new quantity corresponds to an effective coupling and is denoted by \\(G^{2}\\) (see [14] for details):
\\[G^{2}(T\\geq T_{c},\\mu=0)=\\frac{16\\pi^{2}}{\\beta_{0}\\ln x^{2}}\\left(1-\\frac{4 \\beta_{1}}{\\beta_{0}^{2}}\\frac{\\ln\\left[\\ln x^{2}\\right]}{\\ln x^{2}}\\right), \\tag{4}\\]
where \\(x\\equiv\\lambda(T-T_{s})/T_{c}\\) and \\(\\beta_{0}\\), \\(\\beta_{1}\\) are the usual coefficients of the QCD beta function.

At zero chemical potential \\(\\mu\\), both damping terms and the plasmon and plasmino entropies give only small contributions. Omitting those, an effective quasiparticle model (eQP) can be formulated to simplify the description/prediction of experiments with negligibly small net baryon density. This is a good approximation, e.g., for Au+Au collisions at RHIC or Pb+Pb collisions at LHC. Also note that the eQP uses the asymptotic approximation of the dispersion relation near the light cone, \\(\\omega^{2}=k^{2}+m_{i,\\infty}^{2}\\), with \\(m_{i,\\infty}\\) as asymptotic quasiparticle masses which depend on \\(T\\) and \\(\\mu\\) both explicitly and implicitly (via \\(G^{2}\\)).
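For orientation, eq. (4) is straightforward to evaluate. In the sketch below the fit values from Fig. 1 are used; the beta coefficients are left as inputs, because their normalization must be taken consistently from [14] (the one-loop choice quoted in the comment is an assumption on our part):

```python
# Effective coupling G^2(T, mu = 0) of eq. (4). beta0, beta1 must follow
# the normalization of [14]; e.g. beta0 = (11*Nc - 2*Nf)/3 at one loop.
import numpy as np

def G2(T_over_Tc, beta0, beta1, Ts_over_Tc=-0.73, lam=6.1):
    x = lam * (T_over_Tc - Ts_over_Tc)       # x = lambda (T - T_s) / T_c
    L = np.log(x**2)
    return 16*np.pi**2 / (beta0 * L) * (1 - 4*beta1/beta0**2 * np.log(L)/L)
```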
## 3 EOS for \\(\\mu\\approx 0\\)
In order to obtain sensible predictions, the QPM is adjusted to lattice data. Exploring the flexibility of the QPM introduced by the effective coupling, we find excellent agreement of both the full QPM and the effective QPM with lattice data, see Figs. 1 and 2. In light of the substantially more involved nature of the full QPM it is noteworthy that both models give essentially the same good description of available lattice data. Fig. 1 also shows that plasmons and plasminos give negative contributions to the entropy
Figure 1: Adjustment of both full (solid line, \\(T_{s}=-0.73T_{c}\\) and \\(\\lambda=6.1\\)) and effective (dashes on solid line, \\(T_{s}=-0.75T_{c}\\) and \\(\\lambda=6.3\\)) QPMs to lattice data for the entropy density \\(s/T^{3}\\) from [21] (continuum extrapolated by a factor of 0.96). Contributions to the full QPM are given by the dashed (transverse gluons, quarks, antiquarks), dotted (plasminos) and dash-dotted (plasmons) lines.
Figure 2: Left panel: Adjustments of the scaled eQP pressure \\(p/T^{4}\\) to various lattice calculations (Ref. [22] - squares, Ref. [23] - diamonds and triangles, Ref. [24] - circles). Right panel: Corresponding EOS in the form of pressure \\(p\\) as a function of energy density \\(e\\). For details see [15].
density which is due to correlations introduced into the quark-gluon system by those collective modes.
It is remarkable that, even though there are still substantial differences between various lattice QCD results, due e.g. to the used actions, lattice sizes and continuum extrapolations, the QPM EOS in the form \\(p(e)\\) is unique above a threshold of about \\(e\\gtrsim 4\\,\\mathrm{GeV/fm^{3}}\\) (Fig. 2). Some uncertainty remains in the region of lower energy densities. We suppose that the hadron resonance gas is the correct description of strongly interacting matter in the confined region. In order to examine the impact of this uncertainty we investigate two extreme cases: (i) a smooth crossover (labeled "QPM 4.0"), and (ii) a first order transition between the resonance gas and the deconfined region of our QPM (labeled "bag model").
To do so, we use a relativistic hydrodynamic model to simulate Au+Au collisions at RHIC energies. Fig. 3 shows the resulting transverse momentum spectra and the azimuthal anisotropy coefficient \\(v_{2}\\) of the baryons \\(\\Lambda\\), \\(\\Xi\\) and \\(\\Omega\\) for an initial state characterized by \\(s_{0}=110\\,\\mathrm{fm}^{-3}\\) and initial proper time \\(\\tau_{0}=0.6\\,\\mathrm{fm}/c\\). The latter is compared to actual experimental results [25], showing good agreement for both the crossover and the first order phase transition in the transverse momentum region \\(p_{\\mathrm{T}}\\lesssim 1.8\\,\\mathrm{GeV}\\) considered relevant for hydrodynamics. Above this region, a simple crossover from the resonance gas to the QPM clearly provides a better description of the measured data.
To consider Pb+Pb collisions at LHC, a conservative guess can give first indications of possible differences to RHIC. LHC particle yields are assumed to be three times larger than at RHIC, pointing to \\(s_{0}^{\\mathrm{LHC}}=3s_{0}^{\\mathrm{RHIC}}=330\\,\\mathrm{fm}^{-3}\\) (\\(T_{0}=515\\,\\mathrm{MeV}\\)) under the assumption of \\(\\tau_{0}=0.6\\,\\mathrm{fm}/c\\). The higher initial temperature at LHC, leading to a longer fireball lifetime, suggests a stronger radial flow as well as a more
Figure 4: Transverse momentum spectra (left) and azimuthal distribution of emitted hadrons \\(v_{2}\\) (right) for the same particles as in Figure 3 as predicted for Pb+Pb collisions at LHC. For details see [15].
Figure 3: Transverse momentum spectra (left) and azimuthal distribution \\(v_{2}\\) of emitted hadrons (right) for some strange baryons. Symbols represent experimental data [25] for Au+Au collisions at RHIC. For details see [15].
equilibrated azimuthal distribution of emitted hadrons. Indeed, the predicted \\(p_{\\rm T}\\) spectrum for \\(\\Lambda\\), \\(\\Xi\\) and \\(\\Omega\\) is considerably flatter than at RHIC, while \\(v_{2}\\) is noticeably reduced (Fig. 4).
In these examples the effects of a non-zero baryon density are negligibly small.
## 4 Nonzero net baryon density
The advantage of the phenomenological QPM is its ability to provide an EOS at nonzero chemical potential, in particular in a region which is expected to be relevant for forthcoming experiments at FAIR. This remarkable ability is due to the thermodynamic self-consistency of the QPM. As a consequence, thermodynamic quantities at arbitrary values of the state variables (here \\(\\mu\\) and \\(T\\)) are connected through Maxwell relations and the stationarity condition of the thermodynamic potential. Thus the model is able to map the lattice data at \\(\\mu=0\\) into the \\(T\\)-\\(\\mu\\) plane. This is achieved by solving the Maxwell relation, which is a first-order partial differential equation for the effective coupling \\(G^{2}(T,\\mu)\\), using the method of characteristics with the parametrized \\(G^{2}(T,\\mu=0)\\) as initial condition. This procedure has been tested successfully against lattice calculations of the pressure correction coefficients available for small chemical potential [11, 15].
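Schematically, the characteristics construction looks as follows. The model-specific coefficient functions of the quasi-linear flow equation (see [11, 16] for their concrete form, which is not reproduced here) enter only as callables:

```python
# Method of characteristics for the quasi-linear Maxwell-relation PDE
#   a_T dG2/dT + a_mu dG2/dmu = b,   with G2(T, mu = 0) from eq. (4).
# a_T, a_mu, b are model-specific callables (T, mu, G2) -> float.
from scipy.integrate import solve_ivp

def characteristic(T0, G2_0, a_T, a_mu, b, s_max=1.0):
    def odes(s, y):
        T, mu, G2 = y
        return [a_T(T, mu, G2), a_mu(T, mu, G2), b(T, mu, G2)]
    return solve_ivp(odes, [0.0, s_max], [T0, 0.0, G2_0], dense_output=True)
```

Launching one curve from each point of a grid on the \\(\\mu=0\\) axis fills the \\(T\\)-\\(\\mu\\) plane; crossings of these curves signal exactly the ambiguity discussed next for the eQP.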
However, the eQP, where damping terms and collective modes are neglected, meets some ambiguity in the region of large chemical potential and not too high temperatures, since the characteristic curves of the partial differential equation exhibit crossings. Consequently, for mapping to large net baryon densities the full QPM has to be applied. As a sign of the self-consistency of the latter, no crossings appear among its characteristics [16]. Thus, the effective coupling \\(G^{2}\\) is unique for every point of the \\(T\\)-\\(\\mu\\) plane. From the effective coupling \\(G^{2}(T,\\mu)\\) the entropy density \\(s\\) and net quark number density \\(n\\) follow directly as thermodynamic integrals. However, to ensure the physical relevance of the solutions, agreement with general thermodynamic requirements, e.g. Nernst's theorem, has to be verified.
Indeed, Fig. 5, which shows both quantities along selected characteristics starting above \\(T_{c}\\), confirms that the entropy density vanishes for \\(T\\to 0\\) and the net number density increases with the chemical potential. Contour plots of the thermodynamic quantities (Fig. 6) also show regular behavior of the thermodynamic quantities for the region above the expected phase transition. Therefore, the full QPM can be used to predict a physical EOS for strongly interacting matter especially at high net baryon densities.
Below the \"phase transition\", the model, in the present version, cannot directly be applied since strong
Figure 5: Scaled entropy density \\(s/T^{3}\\) (left) and scaled net baryon density \\(n/T^{3}\\) (right) of strongly interacting matter as predicted by the full QPM. Both quantities are exhibited along selected characteristic curves.
contributions of collective modes lead to a negative baryon density. It remains to be checked whether improved dispersion relations and a refined treatment of the imaginary parts of the self-energies can cure this obstacle. However, for most cases it is more prudent to use the resonance gas below the phase transition as shown for the eQP, so that these ambiguities do not pose a serious problem. The resulting \"compound EOS\" can then not only be used for predictions of upcoming experiments at CBM@FAIR but also as an input to general relativistic models of compact stellar objects such as neutron/quark/strange stars.
## 5 Conclusion
Our quasiparticle model, in both the previous simplified version and the extended version with collective modes and Landau damping, is able to simultaneously describe recent lattice calculations at zero and small chemical potential. Employing the resulting equation of state, combined with a resonance gas model, in a hydrodynamical code, the experimental data from RHIC are fairly well described. Furthermore, predictions for heavy-ion experiments at LHC can be given. For both experimental situations the simplified, effective quasiparticle model suffices due to the small net baryon densities. However, for larger net baryon densities, the full model including the suitably parametrized HTL dispersion relations, Landau damping and collective modes has to be employed. The current results are encouraging with respect to deriving an equation of state usable in a large region of the phase diagram of strongly interacting matter.
_Acknowledgment_: R.S. would like to thank the organizers for the invitation to this very insightful and inspiring meeting and the financial support granted.
## References
* [1] J. Kapusta, B. Muller, J. Rafelski, _Quark-Gluon Plasma: Theoretical Foundations_ (Elsevier, 2003), ISBN 0444511105
* [2] I. Arsene et al. (BRAHMS), Nucl. Phys. A **757**, 1 (2005), nucl-ex/0410020
* [3] B.B. Back et al. (PHOBOS), Nucl. Phys. A **757**, 28 (2005), nucl-ex/0410022
* [4] J. Adams et al. (STAR), Nucl. Phys. A **757**, 102 (2005), nucl-ex/0501009
* [5] K. Adcox et al. (PHENIX), Nucl. Phys. A **757**, 184 (2005), nucl-ex/0410003
Figure 6: Contour plot of the scaled entropy density \\(s/T^{3}\\) (left) and scaled net baryon density \\(n/T^{3}\\) (right) as a function of temperature \\(T\\) and chemical potential \\(\\mu\\) from Fig. 5.
* [6] A. Peshier, B. Kampfer, O.P. Pavlenko, G. Soff, Phys. Lett. B **337**, 235 (1994)
* [7] A. Peshier, B. Kampfer, O.P. Pavlenko, G. Soff, Phys. Rev. D **54**(3), 2399 (1996)
* [8] P. Levai, U. Heinz, Phys. Rev. C **57**, 1879 (1998), hep-ph/9710463
* [9] A. Peshier, B. Kampfer, G. Soff, Phys. Rev. C **61**, 045203 (2000), hep-ph/9911474
* [10] A. Peshier, B. Kampfer, G. Soff, Phys. Rev. D **66**, 094003 (2002), hep-ph/0206229
* [11] M. Bluhm, B. Kampfer, G. Soff, Phys. Lett. B **620**, 131 (2005), hep-ph/0411106
* [12] B. Kampfer, M. Bluhm, R. Schulze, D. Seipt, U. Heinz, Nucl. Phys. A **774**, 757 (2006), hep-ph/0509146
* [13] M. Bluhm, B. Kampfer, R. Schulze, D. Seipt, Acta Phys. Hung. A **27**, 397 (2006), hep-ph/0608052
* [14] M. Bluhm, B. Kampfer, R. Schulze, D. Seipt, Eur. Phys. J. C **49**, 205 (2007), hep-ph/0608053
* [15] M. Bluhm, B. Kampfer, R. Schulze, D. Seipt, U. Heinz, Phys. Rev. C **76**, 034901 (2007), arXiv:0705.0397
* [16] R. Schulze, M. Bluhm, B. Kampfer, Eur. Phys. J. ST (2008), in print, arXiv:0709.2262
* [17] J. Letessier, J. Rafelski, Phys. Rev. C **67**, 031902 (2003), hep-ph/0301099
* [18] M.A. Thaler, R.A. Schneider, W. Weise, Phys. Rev. C **69**, 035210 (2004), hep-ph/0310251
* [19] J.M. Cornwall, R. Jackiw, E. Tomboulis, Phys. Rev. D **10**, 2428 (1974)
* [20] M. Le Bellac, _Thermal Field Theory_ (Cambridge University Press, 1996), ISBN 0521654777
* [21] F. Karsch (2007), hep-ph/0701210
* [22] F. Karsch, K. Redlich, A. Tawfik, Phys. Lett. B **571**, 67 (2003), hep-ph/0306208
* [23] C. Bernard et al., PoS **LAT2005**, 156 (2006), hep-lat/0509053
* [24] Y. Aoki, Z. Fodor, S.D. Katz, K.K. Szabo, JHEP **2006**, 089 (2006), hep-lat/0510084
* [25] J. Adams et al. (STAR Collaboration), Phys. Rev. Lett. **95**, 152301 (2005), nucl-ex/0501016 | The equation of state (EOS) is of utmost importance for the description of the hydrodynamic phase of strongly interacting matter in relativistic heavy-ion collisions. Lattice QCD can provide useful information on the EOS, mainly for small net baryon densities. The QCD quasiparticle model provides a means to map lattice QCD results into regions relevant for a variety of experiments. We report here on effects of collectives modes and damping on the EOS. Some predictions for forthcoming heavy-ion collisions at LHC/ALICE are presented and perspectives for deriving an EOS for FAIR/CBM are discussed. | Give a concise overview of the text below. |
# Presented in Second ESRI Asia-Pacific User Conference
New Delhi, 2007
**Geographic Information Systems in Evaluation and Visualization of Construction Schedule**
V. K. Bansal and Mahesh Pal
Department of Civil Engineering, National Institute of Technology,
Kurukshetra, Haryana, India - 136119
[email protected]
# 1. Introduction
Bar charts and networks are widely used to depict a construction schedule, in which the different activities are linked with one or more components of the project under consideration. Bar charts/networks provide non-spatial information and lack the spatial aspects of the different construction activities. Thus, to recover the spatial aspects of a project, the construction planner uses 2D drawings and associates their different components with the related activities present in the schedule (Koo and Fischer 2000). Further, there is no dynamic linkage between a schedule and its spatial aspects in commercially available scheduling tools such as _Primavera_ and _Microsoft Project_. Interpretation of the schedule without any link between its activities and the corresponding spatial components is cumbersome, as an actual project may contain thousands of activities, which makes it difficult to check the schedule sequence for its completeness, thus generating a gap in effective communication among different project participants. This limitation of schedules forced researchers to combine scheduling tools with 3D CAD systems to depict the construction sequence visually in 3D, which led to the development of 4D CAD. Koo and Fischer (2000) suggested that a 4D model increases the comprehensibility of the project schedule, thus allowing users to detect mistakes or potential problems prior to construction. Despite a lot of research in 4D CAD technology, its use is not very common in the construction industry as these tools are somewhat difficult to use and cannot be manipulated by everyone (Issa et al. 2003).
Within the last decade, GIS tools with inbuilt 3D display have become more widely accessible to mainstream practitioners. Several research studies suggest the usefulness of GIS in the construction industry to effectively handle various construction project requirements such as data management, integrating information, visualization, cost estimates, site layout and construction planning (Bansal and Pal 2006 a,b,c; Cheng and O'Connor 1996; Varghese and O'Connor 1995; Zhong et al. 2004). GIS has been found helpful in improving construction planning and design efficiency by integrating spatial and non-spatial information in a single environment (Jeljeli et al. 1993; Camp and Brown 1993; Oloufa et al. 1994). Recently, Poku and Arditi (2006) supplemented non-spatial scheduling techniques by developing a GIS-based spatial system to represent construction progress that is synchronized with the construction schedule. They also discussed the important issues of construction scheduling and progress control that concern the practice of construction management. They used a design generated in _AutoCAD_ and plugged it, together with a schedule generated using _Primavera_, into a GIS package. The study by Poku and Arditi (2006) sheds light on how to supplement project management software, and concluded that integrating GIS with project management tools such as _Primavera_ requires backend coding to make the existing system more user friendly. The present study explores the potential of GIS in the evaluation and visualization of construction schedules, as well as in maintaining construction data within the GIS environment so that it can be associated with the activities in the schedule.
# 2. GIS-Based Operations
_AutoCAD_ is used to generate the spatial data (2D data layers) corresponding to each activity in the schedule. In addition, _ArcGIS_ contains the tools needed to handle the editing session (ArcGIS 9 2004). The functionalities/features of _ArcGIS_ used in this study are discussed below.
### Spatial Operation
The _merge_ tool in _ArcGIS_ is used to group the features of a layer into one feature. It combines different features by removing boundaries or nodes between adjacent polygons. Non-adjacent polygons in the same layer can also be merged to create a multipart polygon feature. The _base height_ is the elevation value for the features of a layer in 3D space. A 2D layer does not carry _base height_ and _feature height_ information. To display a 3D perspective view, the features of a 2D layer generated in _AutoCAD_ are assigned a _base height_ and a _feature height_ from the fields of its own _attribute table_. The _extrusion_ tool in _ArcGIS_ is used to change points into vertical lines, lines into vertical walls, and polygons into 3D blocks. _ArcGIS_ is also used to create a new layer by piecing together two or more layers of the same geometry (ArcGIS 9 2004).
### Maintaining and Integration of Construction Resources Data in GIS
_ArcGIS_ is used to maintain the construction resources data in tabular form and to integrate these data with the corresponding activities of the project. This approach replaces manual methods of extracting information from the database and allows easy updating, as most of the information is in digital form (Bansal and Pal 2006 b, c). Three types of relationships defined in _ArcGIS_, namely _one-to-one_, _one-to-many_, and _many-to-one_, are used in this work to join different tables together (Chang 2002). The _Join_ function of _ArcGIS_ is used to establish a _one-to-one_ or _many-to-one_ relationship between the destination table and the source table (ArcGIS 9 2004). Two tables are joined on the basis of a field called _Activity_ID_ that is available in both tables; an illustrative sketch of this keyed join is given below. The name of the field does not have to be the same in both tables, but the data type must be the same (joins are allowed number-to-number or string-to-string). The function _Relate_ establishes a _one-to-many_ relationship between the destination table and the source table.
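The following sketch, in plain Python with hypothetical field names and records rather than the actual _ArcGIS_ tools, only illustrates the keyed-join logic that the _Join_ function performs on _Activity_ID_:

```python
# Hypothetical schedule and attribute records keyed on Activity_ID.
schedule = [
    {"Activity_ID": "A10", "name": "Excavation", "early_start": 0, "duration": 5},
    {"Activity_ID": "A20", "name": "Foundation", "early_start": 5, "duration": 7},
]
attributes = [
    {"Feature_ID": 1, "Activity_ID": "A10", "base_height": 0.0, "feature_height": 1.5},
    {"Feature_ID": 2, "Activity_ID": "A20", "base_height": 0.0, "feature_height": 0.9},
]

# many-to-one join: every feature picks up the fields of its activity.
by_activity = {row["Activity_ID"]: row for row in schedule}
joined = [{**feat, **by_activity[feat["Activity_ID"]]} for feat in attributes]

for row in joined:
    print(row["Feature_ID"], row["name"], row["early_start"])
```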
# 3. Procedure for the Evaluation and Visualization of Schedule
The procedure to construct the spatial aspects in _AutoCAD_ corresponding to each activity in the schedule is discussed in detail by Bansal and Pal (2006c). The procedure to evaluate and visualize the construction schedule is discussed below:
_Step 1: Generation and transfer of construction schedule_ - a schedule is a model of how and when the various tasks of a project are going to be accomplished. The schedule acts as a roadmap for the successful implementation of a project (Moder et al. 1983). Typically, a project is divided into discrete activities; the time needed to complete each activity is decided, and the activities are arranged in sequential or overlapping order. _Microsoft Excel_ is used as the scheduling tool (Hegazy and Ayed 1999) to determine the starting/finishing times, the different floats, the project duration and the critical activities; a minimal sketch of this critical path calculation is given after this step. The schedule developed in _Microsoft Excel_ is then transferred into _ArcGIS_.
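The sketch below illustrates the critical path method (CPM) calculation behind such a spreadsheet: forward and backward passes giving early/late starts, total floats, the project duration and the critical activities. The activities, durations and dependencies are assumed for illustration; this is not the worksheet of Hegazy and Ayed (1999).

```python
# Hypothetical activities with durations and predecessor lists.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]  # a topological order of the activities

es, ef = {}, {}
for a in order:  # forward pass: early start/finish
    es[a] = max((ef[p] for p in preds[a]), default=0)
    ef[a] = es[a] + durations[a]

project_duration = max(ef.values())

succs = {a: [b for b in order if a in preds[b]] for a in order}
ls, lf = {}, {}
for a in reversed(order):  # backward pass: late start/finish
    lf[a] = min((ls[s] for s in succs[a]), default=project_duration)
    ls[a] = lf[a] - durations[a]

for a in order:
    total_float = ls[a] - es[a]  # zero float marks a critical activity
    print(a, es[a], ef[a], total_float, "critical" if total_float == 0 else "")
```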
_Step 2: Generation and transfer of 3D components_ - the spatial information of the different activities is generated in _AutoCAD_, which forms the basis of the proposed system. _ArcGIS_ allows working with the drawings generated in _AutoCAD_. The drawings are transferred to _ArcGIS_ as layers and can be symbolized and queried. _ArcGIS_ uses the _AutoCAD_ drawing layers like any other type of feature layer. However, to edit or modify a CAD drawing layer's features or its associated attribute table, the layers have to be converted to _shapefiles_. The _shapefile_ is a simple, non-topological format for storing the geometric location and attribute information of geographic features.
_Step 3: Merging components/features_ - components transferred into _ArcGIS_ from _AutoCAD_ may be merged together according to the activities defined earlier in the schedule generated in _Microsoft Excel_. Thus, the components of the drawing that belong to the same activity but are located at different positions in space are joined together to construct the spatial data for each activity.
_Step 4: Connecting the schedule with the corresponding 3D components_ - this step involves adding a field called _Activity_ID_ to the schedule table and to the _attribute_ table of each component. The field _Activity_ID_ is common to the two tables (i.e. the schedule table and the _attribute_ table of the different components) and is used to establish the connection between each component and the corresponding activity in the schedule. All the entries in the field _Activity_ID_ are entered manually and should be unique in both the schedule and _attribute_ tables of an activity. Thus, the attribute required to associate the components with the corresponding activities in the schedule is the entry in the field _Activity_ID_.
_Step 5: Schedule evaluation and corrections_ - this step involves evaluating the schedule to check the order of the construction sequence. The schedule for different dates can be evaluated by visualizing it in 3D space. If it complies with the desired construction sequence and does not require any change in the number of activities or in the logic, the schedule is finally accepted and no alteration is allowed afterward. If the model does not comply with the required construction sequence and needs some changes in the logic, the schedule is modified as required in _Microsoft Excel_. After the implementation of the desired changes, the construction schedule is again associated with the related components for its evaluation, and the construction sequence is checked again to verify it. If it does not require any further change, it is finally accepted. Sometimes the different steps need to be repeated if activities in the schedule need to be added or deleted, or if changes are required in the corresponding components generated in _AutoCAD_.
# 4. Advantages of Scheduling in GIS Environment
The planner prepares the schedule in such a way that every component of the project has a related activity. By viewing the schedule alone, it is quite difficult to determine whether the schedule is complete, and confirming that all components of the project have a related activity is a time-consuming process because of the large number of activities in the network. In GIS, a visual check of the schedule is possible, which may help in preventing the omission of activities from the schedule. With a non-spatial schedule it is difficult to predict whether the activities are within or out of the construction sequence, because activities with mutual dependencies (i.e., successor and predecessor relationships) may be located in different parts of the schedule. Such non-spatial schedules force users to visualize and interpret the activity sequence in their minds. Therefore, the multiple participants of a project must individually conceptualize the sequence by associating the activities with the components shown in the drawings. The schedule in GIS displays at 'what time' and 'where' in space the components are to be built.
GIS facilitates the understanding of the 3D model and of the topological relationships between different components in many ways (such as zooming, panning, flying forward or backward, and navigation). Any component can be made transparent, which makes it easier to visualize the model. Users also have the option of rotating the 3D components around the \(x\), \(y\), or \(z\)-axes to observe the developed 3D models. Further, an element can be viewed from any direction and angle. Integrating information such as project schedules and drawings allows a visual understanding of the construction process, and the construction data can be linked with the corresponding activities. The failure or success of a building contract largely depends on the quality and timing of the information available to the contractors from the database, which requires an automated system to retrieve the required information without delay. The present methodology links the construction resources data of the various activities with the corresponding activities (Bansal and Pal 2006b). The proposed methodology also supports cost estimation, which involves quantity takeoffs and the cost of the various resources required in the construction project. Some of the existing approaches for quantity takeoffs are not accurate enough to eliminate the possibility of errors such as missing or duplicating different items of work. A GIS-based cost estimation methodology eliminates errors such as missing or duplicated items by visualizing each component corresponding to the items in 2D or 3D space. Thus, it may help in increasing the productivity of the quantity estimator by reducing the manual work in determining the quantity takeoffs. An accurate bill of quantities can also be generated on the basis of the dimensions of the various data layers within GIS (Cheng and Yang 2001). A detailed methodology of GIS-based cost estimation is proposed by Bansal and Pal (2006c). From the schedule, queries such as the activities starting on a particular date or the activities starting within a particular interval of time can be made in the proposed model; a small sketch of such queries is given below. The project activities can be easily stored and listed in a variety of useful ways, such as sorting the schedule in ascending or descending order on any field (floats, early or late start times) in the table. Selected records can be displayed in the same table by promoting them to the top. The system still requires automation of some tasks, such as transferring information from _Microsoft Excel_ and _AutoCAD_ to _ArcGIS_; in the current version of the system these operations are performed manually.
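As an illustration of such queries, the sketch below filters and sorts hypothetical schedule records in plain Python, standing in for the corresponding _ArcGIS_ attribute queries:

```python
from datetime import date

# Hypothetical schedule records.
schedule = [
    {"Activity_ID": "A10", "start": date(2007, 3, 1), "total_float": 2},
    {"Activity_ID": "A20", "start": date(2007, 3, 5), "total_float": 0},
    {"Activity_ID": "A30", "start": date(2007, 3, 9), "total_float": 4},
]

# activities starting on a particular date
starting_on = [a for a in schedule if a["start"] == date(2007, 3, 5)]
# activities starting within a particular interval of time
in_interval = [a for a in schedule
               if date(2007, 3, 1) <= a["start"] <= date(2007, 3, 8)]
# schedule sorted in ascending order on the float field
by_float = sorted(schedule, key=lambda a: a["total_float"])

print([a["Activity_ID"] for a in starting_on],
      [a["Activity_ID"] for a in in_interval],
      [a["Activity_ID"] for a in by_float])
```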
# 5. Conclusions
This paper proposes a methodology for using GIS to represent and integrate spatial and non-spatial information in a single environment. The methodology integrates the construction schedule with the corresponding spatial details so as to make the project sequence easier to understand. The link allows an easier understanding of the project and helps to detect possible problems in it. The proposed methodology supports additional analyses such as rate analysis and cost estimates, and allows integrating safety recommendations with critical activities, thus making schedules more realistic (Bansal and Pal 2006a,b). Non-spatial schedules can only convey what is built 'when', whereas a schedule in GIS conveys what is being built 'when and where'. Thus, the proposed work concludes that GIS can also be used effectively for construction schedule evaluation.
Figure 1: Linking the 3D model with construction schedule
# References
ArcGIS 9 (2004), Introduction to ArcGIS 9 Part I & II, Environmental Systems Research Institute, New York Street, Redlands, CA, USA.
Bansal, V. K., and Pal, M. (2006a), Geographic Information Systems for Construction Industry: a Review, NICMAR Journal of Construction Management, 21(2), 1-12.
Bansal, V. K., and Pal, M. (2006b), GIS Based Projects Information System for Construction Management, Asian Journal of Civil Engineering, 7 (2), 115-124.
Bansal, V. K., and Pal, M. (2006c), Potential of Geographic Information Systems in building cost estimation and visualization, Automation in Construction, Article in press.
Camp, C. V., and Brown, M. C. (1993), GIS procedure for developing three-dimensional subsurface profile, Journal of Computing in Civil Engineering, 7 (3), 296-309.
Chang, Kang-Tsung (2002), Introduction to Geographic Information Systems, Tata McGraw-Hill, New Delhi.
Cheng, M.Y., and O'Connor, J.T. (1996), ArcSite: Enhanced GIS for construction site layout, Journal Construction Engineering and Management, 122 (4), 329 -336.
Cheng, M.Y., and Yang, C.Y. (2001), GIS-Based cost estimate integrated with material layout planning, Journal Construction Engineering and Management, 127 (4), 291-299.
Hegazy, T., and Ayed, A. (1999), Simplified spreadsheet solutions: Models for CPM and TCT analysis, Cost Engineering Journal, 41(7), 26-33.
Issa, R. R. A., Flood, I. and O'Brien W. J. (2003), 4D CAD and visualization in construction: developments and applications, A.A. Balkema Publishers.
Jeljeli, M. N., Russell, J. S., Meyer, H. W. G. and Vonderohe, A. P. (1993), Potential applications of geographic information systems to construction industry, Journal Construction Engineering and Management, 119 (1), 72-86.
Moder, J.J., Phillips, C.R., and Davis, E.W. (1983), Project Management with CPM, PERT and Precedence Diagramming, Van Nostrand Reinhold Company, New York.
Oloufa, A. R., Eltahan, A. A., and Papacostas, C. S. (1994), Integrated GIS for construction site investigation, Journal Construction Engineering and Management, 120 (1), 211-222.
Poku, S. E. and Arditi, D. (2006), Construction scheduling and process control using geographic information systems, Journal Computing in Civil Engineering, 20 (5), 351-360.
Varghese, K., and O'Connor, J.T. (1995), Routing large vehicles on industrial construction site, Journal Construction Engineering and Management, 121 (1), 1-12.
Zhong, D., Li, J., Zhu, H., and Song, L. (2004), Geographic information system based visual simulation methodology and its application in concrete dam construction processes, Journal Construction Engineering and Management, 130 (5), 742-750.
_Keywords:_ Construction scheduling and visualization; 4D-GIS; GIS in construction management
# Constraining the evolution of dark energy with type Ia supernovae and gamma-ray bursts
Shi Qi
Department of Physics, Nanjing University, Nanjing 210093, China
[email protected]
Fa-Yin Wang
Department of Astronomy, Nanjing University, Nanjing 210093, China
[email protected]
Tan Lu
Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008, China
[email protected]
Joint Center for Particle, Nuclear Physics and Cosmology, Nanjing University - Purple Mountain Observatory, Nanjing 210093, China
Key words: cosmological parameters - supernovae: general - gamma rays: bursts
## 1 Introduction
Unexpected accelerating expansion of the universe was first discovered by observing type Ia supernovae (SNe Ia) (Riess et al. 1998; Perlmutter et al. 1999). This acceleration is attributed to dark energy, whose presence was corroborated later by other independent sources including the WMAP and other observations of the CMB (Spergel et al. 2003), X-ray clusters (Allen et al. 2002), etc. With more observational data available (e.g. Hawkins et al. 2003; Abazajian et al. 2003; Spergel et al. 2007; Riess et al. 2007; Wood-Vasey et al. 2007; Davis et al. 2007; Schaefer 2007; Percival et al. 2007; Komatsu et al. 2008; Dunkley et al. 2008), we are getting more stringent constraints on the nature of dark energy; nevertheless, the underlying physics of dark energy remains mysterious. In addition to the cosmological constant, many other dark energy models have been suggested, including models of scalar fields (see Copeland et al. (2006) for a recent review) and modification of general relativity (see for example Deffayet 2001; Binetruy et al. 2000; Maartens 2007; Capozziello et al. 2003; Dvali et al. 2000; Carroll et al. 2004; Nojiri & Odintsov 2003, 2006).
Measuring the expansion history directly may be the best way to constrain the properties of dark energy. To measure the expansion history, we need standard candles at different redshifts. SNe Ia, which are now viewed as nearly ideal standard candles, have played an important role in constraining cosmological parameters. We now have 192 samples of SNe Ia (Riess et al. 2007; Wood-Vasey et al. 2007; Davis et al. 2007) that can be used to determine the expansion history, and the proposed SNAP satellite (see for example Aldering et al. 2004) will add about 2000 samples per year. Increasing SN Ia samples will provide a more and more precise description of the cosmic expansion. However, the redshift of the present 192 SNe Ia ranges only up to about 1.7 and the mean redshift is about 0.5. They cannot provide any information on the cosmic expansion beyond redshift 1.7. Here gamma-ray bursts (GRBs) come in and fill the void. With their higher luminosities, GRBs are visible across much greater distances than supernovae. The presently available 69 compiled GRBs (Schaefer 2007) extend the redshift to \(z>6\) and the mean redshift is about 2.1. After being calibrated with luminosity relations, GRBs may be used as standard candles to provide information on cosmic expansion at high redshift and, at the same time, to tighten the constraints on cosmic expansion at low redshift. See, for example, Dai et al. (2004), Ghirlanda et al. (2004), Di Girolamo et al. (2005), Firmani et al. (2005), Friedman & Bloom (2005), Lamb et al. (2005), Liang & Zhang (2005), Xu et al. (2005), Wang & Dai (2006), Li et al. (2008), Su et al. (2006), Schaefer (2007), Wright (2007), and Wang et al. (2007) for works on GRB cosmology.
Among the parameters that describe the properties of dark energy, the equation of state (EOS) is the most important. Whether and how it evolves with time is crucial in distinguishing different cosmological models. Since the behavior of dark energy is not understood, simple parametric forms such as \(w(z)=w_{0}+w^{\prime}z\) (Cooray & Huterer 1999) and \(w(z)=w_{0}+w_{a}z/(1+z)\) (Chevallier & Polarski 2001; Linder 2003) have been proposed for studying the possible evolution of dark energy. However, a simple parameterization itself greatly restricts the allowed wandering of \(w(z)\), and is equivalent to a strong prior on the nature of dark energy (Riess et al. 2007). To avoid any strong prior before comparing data, one can utilize an alternative approach in which uncorrelated estimates are made of discrete \(w(z)\) at different redshifts. This approach was proposed by Huterer & Starkman (2003) and Huterer & Cooray (2005) and has been adopted in previous analyses using SNe Ia (Riess et al. 2007; Sullivan et al. 2007a).
In this work, we apply this approach to GRB luminosity data (Schaefer 2007), in addition to SN Ia data (Riess et al. 2007; Wood-Vasey et al. 2007; Davis et al. 2007), and compare our results with those in the previous work that does not include GRB luminosity data (Sullivan et al. 2007a). We first briefly review the techniques for uncorrelated estimates of dark energy evolution in section 2. The observational data and how they are included in the data analysis are described in section 3. We present our results in section 4, followed by a summary in section 5.
## 2 Methodology
Standard candles impose constraints on cosmological parameters essentially through a comparison of the luminosity distance from observation with that from theoretical models. Observationally, the luminosity distance is given by
\\[d_{L}=\\left(\\frac{L}{4\\pi F}\\right)^{1/2}, \\tag{1}\\]
where \\(L\\) and \\(F\\) are the luminosity of the standard candles and the observed flux, respectively. Theoretically, the luminosity distance \\(d_{L}(z)\\) depends on the geometry of the universe, i.e. the sign of \\(\\Omega_{k}\\), and is given by
\[d_{L}(z)=(1+z)\frac{c}{H_{0}}\times\left\{\begin{array}{ll}\frac{1}{\sqrt{\Omega_{k}}}\sinh\left(\sqrt{\Omega_{k}}\int_{0}^{z}\frac{\mathrm{d}z}{E(z)}\right)&\mathrm{if}\ \Omega_{k}>0\\ \int_{0}^{z}\frac{\mathrm{d}z}{E(z)}&\mathrm{if}\ \Omega_{k}=0\\ \frac{1}{\sqrt{|\Omega_{k}|}}\sin\left(\sqrt{|\Omega_{k}|}\int_{0}^{z}\frac{\mathrm{d}z}{E(z)}\right)&\mathrm{if}\ \Omega_{k}<0\end{array}\right. \tag{2}\]
where
\\[E(z)=\\left[\\Omega_{m}(1+z)^{3}+\\Omega_{x}f(z)+\\Omega_{k}(1+z)^{2}\\right]^{1/2},\\] \\[\\Omega_{m}+\\Omega_{x}+\\Omega_{k}=1 \\tag{3}\\]
and
\\[f(z)=\\exp\\left[3\\int_{0}^{z}\\frac{1+w(\\overline{z})}{1+\\overline{z}}\\mathrm{ d}z\\right]. \\tag{4}\\]
Dark energy parameterization schemes enter through \(f(z)\). For the case where the EOS is piecewise constant in redshift, \(f(z)\) can be rewritten as (Sullivan et al. 2007a)
\[f(z_{n-1}<z\leq z_{n})=(1+z)^{3(1+w_{n})}\prod_{i=0}^{n-1}(1+z_{i})^{3(w_{i}-w_{i+1})}, \tag{5}\]
where \\(w_{i}\\) is the EOS parameter in the \\(i^{\\mathrm{th}}\\) redshift bin defined by an upper boundary at \\(z_{i}\\), and the zeroth bin is defined as \\(z_{0}=0\\). In order to compare with previous analysis (Sullivan et al. 2007a), we define the first three redshift bins to be the same as those used by Sullivan et al. (2007a) by setting \\(z_{1}=0.2\\), \\(z_{2}=0.5\\), and \\(z_{3}=1.8\\). The fourth bin is defined by \\(z_{4}=7\\) to include GRBs. We carry out our analyses under two different assumptions about the high redshift (redshift greater than \\(z_{4}=7\\) in our case) behavior of dark energy, i.e. the so-called (see Riess et al. 2007) \"weak\" prior, which makes no assumptions about \\(w(z)\\) at \\(z>7\\) and the \"strong\" prior, which assumes \\(w(z)=-1\\) at \\(z>7\\).
In this paper we adopt the \(\chi^{2}\) statistic to estimate the parameters. For a physical quantity \(\xi\) with experimentally measured value \(\xi_{o}\), standard deviation \(\sigma_{\xi}\), and theoretically predicted value \(\xi_{i}(\theta)\), where \(\theta\) is the collection of parameters needed to calculate the theoretical value, the \(\chi^{2}\) value is given by
\\[\\chi^{2}_{\\xi}(\\theta)=\\frac{(\\xi_{i}(\\theta)-\\xi_{o})^{2}}{\\sigma_{\\xi}^{2}} \\tag{6}\\]
and the total \\(\\chi^{2}\\) is the sum of all \\(\\chi^{2}_{\\xi}\\)s, i.e.
\\[\\chi^{2}(\\theta)=\\sum_{\\xi}\\chi^{2}_{\\xi}(\\theta). \\tag{7}\\]
The likelihood function is then proportional to \(\exp\left(-\chi^{2}(\theta)/2\right)\), which produces the posterior probability when multiplied by the prior probability of \(\theta\). In the case of our analysis, the calculation of the \(\chi^{2}\) values for the different observational data is described in section 3. According to the posterior probability derived in this way, Markov chains are generated through a Monte Carlo algorithm to study the statistical properties of the parameters. In this paper, we focus on the EOS parameters by marginalizing over the others.
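A minimal sketch of this sampling step, with a placeholder \(\chi^{2}\) standing in for the full sum of Eq. (7) and a flat prior, might look as follows (illustrative only):

```python
import math
import random

def chi2_total(theta):
    # Placeholder chi^2: quadratic and centred on theta = (-1, -1, -1, -1);
    # in the real analysis this would be the sum of all data terms, Eq. (7).
    return sum((t + 1.0) ** 2 / 0.1 for t in theta)

def metropolis(theta0, n_steps=20000, step=0.1):
    """Generate a Markov chain targeting exp(-chi2/2)."""
    chain, theta = [], list(theta0)
    chi2 = chi2_total(theta)
    for _ in range(n_steps):
        prop = [t + random.gauss(0.0, step) for t in theta]
        chi2_prop = chi2_total(prop)
        # Metropolis acceptance on the likelihood ratio
        if random.random() < math.exp(min(0.0, -(chi2_prop - chi2) / 2.0)):
            theta, chi2 = prop, chi2_prop
        chain.append(list(theta))
    return chain

chain = metropolis([-0.5, -0.5, -0.5, -0.5])
print(sum(s[0] for s in chain) / len(chain))  # posterior mean of w_1
```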
As mentioned above, in the process of constraining cosmological parameters, standard candles play this role by providing the luminosity distances at certain redshifts. However, the luminosity distance depends on the integration of the behavior of the dark energy over redshift, so the estimates of the dark energy EOS parameters \\(w_{i}\\) at high redshift depend on those at low redshift. In other words, the EOS parameters \\(w_{i}\\) are correlated in the sense that the covariance matrix,
\[\mathbf{C}=\langle\mathbf{w}\mathbf{w}^{\mathrm{T}}\rangle-\langle\mathbf{w}\rangle\langle\mathbf{w}^{\mathrm{T}}\rangle, \tag{8}\]
is not diagonal. In the above equation, \(\mathbf{w}\) is a vector with components \(w_{i}\) and the average is calculated by letting \(\mathbf{w}\) run over the Markov chain. We can obtain a set of decorrelated parameters \(\widetilde{w_{i}}\) through diagonalization of the covariance matrix by choosing an appropriate transformation
\[\widetilde{\mathbf{w}}=\mathbf{T}\mathbf{w}. \tag{9}\]
There can be different choices for \(\mathbf{T}\). In this paper we use the transformation advocated by Huterer & Cooray (2005) (see below). First we define the Fisher matrix
\[\mathbf{F}\equiv\mathbf{C}^{-1}=\mathbf{O}^{\mathrm{T}}\mathbf{\Lambda}\mathbf{O}, \tag{10}\]
and then the transformation matrix \(\mathbf{T}\) is given by
\[\mathbf{T}=\mathbf{O}^{\mathrm{T}}\mathbf{\Lambda}^{\frac{1}{2}}\mathbf{O}, \tag{11}\]
except that the rows of the matrix \(\mathbf{T}\) are normalized such that
\[\sum_{j}T_{ij}=1. \tag{12}\]
The advantage of this transformation is that the weights (rows of \(\mathbf{T}\)) are positive almost everywhere and localized in redshift fairly well, so the uncorrelated EOS parameters \(\widetilde{w_{i}}\) are easy to interpret intuitively (Huterer & Cooray 2005).
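The sketch below (NumPy, with a synthetic Gaussian chain standing in for the real Markov chain) runs through the whole decorrelation step, Eqs. (8)-(12):

```python
import numpy as np

# Synthetic stand-in for the Markov chain of w = (w_1, ..., w_4).
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(
    mean=[-1.0, -0.9, -1.1, -1.0],
    cov=[[0.02, 0.01, 0.00, 0.00],
         [0.01, 0.05, 0.02, 0.00],
         [0.00, 0.02, 0.20, 0.05],
         [0.00, 0.00, 0.05, 1.00]],
    size=5000)

C = np.cov(samples, rowvar=False)     # covariance matrix, Eq. (8)
F = np.linalg.inv(C)                  # Fisher matrix, Eq. (10)
lam, O = np.linalg.eigh(F)            # eigen-decomposition of F

T = O @ np.diag(np.sqrt(lam)) @ O.T   # square root of F, Eq. (11)
T = T / T.sum(axis=1, keepdims=True)  # row normalization, Eq. (12)

w_tilde = samples @ T.T               # decorrelated parameters, Eq. (9)
print(np.round(np.corrcoef(w_tilde, rowvar=False), 3))  # ~ identity matrix
```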
## 3 Observational data
To constrain the dark energy EOS, we have made use of observational data described below.
### Type Ia supernovae
Recently compiled SN Ia data (Riess et al. 2007; Wood-Vasey et al. 2007; Davis et al. 2007) include 45 nearby supernovae (Hamuy et al. 1996; Riess et al. 1999; Jha et al. 2006), 60 ESSENCE supernovae (Wood-Vasey et al. 2007), 57 SNLS supernovae (Astier et al. 2006), and 30 HST supernovae (Riess et al. 2007). Figure 1 shows the distribution of these SN Ia samples versus redshift.
The \\(\\chi^{2}\\) value for SNe Ia is
\\[\\chi^{2}_{\\rm SN}=\\sum_{i}\\frac{(\\mu_{p,i}-\\mu_{o,i})^{2}}{\\sigma_{i}^{2}+ \\sigma_{int}^{2}}, \\tag{13}\\]
where \\(\\mu_{o,i}\\) and \\(\\mu_{p,i}\\) are the observed and theoretically predicted distance modulus of SN Ia, which is defined by \\(\\mu=5\\log d_{L}+25\\) with the luminosity distance \\(d_{L}\\) in unit of megaparsec and \\(\\sigma_{int}\\) is the intrinsic dispersion.
### Gamma-ray bursts
Besides SNe Ia, GRB luminosity data are the other main observational constraint we use. As mentioned before, GRBs are complementary to SNe Ia at high redshifts. We include the GRBs presented by Schaefer (2007) (see Figure 2 for the distribution of these GRBs versus redshift) in our analysis by utilizing the five luminosity relations, ie the connections between measurable parameters of the light curves and/or spectra and the GRB luminosity: \(\tau_{\rm lag}\)-\(L\), \(V\)-\(L\), \(E_{\rm peak}\)-\(L\), \(E_{\rm peak}\)-\(E_{\gamma}\) and \(\tau_{\rm RT}\)-\(L\)
\\[\\log\\frac{L}{1\\ {\\rm erg\\ s^{-1}}} = a_{1}+b_{1}\\log\\left[\\frac{\\tau_{\\rm lag}(1+z)^{-1}}{0.1\\ {\\rm s}}\\right], \\tag{14}\\] \\[\\log\\frac{L}{1\\ {\\rm erg\\ s^{-1}}} = a_{2}+b_{2}\\log\\left[\\frac{V(1+z)}{0.02}\\right],\\] (15) \\[\\log\\frac{L}{1\\ {\\rm erg\\ s^{-1}}} = a_{3}+b_{3}\\log\\left[\\frac{E_{\\rm peak}(1+z)}{300\\ {\\rm keV}}\\right],\\] (16) \\[\\log\\frac{E_{\\gamma}}{1\\ {\\rm erg\\ s^{-1}}} = a_{4}+b_{4}\\log\\left[\\frac{E_{\\rm peak}(1+z)}{300\\ {\\rm keV}}\\right], \\tag{17}\\]
\\[\\log\\frac{L}{1\\ {\\rm erg\\ s^{-1}}} = a_{5}+b_{5}\\log\\left[\\frac{\\tau_{\\rm RT}(1+z)^{-1}}{0.1\\ {\\rm s}}\\right]. \\tag{18}\\]
Throughout this paper, by GRB luminosity data we refer to the GRBs' observational data related to such luminosity relations. It is worth mentioning that these relations may be correlated. As discussed in Schaefer (2007), there is one significant correlation between the \\(V\\)-\\(L\\) and \\(\\tau_{RT}\\)-\\(L\\) relations with the correlation coefficient equaling 0.53. However, even for this correlation, ignoring it only causes a 4% underestimate in the standard error of the average distance modulus (Schaefer 2007), so in our analysis we safely ignore the correlations and simply add the contributions from each relation (see Eq. (21) below).
There are significant differences between SNe Ia and GRBs regarding the calibration. For SNe Ia, the calibration is done with nearby events and is therefore independent of the cosmological parameters. The luminosity relations obtained in the calibration are applied to high-redshift events to derive the luminosity of the SNe Ia, which is then used to constrain the cosmological parameters. In this procedure, the calibration and the constraining of the cosmological parameters are done separately. In contrast, to constrain cosmological parameters using GRBs we need to know the luminosity relations of the GRBs (Eq. (14)-(18)), i.e. the values of \(a_{1}\)-\(a_{5}\) and \(b_{1}\)-\(b_{5}\); consequently, we need the luminosity \(L\) and the total collimation-corrected energy \(E_{\gamma}\) of the GRBs, which are converted respectively from the bolometric peak flux \(P_{\rm bolo}\) and the bolometric fluence \(S_{\rm bolo}\) of the GRBs through the relations
\\[L = 4\\pi d_{L}^{2}P_{\\rm bolo}, \\tag{19}\\] \\[E_{\\gamma} = E_{\\gamma,{\\rm iso}}F_{\\rm beam}=4\\pi d_{L}^{2}S_{\\rm bolo}(1+z)^ {-1}F_{\\rm beam}. \\tag{20}\\]
The conversion depends on cosmological parameters because the luminosity distance \\(d_{L}\\) depends on cosmological models. As a result, the calibration and the constraining of cosmological parameters are mixed for GRBs; i.e., we need to simultaneously fit calibration parameters of GRBs and cosmological parameters.
Based on the above discussions, the \\(\\chi^{2}\\) value for GRBs is calculated by
\\[\\chi^{2}_{\\rm GRB} = \\sum_{i}\\frac{\\left\\{\\log\\frac{L}{1\\ {\\rm erg\\ s^{-1}}}-a_{1}-b_{1}\\log\\left[\\frac{\\tau_{\\rm RT}(1+z)^{-1}}{0.1\\ {\\rm s}}\\right]\\right\\}^{2}}{\\sigma_{1}^{2}} \\tag{21}\\] \\[+\\sum_{i}\\frac{\\left\\{\\log\\frac{L}{1\\ {\\rm erg\\ s^{-1}}}-a_{2}-b_{2}\\log\\left[\\frac{V(1+z)}{0.02} \\right]\\right\\}^{2}}{\\sigma_{2}^{2}}\\]
Figure 1: Distribution of SN Ia samples versus redshift
Figure 2: Distribution of GRB samples versus redshift\\[+\\sum_{i}\\frac{\\left\\{\\log\\frac{L_{i}}{1\\,\\mathrm{erg\\ s}^{-1}}-a_{3}-b_{ 3}\\log\\left[\\frac{E_{\\mathrm{rad},(1+z_{i})}}{300\\ \\mathrm{keV}}\\right]\\right\\}^{2}}{\\sigma_{3}^{2}}\\] \\[+\\sum_{i}\\frac{\\left\\{\\log\\frac{L_{i}}{1\\,\\mathrm{erg\\ s}^{-1}}-a_{ 3}-b_{3}\\log\\left[\\frac{E_{\\mathrm{rad},(1+z_{i})}}{300\\ \\mathrm{keV}}\\right]\\right\\}^{2}}{\\sigma_{3}^{2}}\\] \\[+\\sum_{i}\\frac{\\left\\{\\log\\frac{L_{i}}{1\\,\\mathrm{erg\\ s}^{-1}}-a_ {5}-b_{5}\\log\\left[\\frac{\\pi_{\\mathrm{HE},(1+z_{i})^{-1}}}{0.1\\ \\mathrm{s}}\\right]\\right\\}^{2}}{\\sigma_{5}^{2}}, \\tag{21}\\]
where \\(L_{i}\\) and \\(E_{\\gamma,i}\\) are derived using Eq. (19) and Eq. (20). The summations run over the GRBs with the corresponding luminosity indicator observed. We use the systematic errors estimated by (Schaefer, 2007) that account for the scatter of the log-log plots of the luminosity versus the luminosity indicators as \\(\\sigma_{1}\\)-\\(\\sigma_{5}\\) in our analysis. Apparently, \\(\\chi_{\\mathrm{GRB}}^{2}\\) is a function of calibration parameters \\(a_{1}\\)-\\(a_{5}\\), \\(b_{1}\\)-\\(b_{5}\\) and cosmological parameters that enter through the luminosity distance \\(d_{L}\\).
### Other data
In addition to SNe Ia and GRBs, we have also used the constraints below following previous analyses (Riess et al., 2007; Sullivan et al., 2007a)
* _Constraints on dimensionless mass densities_: The SDSS large-scale structure measurements give the constraint on local mass density in terms of \\(\\Omega_{m}h=0.213\\pm 0.023\\)(Tegmark et al., 2004). The WMAP three-year data combined with the HST key project constraint on the Hubble constant gives \\(\\Omega_{k}=-0.014\\pm 0.017\\)(Spergel et al., 2007).
* _The SDSS luminous red galaxy, baryon acoustic oscillation (BAO) distance parameter to \(z_{\mathrm{BAO}}=0.35\)_: \(A\equiv\frac{\sqrt{\Omega_{m}H_{0}^{2}}}{c\,z_{\mathrm{BAO}}}\left[r^{2}(z_{\mathrm{BAO}})\frac{c\,z_{\mathrm{BAO}}}{H_{0}E(z_{\mathrm{BAO}})}\right]^{1/3}\), where \(r(z)=d_{L}(z)/(1+z)\). \(A=0.469\left(\frac{n}{0.98}\right)^{-0.35}\pm 0.017\) from Eisenstein et al. (2005) and the three-year WMAP results give \(n=0.95\) (Spergel et al. 2007).
* _The distance to last scattering, z=1089_: If nonzero cosmic curvature is allowed as we do in our analysis, the three-year WMAP data (Spergel et al., 2007) gives the shift parameter \\(R_{\\mathrm{CMB}}=\\frac{\\sqrt{\\Omega_{m}H_{0}^{2}}}{c}r(z_{\\mathrm{CMB}})=1.71 \\pm 0.03\\)(Wang & Mukherjee, 2007).
* _The distance ratio between \(z_{\mathrm{BAO}}=0.35\) and \(z_{\mathrm{CMB}}=1089\)_: \[R_{0.35}=\frac{\left[r^{2}(z_{\mathrm{BAO}})\frac{c\,z_{\mathrm{BAO}}}{H_{0}E(z_{\mathrm{BAO}})}\right]^{1/3}}{r(z_{\mathrm{CMB}})}. \tag{22}\] The SDSS BAO analysis (Eisenstein et al. 2005) gives \(R_{0.35}=0.0979\pm 0.0036\).
The corresponding \(\chi^{2}\) values for these constraints are directly calculated using Eq. (6).
We have also studied the dark energy EOS evolution with the above BAO constraints replaced by the latest BAO measurements presented in Percival et al. (2007), for which the \\(\\chi^{2}\\) value is (Percival et al., 2007)
\\[\\chi_{\\mathrm{BAO}}^{2}=\\mathbf{X}_{\\mathrm{BAO}}^{\\mathrm{T}}\\mathbf{C}_{\\mathrm{BAO }}^{-1}\\mathbf{X}_{\\mathrm{BAO}} \\tag{23}\\]
where
\[\mathbf{X}_{\mathrm{BAO}}=\left(\begin{array}{c}\frac{r_{s}}{D_{V}(0.2)}-0.1980\\ \frac{r_{s}}{D_{V}(0.35)}-0.1094\end{array}\right) \tag{24}\]
with \\(r_{s}\\) the comoving sound horizon at recombination and
\\[\\mathbf{C}_{\\mathrm{BAO}}^{-1}=\\left(\\begin{array}{cc}35059&-24031\\\\ -24031&108300\\end{array}\\right). \\tag{25}\\]
This constraint itself favors a dark energy EOS of \\(w<-1\\)(Percival et al., 2007).
## 4 Results
Figures 3 and 4 show our results for the weak prior and the strong prior, respectively. For these two figures, besides SNe Ia and GRBs, we have included the same subsets of the data from section 3.3 as are used in Sullivan et al. (2007a). For the results presented in Figure 5, the BAO constraints are updated with the latest measurements (Percival et al. 2007); see Eq. (23), Eq. (24), and Eq. (25). A comparison between Figures 3 and 4 shows that the results are insensitive to the priors, i.e. insensitive to whether \(w(z>7)=-1\) is assumed or not for dark energy.
Since Figures 3 and 4 only differ from the results derived by Sullivan et al. (2007a) in that we include GRB luminosity data, comparisons of Figures 3 and 4 with the figures in Sullivan et al. (2007a) demonstrate the improvement made by including GRBs. We find that there is little improvement in \(\widetilde{w}_{1}\) and \(\widetilde{w}_{2}\). This is because at low redshift, where we have both SNe Ia and GRBs, there are far fewer GRBs than SNe Ia (see Table 1; in the first two bins the number of GRBs is negligible compared with that of SNe Ia); at the same time, the contributions to \(\widetilde{w}_{1}\) and \(\widetilde{w}_{2}\) from high redshift, where we have a considerable number of GRB samples (see Table 1), are too small (see the weight histograms in Figures 3 and 4) to improve the constraints on \(\widetilde{w}_{1}\) and \(\widetilde{w}_{2}\) significantly. The most significant improvement lies in \(\widetilde{w}_{3}\), whose contribution mostly comes from the third bin, where we have a considerable number of GRBs (see Table 1). The \(1\sigma\) confidence interval of \(\widetilde{w}_{3}\) with GRBs included is less than one third of that presented in Sullivan et al. (2007a) without GRB luminosity data.
For Figures 3 and 4, \(\widetilde{w}_{1}\) and \(\widetilde{w}_{2}\) are consistent with the cosmological constant within \(1\sigma\), and \(\widetilde{w}_{3}\) is consistent within \(2\sigma\). In Figure 5, for which the latest BAO measurements are used instead, the cosmological constant lies outside the \(1\sigma\) confidence intervals of \(\widetilde{w}_{1}\) and \(\widetilde{w}_{2}\), and outside the \(2\sigma\) confidence interval of \(\widetilde{w}_{3}\), though still inside the \(2\sigma\) confidence intervals of \(\widetilde{w}_{1}\) and \(\widetilde{w}_{2}\). These results show some evidence of an evolving dark energy EOS. This is not surprising, given that the latest BAO measurements themselves favor a dark energy EOS of \(w<-1\) (Percival et al. 2007). The BAO distance information lies in the second redshift bin, so including it leads to a smaller \(\widetilde{w}_{2}\). The main data we used depend on the integration of the dark energy evolution, thus the decrease in \(\widetilde{w}_{2}\) causes increases in \(\widetilde{w}_{1}\) and \(\widetilde{w}_{3}\).
The constraints on \\(\\widetilde{w}_{4}\\) are very weak. The uncertainty is so great that we plot its probability separately. This is due to three reasons. First, there are not enough samples of standard candles
\\begin{table}
\\begin{tabular}{|c|c|c|c|c|} \\hline bin & 1 & 2 & 3 & 4 \\\\ \\hline \\hline redshift range & 0-0.2 & 0.2-0.5 & 0.5-1.8 & 1.8-7 \\\\ \\hline number of SNe Ia & 47 & 59 & 86 & 0 \\\\ \\hline number of GRBs & 1 & 3 & 32 & 33 \\\\ \\hline total number & 48 & 62 & 118 & 33 \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Number of SNe Ia and GRBs that fall into the four bins
Figure 3: Estimates of the uncorrelated dark energy EOS parameters using the weak prior. Shown in turn are the plots of \(\widetilde{w}_{i}\) (\(i\) = 1-3) versus redshift, the window functions of \(\widetilde{w}_{i}\) (\(i\) = 1-4) with respect to the 4 bins, and the probability distributions of \(\widetilde{w}_{i}\) (\(i\) = 1-3), \(\widetilde{w}_{4}\), and \(\widetilde{w}_{i}-\widetilde{w}_{j}\).
Figure 4: Estimates of the uncorrelated dark energy EOS parameters using the strong prior. Same as Figure 3 except using the strong prior.
in the fourth bin, all of which are GRBs. From Table 1 it can be seen that the number ratio of the third bin to the fourth bin is about 4. Second, as mentioned earlier, the estimate of the behavior of dark energy at high redshift depends on its behavior at low redshift; consequently, the uncertainty of the EOS parameters at low redshift is reflected in the EOS parameters at high redshift. Therefore we get increasing errors as the redshift increases. Third, the density ratio of dark energy to matter is given by (assuming a constant EOS parameter for dark energy)
\\[\\frac{\\rho_{x}}{\\rho_{m}}=\\frac{\\rho_{x0}(1+z)^{3(1+w_{x})}}{\\rho_{m0}(1+z)^{3 }}\\approx 3(1+z)^{3w_{x}}. \\tag{26}\\]
For negative \(w_{x}\), the ratio decreases as \(z\) increases. For example, when \(w_{x}=-1\), the ratio is about \(1/9\) at \(z=2\). At higher redshift, matter dominates over dark energy, and dark energy becomes less important in determining the cosmic expansion. Thus the constraints imposed on the behavior of dark energy by the expansion history become weak compared with those at low redshift, where dark energy is important. Despite the large uncertainty in \(\widetilde{w}_{4}\), there is indeed some restriction imposed by GRBs. From the probability plots of \(\widetilde{w}_{4}\) in Figures 3, 4, and 5, it can be seen that there is an obvious cut at about zero. In other words, it is most probable that the ratio in Eq. (26) continues to decrease at redshifts beyond 1.8. The probability cut to the left of \(-200\) is due to the finite precision of the computer and can be viewed as negative infinity. To get substantial constraints on the dark energy EOS beyond 1.8, we need more GRB samples.
To see the overall improvement made by including GRB luminosity data, we calculate the figure of merit (FOM), which is defined by (Sullivan et al. 2007a,b)
\\[\\mathrm{FOM}=\\left[\\sum_{i}\\frac{1}{\\sigma^{2}(\\widetilde{w}_{i})}\\right]^{1 /2}. \\tag{27}\\]
For the results presented in Figure 5, \(\mathrm{FOM}=9.6\). If the GRB luminosity data are excluded, \(\mathrm{FOM}=8.8\).
## 5 Summary
We used a model-independent approach to constrain the evolution of dark energy. First, we separated the redshifts into 4 bins and assumed a constant EOS parameter for dark energy in each bin, then estimated the uncorrelated EOS parameters. We mainly used SNe Ia and GRBs in our analysis. Other constraints from SDSS, 2dFGRS, HST, and WMAP are also included. Compared with the results obtained without including GRB luminosity data, the confidence interval of the third uncorrelated EOS parameter, whose contribution mostly comes from the third bin, is reduced significantly. Even though the constraints at high redshift, where we have only GRBs, are very weak, from the obvious probability cut of the EOS parameter at about zero we can infer that the ratio of dark energy to matter most probably continues to decrease beyond redshift 1.8. To get substantial constraints at redshifts beyond those of SNe Ia, more GRBs are needed.
If the latest BAO measurements, which themselves favor a dark energy EOS of \\(w<-1\\), are included, the results show some evidence for an evolving dark energy EOS. Otherwise, the results are consistent with the cosmological constant.
###### Acknowledgements.
Shi Qi would like to thank Maurice HPM van Putten and Edna Cheung for helpful discussions and suggestions. This work was supported by the Scientific Research Foundation of the Graduate School of Nanjing University (for Shi Qi), the Jiangxi Project Innovation for PhD Candidates CX078-039x (for Fa-Yin Wang), and the National Natural Science Foundation of China under Grant No. 10473023.
Figure 5: Estimates of the uncorrelated dark energy EOS parameters using the strong prior. Same as Figure 4 except BAO constraints were updated with the latest measurements (Percival et al. 2007).
## References
* (1) Abazajian, K. et al. 2003, Astron. J., 126, 2081
* (2) Aldering, G. et al. 2004, arXiv:astro-ph/0405232
* (3) Allen, S. W., Schmidt, R. W., & Fabian, A. C. 2002, Mon. Not. Roy. Astron. Soc., 334, L11
* (4) Astier, P. et al. 2006, Astron. Astrophys., 447, 31
* (5) Binetruy, P., Deffayet, C., Ellwanger, U., & Langlois, D. 2000, Phys. Lett., B477, 285
* (6) Capozziello, S., Cardone, V. F., Carloni, S., & Troisi, A. 2003, Int. J. Mod. Phys., D12, 1969
* (7) Carroll, S. M., Duvvuri, V., Trodden, M., & Turner, M. S. 2004, Phys. Rev., D70, 043528
* (8) Chevallier, M. & Polarski, D. 2001, Int. J. Mod. Phys., D10, 213
* (9) Cooray, A. R. & Huterer, D. 1999, Astrophys. J., 513, L95
* (10) Copeland, E. J., Sami, M., & Tsujikawa, S. 2006, Int. J. Mod. Phys., D15, 1753
* (11) Dai, Z. G., Liang, E. W., & Xu, D. 2004, Astrophys. J., 612, L101
* (12) Davis, T. M. et al. 2007, Astrophys. J., 666, 716
* (13) Deffayet, C. 2001, Phys. Lett., B502, 199
* (14) Di Girolamo, T., Catena, R., Vietri, M., & Di Sciascio, G. 2005, JCAP., 04, 008
* (15) Dunkley, J. et al. 2008, arXiv:0803.0868
* (16) Dvali, G. R., Gabadadze, G., & Porrati, M. 2000, Phys. Lett., B484, 112
* (17) Eisenstein, D. J. et al. 2005, Astrophys. J., 633, 560
* (18) Firmani, C., Ghisellini, G., Ghirlanda, G., & Avila-Reese, V. 2005, Mon. Not. Roy. Astron. Soc., 360, L1
* (19) Friedman, A. S. & Bloom, J. S. 2005, Astrophys. J., 627, 1
* (20) Ghirlanda, G., Ghisellini, G., Lazzati, D., & Firmani, C. 2004, Astrophys. J., 613, L13
* (21) Hamuy, M., Phillips, M. M., Suntzeff, N. B., Schommer, R. A., & Maza, J. 1996, Astron. J., 112, 2408
* (22) Hawkins, E. et al. 2003, Mon. Not. Roy. Astron. Soc., 346, 78
* (23) Huterer, D. & Cooray, A. 2005, Phys. Rev., D71, 023506
* (24) Huterer, D. & Starkman, G. 2003, Phys. Rev. Lett., 90, 031301
* (25) Jha, S. et al. 2006, Astron. J., 131, 527
* (26) Komatsu, E. et al. 2008, arXiv:0803.0547
* (27) Lamb, D. Q. et al. 2005, arXiv:astro-ph/0507362
* (28) Li, H., Su, M., Fan, Z., Dai, Z., & Zhang, X. 2008, Phys. Lett., B658, 95
* (29) Liang, E.-W. & Zhang, B. 2005, Astrophys. J., 633, 611
* (30) Linder, E. V. 2003, Phys. Rev. Lett., 90, 091301
* (31) Maartens, R. 2007, J. Phys. Conf. Ser., 68, 012046
* (32) Nojiri, S. & Odintsov, S. D. 2003, Phys. Rev., D68, 123512
* (33) Nojiri, S. & Odintsov, S. D. 2006, eConf, C0602061, 06
* (34) Percival, W. J. et al. 2007, Mon. Not. Roy. Astron. Soc., 381, 1053
* (35) Perlmutter, S. et al. 1999, Astrophys. J., 517, 565
* (36) Riess, A. G. et al. 1998, Astron. J., 116, 1009
* (37) Riess, A. G. et al. 1999, Astron. J., 117, 707
* (38) Riess, A. G. et al. 2007, Astrophys. J., 659, 98
* (39) Schaefer, B. E. 2007, Astrophys. J., 660, 16
* (40) Spergel, D. N. et al. 2003, Astrophys. J. Suppl., 148, 175
* (41) Spergel, D. N. et al. 2007, Astrophys. J. Suppl., 170, 377
* (42) Su, M., Fan, Z., & Liu, B. 2006, arXiv:astro-ph/0611155
* (43) Sullivan, S., Cooray, A., & Holz, D. E. 2007a, JCAP, 0709, 004
* (44) Sullivan, S. et al. 2007b, arXiv:0709.1150
* (45) Tegmark, M. et al. 2004, Astrophys. J., 606, 702
* (46) Wang, F. Y. & Dai, Z.-G. 2006, Mon. Not. Roy. Astron. Soc., 368, 371
* (47) Wang, F. Y., Dai, Z. G., & Zhu, Z.-H. 2007, Astrophys. J., 667, 1
* (48) Wang, Y. & Mukherjee, P. 2007, Phys. Rev., D76, 103533
* (49) Wood-Vasey, W. M. et al. 2007, Astrophys. J., 666, 694
* (50) Wright, E. L. 2007, Astrophys. J., 664, 633
* (51) Xu, D., Dai, Z., & Liang, E. W. 2005, Astrophys. J., 633, 603
# Variational assimilation of Lagrangian data in oceanography
Maelle Nodet
[email protected]
## 1 Introduction
The world's oceans play a crucial role in governing the earth's weather and climate. Lack of data has been a serious problem in oceanography for a long time. Over the last ten years the number of observations has greatly increased, with the availability of satellite altimeter data (ie measurements of the free-surface height of the ocean) from Geosat, Topex/Poseidon, Jason and other satellites. In addition to these remote-sensing data we have in situ data, from scientific ships, surface mooring buoys or Lagrangian drifting buoys. Among these observations, Lagrangian data, ie positions of drifting floats, play a particularly relevant role for many reasons: firstly, their horizontal coverage is very wide (the whole Atlantic Ocean, for example); secondly, they give information about currents, temperature and salinity at depth which is complementary to the surface information given by satellite altimeters. For these reasons many national and international programs are organized to deploy drifting floats in the world's oceans. The largest program of this type is Argo (2 055 floats on 13 October 2005, 3 000 planned), whose floats also provide temperature and salinity profiles.
There are different types of drifting buoys. In the framework of ocean basin-scale localized experiments, oceanographers have datasets from acoustic floats. These floats emit acoustic signals which are recorded by moored listening stations, and the float positions are calculated every six hours by triangulation. Large datasets are available, especially in the Atlantic Ocean (SAMBA, ARCANE-Eurofloat, ACCE experiments). On a larger scale, Argo floats are deployed to provide vertical temperature and salinity profiles. Assimilation of Argo thermohaline data has been successfully investigated by Forget (PhD Thesis 2004). Argo floats also provide Lagrangian information, namely their positions every ten days. Indeed, they drift freely at a predetermined parking depth (around 1 000 meters); every ten days they descend to a greater depth (2 000 meters), then go back to the surface, recording temperature and salinity profiles during the ascent. At the surface they transmit their data to a satellite and are located by GPS. Thus many different float networks and Lagrangian datasets are available.
In parallel, modeling of the ocean system has greatly improved in both quality and realism, and there are many Ocean Global Circulation Models (OGCM), such as the OPA ocean model (see Madec _et al_ 1998). A crucial issue for oceanographers is then to take the best advantage of the different types of information included in models on the one hand and in various observations on the other hand. Data Assimilation (DA) covers all the theoretical and numerical mathematical methods which make it possible to blend all sources of information as optimally as possible (see the reviews by Ghil _et al_ 1997 and by De Mey 1997). There are two main categories of DA methods: variational methods based on optimal control theory (Lions 1968) and statistical ones based on optimal statistical estimation (Jazwinski 1970). The adjoint method is the prototype of variational methods, introduced in meteorology by Penenko and Obraztsov (1976). Its effective implementation in the framework of atmospheric Data Assimilation, namely four dimensional variational assimilation (4D-Var), has been studied by Le Dimet and Talagrand (1986, see also Talagrand and Courtier 1987). The introduction of 4D-Var in oceanography is even more recent (see Thacker and Long 1988, Sheinbaum and Anderson 1990). The prototype of sequential methods is the Kalman filter, introduced in oceanography by Ghil (1989) (see also the review by Ghil and Malanotte-Rizzoli 1991).
Assimilation of Lagrangian data is an emerging field. Kamachi and O'Brien (1995) used the adjoint method in a Shallow-Water model with the upper-layer thickness as control vector. More recently, Mead (2004) implemented a variational method based on the use of Lagrangian coordinates for the Shallow-Water equations. Molcard _et al_ (2003) and Ozgokmen _et al_ (2003) implemented optimal interpolation (which is a simple sequential method) in a reduced-gravity quasi-geostrophic model and in a primitive equations model; their method is based on the conversion of Lagrangian data into velocity information. Ide, Kuznetsov, Jones and Salman (2002, 2003, 2005) used Extended and Ensemble Kalman methods to assimilate Lagrangian data into a Shallow-Water model; their method is based on an augmented state vector approach which does not require the conversion of the positions into velocity data. These teams used simulated data in the twin experiments approach: they do not use real Lagrangian data but idealized observations simulated from a known "true state" of the ocean. Besides these studies, Assenbaum and Reverdin (2005) assimilated real data available during the POMME experiment, including Argo float data, into a very high resolution model thanks to optimal interpolation.
Previous works on Lagrangian DA were based either on sequential methods or on variational ones in very simple models. In this paper we investigate variational assimilation of drifter positions into the high resolution primitive equations model OPA.
The aim of variational assimilation methods is to identify the initial state of an evolution problem which minimizes a cost function. This cost function represents the difference between the observations and their model counterparts. It is minimized using a gradient descent algorithm, the gradient being computed by integration of the adjoint model. Thanks to this formulation there is no need to convert Lagrangian data into velocity data: we can use the position observations directly, although they are not variables of the ocean model but nonlinear functions of the state variables. Moreover this method is a four dimensional one because the temporal dimension of the observations, i.e. their Lagrangian nature, is taken into account. The cost function involves a so-called observation operator, which links the state variables (here the velocities) and the observed data (here the positions of drifting particles). This operator is nonlinear and consequently the cost function is not necessarily convex, so we use an incremental method (see Courtier _et al_ 1994) in order to achieve and accelerate the minimization.
We implement our method using the primitive equations model OPA. Our configuration is an idealized wind-driven mid-latitude box model, which is representative of the different processes that are going on in the real mid-latitude ocean, as shown by Holland (1978). Then we use the twin experiments approach. As we have said before, Lagrangian observations are simulated from a known "true state", so that the data are perfectly consistent with the model and there is no systematic bias in the observations. This is of course unrealistic, but twin experiments are a necessary first step to validate our method. Indeed in this framework we know exactly the true state of the system and so we are able to quantify the efficiency of our method by comparing assimilated and true states. Moreover there are several reasons not to use real data at this stage: firstly, Argo floats have not been launched to provide Lagrangian information, so the feasibility of exploiting their positions is by no means guaranteed. Secondly, Lagrangian datasets (from Argo to acoustic floats) are very diverse in terms of number of floats, time-sampling period of the observations and drifting depth, and we want to investigate the sensitivity of our method to these parameters. In order to take into account the difficulties of real data (such as drift during ascent and descent for Argo floats or acoustic positioning problems) we also study the impact of observation errors on the assimilation efficiency.
The paper is organized as follows: in section 2 we describe the physical model and the Lagrangian simulated data. In section 3 we present the assimilation method and its implementation. Some numerical results are given and commented in section 4. We conclude in section 5.
## 2 Physical model and Lagrangian data
### 2.1 The Primitive Equations of the ocean
The ocean circulation model used in our study is a Primitive Equations (PE) model. These equations are derived from the Navier-Stokes equations (mass conservation and momentum conservation, including the Coriolis force) coupled with a state equation for water density and a heat equation, under the Boussinesq and hydrostatic approximations (for more details see Lions _et al_ 1992 and Temam and Ziane 2004).
These equations are written as
\\[\\left\\{\\begin{array}{ll}\\partial_{t}u-b\\Delta u+(U.\
abla_{2})u+w\\partial_{z }u-av+\\partial_{x}p=0&\\mbox{ in }\\Omega\\times(0,t_{f})\\\\ \\partial_{t}v-b\\Delta v+(U.\
abla_{2})v+w\\partial_{z}v+au+\\partial_{y}p=0&\\\\ \\partial_{z}p-gT=0&\\\\ \\partial_{t}T-b\\Delta T+(U.\
abla_{2})T+w\\partial_{z}T+fw=0&\\mbox{ in }\\Omega\\times(0,t_{f})\\\\ w(x,y,z)=-\\int_{0}^{z}\\partial_{x}u(x,y,z^{\\prime})+\\partial_{y}v(x,y,z^{ \\prime})\\,dz^{\\prime}&\\mbox{ in }\\Omega\\times(0,t_{f})\\\\ U(t=0)=U_{0},\\qquad T(t=0)=T_{0}&\\mbox{ in }\\Omega\\end{array}\\right. \\tag{1}\\]
where
- \\(\\Omega=\\Omega_{2}\\times(0,1)\\) is the circulation basin, where \\(\\Omega_{2}\\) is a regular bounded open subset of \\(\\mathbb{R}^{2}\\), \\(x\\) and \\(y\\) are the horizontal variables and \\(z\\in(0,1)\\) is the vertical one, \\((0,t_{f})\\) is the time interval;
- \\(U=(u,v)\\) is the horizontal velocity, \\(w\\) is the vertical velocity, \\(T\\) the temperature and \\(p\\) the pressure;
- \\(U_{0}=(u_{0},v_{0})\\) and \\(T_{0}\\) are the initial conditions;
- \\((\
abla_{2}.)\\) is the horizontal divergence operator and \\(\\Delta=\\partial_{xx}+\\partial_{yy}+\\partial_{zz}\\) the 3-D Laplace operator;
- \\(a\\), \\(b\\), \\(f\\), \\(g\\) are physical constants.
The space boundary conditions are
\\[\\left\\{\\begin{array}{ll}\\partial_{z}u=\\tau_{u},&\\partial_{z}v=\\tau_{v},&T=0 \\quad\\mbox{on }\\Gamma_{t}\\\\ u=0,&v=0,&T=0\\quad\\mbox{on }\\partial\\Omega\\setminus\\Gamma_{t}\\\\ \\int_{z=0}^{1}\\partial_{x}u+\\partial_{y}v\\,dz=0&\\mbox{ in }\\Omega_{2}\\end{array}\\right. \\tag{2}\\]
where \\(\\tau=(\\tau_{u},\\tau_{v})\\) is the stationary wind-forcing, \\(\\partial\\Omega\\) is the boundary of \\(\\Omega\\) and \\(\\Gamma_{t}=\\Omega_{2}\\times\\{z=1\\}\\) is its top boundary.
### 2.2 Model and configuration
We are using the OPA ocean circulation model developed by LODYC (see Madec _et al_ 1998), in its 8.1 version. OPA is a flexible model and can be used either in regional or in global ocean configuration. The prognostic variables are the three-dimensional velocity field \\((u,v,w)\\) and the thermohaline variables \\(T\\) and \\(S\\). Discretization is based on finite differences in space and time (leap-frog scheme in time). Various physical choices are available to describe ocean physics.
The characteristics of our configuration are as follows:
- The domain is \\(\\Omega=(0,l)\\times(0,L)\\times(0,H)\\) (longitude, latitude, depth), with \\(l=2800\\) km, \\(L=3600\\) km and \\(H=5000\\) m. It extends from \\(-56^{o}\\) to \\(-24^{o}\\) West longitude, and from \\(22.5^{o}\\) to \\(47.5^{o}\\) North latitude.
- The horizontal resolution is 20 km, there are 11 vertical levels, so that the number of grid points is 180\\(\\times\\)140\\(\\times\\)11 = 277200.
- The time step is 1200 seconds.
- The model is purely wind-driven.
This configuration is a classical eddy-resolving double-gyre circulation. As shown by Holland (1978) this model is representative of real mid-latitude oceans, where the circulation is highly nonlinear and non-stationary and where oceanic turbulence is very active. Indeed a very active and unstable mid-latitude jet develops at the convergence of the subpolar gyre and the subtropical gyre. Non-stationary mesoscale eddies also form along the jet. So this model exhibits dynamically different processes such as large-scale gyres, a mid-latitude jet, mesoscale eddies and also western boundary currents, which interact in a complex way. Therefore this configuration is a difficult and interesting situation in which to study Lagrangian DA.
The model is integrated for 25 years until a statistically steady-state is reached, which is our \"true state\" for the twin experiments. Figure 1 shows an instantaneous horizontal velocity field at the surface, on the whole horizontal grid on the left and on a reduced grid on the right. We can see the mid-latitude jet and some mesoscale eddies.
### 2.3 Lagrangian data
Lagrangian data are positions of drifting floats. These floats drift between \\(z_{0}-a\\) and \\(z_{0}+a\\) where \\(z_{0}\\) is given by the user and \\(a\\) is around 25 meters, so that we can assume that the floats drift at the fixed depth \\(z_{0}\\), i.e. in the horizontal plane \\(z=z_{0}\\) (Assenbaum 2005, personal communication). We denote by \\(\\xi(t)=(\\xi_{1},\\xi_{2})(t)\\) the position of one float at time \\(t\\) in the plane \\(z=z_{0}\\). \\(\\xi(t)\\) is the solution of the following differential equation:
\\[\\left\\{\\begin{array}{rcl}\\frac{d\\xi}{dt}&=&U(t,\\xi(t),z_{0})\\\\ \\xi(0)&=&\\xi_{0}\\end{array}\\right. \\tag{3}\\]
where \\(U=(u,v)\\) is the horizontal velocity of the flow and \\(\\xi_{0}\\) the initial position of the float. It is important to notice that the mapping \\(U\\mapsto\\xi\\), which links the variables of the model and the Lagrangian observations is nonlinear. In the twin experiments approach, observations are simulated by the model. The true initial state of the ocean is given and OPA model computes the true velocities of the ocean during a ten-day window. We compute on-line perfect observations.
To do that we integrate numerically the equation (3) using a leapfrog scheme. This requires the velocity \\(U\\) along the trajectory of the float (ie out of the grid). To achieve this we use the following continuous 2D interpolation
Figure 1: Instantaneous horizontal velocity field of the true state. On the left the velocity field at the surface on the whole horizontal grid. On the right the velocity field at the surface on a reduced grid centered on the mid-latitude jet. Dark grey vectors represent large velocities.
'interp\\((U,(x,y))\\)' of the vector field \\(U\\) at the point \\((x,y)\\):
\\[\\begin{array}{ll}x_{1}=\\lfloor x\\rfloor,&y_{1}=\\lfloor y\\rfloor,\\\\ u_{1}=U(x_{1},y_{1}),&u_{2}=U(x_{1}+1,y_{1}),\\\\ u_{3}=U(x_{1},y_{1}+1),&u_{4}=U(x_{1}+1,y_{1}+1),\\\\ \\mbox{interp}(U,(x,y))=u_{1}+(u_{2}-u_{1})(x-x_{1})+(u_{3}-u_{1})(y-y_{1})\\\\ \\qquad+(u_{1}-u_{2}-u_{3}+u_{4})(x-x_{1})(y-y_{1})\\end{array}\\]
where \\(\\lfloor.\\rfloor\\) denotes the floor function, \\((x_{1},y_{1})\\), \\((x_{1}+1,y_{1})\\), \\((x_{1},y_{1}+1)\\) and \\((x_{1}+1,y_{1}+1)\\) are the grid points which are the nearest neighbors to \\((x,y)\\). This function is piecewise affine with respect to \\(x\\) and \\(y\\), continuous with respect to \\((x,y)\\), linear with respect to \\(u\\). Thus it is not differentiable in \\((x,y)\\) everywhere. More precisely it is not differentiable at \\((x,y)\\) if and only if \\(x=x_{1}\\) or \\(y=y_{1}\\). It will be a problem to derive the adjoint code, see paragraph 3.3. However it is accurate enough to approximate the solution of equation (3) and it is very costly to use a differentiable interpolation. Indeed such a method (like cubic splines for example) would compute each interpolated value from the whole field \\(u\\) (ie the values of \\(u\\) at every horizontal grid point) and we would have to inverse a \\(n\\) by \\(n\\) matrix (where \\(n=25\\,200\\) is the number of horizontal grid points) at every time step and this is not workable.
Let us denote by \\(\\xi_{k}=(\\xi_{1,k},\\xi_{2,k})\\) the horizontal position of the float at time \\(t_{k}\\), by \\(U\\) the horizontal velocity field of the fluid at time \\(t_{k}\\), by \\(U_{k}\\) the velocity at the point \\(\\xi_{k}\\), and by \\(h\\) the time step of the ocean model. One step of the algorithm is schematically
\\[\\left\\{\\begin{array}{rcl}\\xi_{k}&=&\\xi_{k-2}+2h\\,U_{k-1}\\\\ U_{k}&=&\\mbox{interp}(U,\\xi_{k})\\end{array}\\right.\\]
The dataset is \\(\\{\\xi_{N},\\xi_{2N},\\xi_{3N},\\ldots\\}\\), where \\(N\\) is an integer. The duration between two data points is thus the product of \\(N\\) by the time step \\(h\\) of the code (\\(h=1200\\) seconds). We call this the time-sampling period. For example if \\(N=72\\) we have one datum per float and per day, and the time-sampling period is thus one day.
In order to simulate real floats we can add errors to the simulated observations. The origins of errors are multiple: for acoustic floats, inaccuracy can come from the acoustic sources (accuracy of their positioning, clock accuracy, bottom topography - the acoustic shadow problem, etc.), from the floats themselves (listening period accuracy, complexity of the trajectory, technical problems - temporary "deafness", etc.) or from the quality of communications. For Argo floats, errors are due to drift during ascent and descent and also to drift at the surface between ascent/descent and satellite communication. Error amplitudes are around 3 to 4 kilometers for acoustic floats (T. Reynaud, private communication) and 2 to 6 kilometers for Argo floats (M. Assenbaum, private communication).
Figure 2 represents perfect data simulated by the algorithm with 2 000 floats for 10 days at level 4 (1 000 meters).
## 3 Description and implementation of the variational assimilation method
### 3.1 Description of the assimilation problem
Without loss of generality we assume that there is only one assimilated datum: the dataset is the position \\(\\xi(t_{1})=(\\xi_{1}(t_{1}),\\xi_{2}(t_{1}))\\) in the horizontal plane \\(z=z_{0}\\) of a single float at a single time \\(t_{1}\\). We use here the notations established by Ide _et al_ (1997). We denote this datum by \\(\\mathbf{y}^{o}=(\\xi_{1}(t_{1}),\\xi_{2}(t_{1}))\\). Our problem is to minimize the following cost function with respect to the
Figure 2: Trajectories of 2 000 floats drifting at level 4 during ten days. On the left, trajectories on the whole horizontal grid at level 4. On the right, trajectories on a reduced grid around the mid-latitude jet. Very different trajectories are observed: short to long, half-circle or straight.
control vector \\({\\bf x}\\):
\\[\\begin{array}{rclrcl}{\\cal J}({\\bf x})&=&\\frac{1}{2}\\|{\\cal GM}({\\bf x})-{\\bf y }^{o}\\|^{2}&+&\\frac{\\omega}{2}\\|{\\bf x}-{\\bf x}^{b}\\|_{\\bf B}^{2}\\\\ &=&{\\cal J}^{o}({\\bf x})&+&\\omega\\,{\\cal J}^{b}({\\bf x})\\end{array} \\tag{4}\\]
where
- the control vector \\({\\bf x}=(u_{0},v_{0},T_{0})\\) is the initial state vector,
- the \\({\\bf B}\\)-norm is calculated thanks to the background error covariance matrix \\({\\bf B}^{-1}\\): \\(\\|\\psi\\|_{\\bf B}^{2}=\\psi^{T}{\\bf B}^{-1}\\psi\\),
- \\({\\bf x}^{b}\\) is another initial state of the ocean, called the background or first guess, which is required to be close to the minimum \\(\\bar{\\bf x}\\),
- \\({\\cal M}\\) is the discrete ocean model and \\({\\cal M}({\\bf x})\\) is the discrete state vector (one value per variable, per grid-point and per time step),
- \\({\\cal G}\\) is the discrete nonlinear observation operator, which links the state of the fluid (and especially the horizontal velocity \\(U\\)) with the data, \\({\\cal GM}({\\bf x})=\\xi(t)\\) where \\(\\xi(t)\\) is defined by equation (3),
- \\(\\|.\\|\\) is the Euclidean norm in \\(\\mathbb{R}^{2}\\),
- \\({\\bf y}^{o}\\) is the observations vector.
Then \\({\\cal J}^{o}\\) quantifies the misfit between observations and the state of the system, \\({\\cal J}^{b}\\) represents the distance (in terms of the \\({\\bf B}\\)-norm) between the control vector and the background. It is also a regularization term thanks to which the inverse problem of finding the minimum \\({\\bf x}^{*}\\) becomes well-posed. The parameter \\(\\omega\\) represents the relative weight of the regularization term with respect to the observation term and it must be chosen carefully.
### 3.2 Numerical variational assimilation: incremental 4D-Var
Four dimensional variational assimilation (4D-Var, see Le Dimet and Talagrand 1986) is an iterative numerical method which aims to approximate the solution \\({\\bf x}^{*}\\) of discrete assimilation problems with a cost function of type (4).
In 4D-Var a gradient descent algorithm is used to minimize the cost function, the gradient being obtained by solving the discrete adjoint equations. It is an efficient method but it is very costly when the direct model and the observation operator are not linear, for at least two reasons. Firstly, every iteration of the adjoint method requires one integration of the full nonlinear direct model and one integration of the adjoint of the linearized model. Secondly, the cost function is not necessarily convex and the minimization process may converge to a local minimum, take considerable time to converge, or not converge at all.
Incremental 4D-Var (see Courtier _et al_ 1994) avoids, to some extent, both of these problems. In this approach, the nonlinear model is approximated by a simplified linear model (called the tangent linear model) and the nonlinear observation operator is linearized around a reference state. The cost function becomes quadratic, so it has a unique minimum, and this minimum is assumed to be close to that of the full nonquadratic cost function. In that case the minimization process converges quickly. Moreover this approach takes weak nonlinearities into account, because the tangent linear model and the adjoint model are updated three or four times.
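The structure of the incremental algorithm can be summarized by the following sketch, in which SciPy's L-BFGS minimizer stands in for a generic descent algorithm; the functions `quad_cost`, `quad_grad` and `update_linearization` are hypothetical, so this is a schematic outline rather than the OPAVAR implementation.

```python
import numpy as np
from scipy.optimize import minimize

def incremental_4dvar(x_b, quad_cost, quad_grad, update_linearization,
                      n_outer=4, n_inner=10):
    """Outer loops relinearize the model and the observation operator around
    the current reference state; inner loops minimize the resulting
    quadratic approximation of the cost with respect to the increment."""
    x_ref = np.asarray(x_b, dtype=float).copy()
    for _ in range(n_outer):
        update_linearization(x_ref)  # rebuild tangent/adjoint around x_ref
        res = minimize(quad_cost, np.zeros_like(x_ref), args=(x_ref,),
                       jac=quad_grad, method="L-BFGS-B",
                       options={"maxiter": n_inner})
        x_ref = x_ref + res.x        # add the analysis increment
    return x_ref
```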
**Remark 1**: _The approximation of the full nonlinear model by the tangent linear model is called the tangent linear hypothesis (TLH). In our highly nonlinear configuration, we have to use a ten-day time-window so that the TLH is valid (see section 4)._
### 3.3 Implementation in OPAVAR
The OPAVAR 8.1 package developed by Weaver _et al_ (2003) includes the direct nonlinear model OPA 8.1 developed by LODYC, the tangent linear model, the adjoint model and a minimization module. Weaver has implemented a preconditioning through the \\(\\mathbf{B}\\) matrix, via the change of variables \\(\\delta\\mathbf{w}=\\mathbf{B}^{-1/2}\\delta\\mathbf{x}\\), following the method introduced by Courtier _et al_ (1994). The observation operators of OPAVAR 8.1 are interpolation and projection operators. To assimilate Lagrangian data we have implemented the nonlinear observation operator (see section 2.3), its linearization around the reference trajectory and the adjoint of the linear observation operator.
To obtain the tangent (and adjoint) codes of the discrete observation operator we use the recipes for (hand-coding) adjoint code construction of Talagrand (1991) and Giering and Kaminski (1998). The direct and tangent algorithms are schematically:
* Direct code: \\[\\left\\{\\begin{array}{l}\\xi_{k}=\\xi_{k-2}+2h\\,U_{k-1}\\\\ U_{k}=\\mbox{interp}(U,\\xi_{k})\\end{array}\\right.\\] where 'interp' is the interpolation function of \\(U\\) at point \\(\\xi\\) (see section 2.3).
* Linear tangent code: \\[\\left\\{\\begin{array}{l}\\delta\\xi_{k}=\\delta\\xi_{k-2}+2h\\,\\delta U_{k-1}\\\\ \\delta U_{k}=\\mbox{interp}(\\delta U,\\xi_{k})+\\delta\\xi_{k}.\\partial_{(x,y)}\\mbox{interp}(U,\\xi_{k})\\end{array}\\right.\\] where '\\(\\partial_{(x,y)}\\)interp' is the derivative of the 'interp' function with respect to \\((x,y)\\). The term \\(\\partial_{(x,y)}\\)interp is specific to Lagrangian data. It leads to a slight difficulty, because the function 'interp' is linear with respect to \\(U\\) but it is not differentiable with respect to \\((x,y)\\) at points with integer coordinates. We therefore chose the values of that derivative at these points using centered finite differences.
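A possible Python transcription of this tangent step, reusing the `interp` function sketched in section 2.3 and falling back to centered finite differences at integer coordinates (the one-cell step `eps` is our assumption), is:

```python
import numpy as np

def d_interp_dxy(U, x, y, eps=1.0):
    """Jacobian (shape (2, 2)) of 'interp' with respect to (x, y); at points
    with integer coordinates, where the bilinear map is not differentiable,
    centered finite differences are used instead."""
    if float(x).is_integer() or float(y).is_integer():
        ddx = (interp(U, x + eps, y) - interp(U, x - eps, y)) / (2 * eps)
        ddy = (interp(U, x, y + eps) - interp(U, x, y - eps)) / (2 * eps)
    else:
        x1, y1 = int(np.floor(x)), int(np.floor(y))
        u1, u2 = U[:, x1, y1], U[:, x1 + 1, y1]
        u3, u4 = U[:, x1, y1 + 1], U[:, x1 + 1, y1 + 1]
        ddx = (u2 - u1) + (u1 - u2 - u3 + u4) * (y - y1)  # d interp / dx
        ddy = (u3 - u1) + (u1 - u2 - u3 + u4) * (x - x1)  # d interp / dy
    return np.stack([ddx, ddy], axis=1)

def tangent_step(dxi_km2, dxi_km1, xi_km1, U, dU, h):
    """One step of the linear tangent code:
    dxi_k = dxi_{k-2} + 2h dU_{k-1}, where
    dU_{k-1} = interp(dU, xi_{k-1}) + J_interp(U, xi_{k-1}) . dxi_{k-1}."""
    dU_km1 = interp(dU, *xi_km1) + d_interp_dxy(U, *xi_km1) @ dxi_km1
    return dxi_km2 + 2.0 * h * dU_km1
```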
## 4 Numerical results
In this section we present the results of our numerical experiments. We begin with a brief description of our choices.
Background and time-window width.In these experiments we have assimilated only Lagrangian data and we assume that the true initial temperature and salinity are known. The background and the time-window width are related because of the incremental formulation: indeed the full nonlinear model is linearized _around the background over the whole time-window_. When the background is too different from the true state or the time-window is too wide, the approximation errors are large, i.e. the _tangent linear hypothesis_ (TLH) is not valid any more. So we compute some correlations to choose both of them. If we denote by \\(\\mathbf{x}^{t}\\) the true state and by \\(\\mathbf{M}\\) the tangent linear model, we can compute the nonlinear and linear perturbations \\(\\delta_{1}\\) and \\(\\delta_{2}\\):
\\[\\delta_{1}=\\mathcal{M}(\\mathbf{x}^{t})-\\mathcal{M}(\\mathbf{x}^{b}),\\quad \\delta_{2}=\\mathbf{M}(\\mathbf{x}^{t}-\\mathbf{x}^{b})\\]
Then we compute (as a function of time) the spatial correlation between \\(\\delta_{1}\\) and \\(\\delta_{2}\\) according to the formula:
\\[Cor(\\delta_{1},\\delta_{2})=\\frac{\\langle\\delta_{1}\\delta_{2}\\rangle-\\langle \\delta_{1}\\rangle\\langle\\delta_{2}\\rangle}{\\sqrt{(\\langle\\delta_{1}^{2} \\rangle-\\langle\\delta_{1}\\rangle^{2})(\\langle\\delta_{2}^{2}\\rangle-\\langle \\delta_{2}\\rangle^{2})}} \\tag{5}\\]
where \\(\\langle X\\rangle\\) is the spatial mean of \\(X\\). The closer to \\(1\\) the correlation is, the better the adequacy between the fields is. Table 1 shows correlation at the end of the time-window between linear and nonlinear perturbationsfor different time-window widths (10 or 20 days) and different background choices (the state of the ocean 10 days or 1 month before the true initial state). We can see that the TLH is not valid with a 20-day window. In the sequel we use a 10-day time-window and the state of the ocean ten days before the true one as a background state, as in this context the TLH is valid. We ran the model with the background as initial state and we obtained a \"without-assimilation\" state, called background in the sequel. It will be compared to the assimilated state in order to quantify the efficiency of the assimilation process.
The B matrix.The choice of the \\(\\mathbf{B}\\) matrix is crucial because of its dual purpose (preconditioning and regularization). We first tested very simple matrices (identity, energy weights) and the results were quite bad: the convergence was very slow and the analysis increments were very noisy. These matrices are used in OPAVAR only for debugging purposes, in extremely idealized assimilation contexts, for example when the whole state vector is observed, i.e. with data everywhere. In order to obtain smoother increments and to accelerate the convergence, we then used the diffusion filter method (see Weaver and Courtier 2001), which gives good results.
Diagnostics.Our diagnostics are based on RMS error between the true velocity and the assimilated one, compared with the RMS error between the true velocity and the background one. The RMS error is plotted as a function of time or of the vertical level or of another parameter. For example, we have
Table 1: Background and time-window width choices: spatial correlation between nonlinear and linear perturbations at the end of the time-window, according to formula (5). The background is the state of the ocean 10 days or 1 month before the true initial state.

| time-window width | background | correlation |
| --- | --- | --- |
| 10 days | 10 days | 0.80 |
| 10 days | 1 month | 0.67 |
| 20 days | 10 days | 0.50 |
| 20 days | 1 month | 0.42 |
the following formula for the time-dependent RMS error:
\\[\\text{error}(u,t)=\\Big{(}\\frac{\\sum_{i,j,k}|u_{t}(i,j,k,t)-u(i,j,k,t)|^{2}}{\\sum_ {i,j,k}|u_{t}(i,j,k,t)|^{2}}\\Big{)}^{1/2} \\tag{6}\\]
where \\(u_{t}\\) is the true state, \\(u\\) the assimilated state (or the background), \\(t\\) is the time and \\((i,j,k)\\) a grid point, where \\((i,j)\\) are the horizontal coordinates and \\(k\\) the vertical one.
We performed the following experiments: a first experiment with diagnostics in section 4.1, sensitivity to the float network parameters (time sampling of the position measurements, number of floats, drifting level, coupled impact of number and time sampling) in section 4.2, a comparison with another variational method in section 4.3, and the assimilation of noisy observations in section 4.4.
### 4.1 First experiment
We present here the results of a typical experiment. There are \\(3\\,000\\) floats drifting at level \\(4\\) in the ocean for \\(10\\) days. The Lagrangian data are collected once a day. Thus the total amount of data is \\(2\\times 3\\,000\\times 10=60\\,000\\).
Figure 3 (on the left) shows the RMS error of the experiment as a function of time, according to formula (6). We have put the error for the background (= the without-assimilation state) on the same plot.
Figure 3 (on the right) shows the total RMS error as a function of the vertical level (where \\(1\\) represents the surface and \\(10\\) the bottom), according to the formula:
\\[\\text{error}(u,k)=\\Big{(}\\frac{\\sum_{i,j,t}|u_{t}(i,j,k,t)-u(i,j,k,t)|^{2}}{ \\sum_{i,j,t}|u_{t}(i,j,k,t)|^{2}}\\Big{)}^{1/2} \\tag{7}\\]
We can see that the error with assimilation is half of the error without assimilation. Moreover the assimilation process improves every vertical level and not only the \\(4\\)th one.
**Remark 2**: _We can notice that the RMS error at the beginning is quite large. This is explained by the following fact: the relative weight of the regularization term \\(\\mathcal{J}^{b}\\) (see section 3.1) has to be large enough to ensure the convergence of the minimization process, thus the assimilated state is a compromise between the background and the observations._
Figure 4 shows the horizontal velocity field \\(U=(u,v)\\) at level \\(1\\) at the final time. We can notice that the main patterns such as the mid-latitude jet and the bigger eddies are quite similar to the true ones.
Figure 4: First experiment: horizontal surface velocity field at the final time. The true field is displayed on the left. The field corresponding to the assimilation of floats positions is displayed on the middle. For reference the field obtained without assimilation (background) is displayed on the right.
Figure 3: First experiment: u+v RMS errors corresponding to the assimilation of the positions (sampled 4 times a day) of 3 000 floats drifting at level 4. On the left, RMS error as a function of time. On the right, RMS error as a function of the vertical level. For reference the error without assimilation (background) is also displayed.
Longer experiments.We have seen before that the TLH is not valid for a 20-day window, so that we cannot use incremental 4D-Var for longer windows. However we can restart the assimilation process over the next 10-day window in order to carry out longer experiments: the new background (at the beginning of the new window) is the previous assimilated state (at the end of the previous window), so that the new background carries information from the previous assimilation process. Table 2 shows the relative RMS errors (7) for \\(u+v\\) of a 30-day experiment at different times. We can see that the error with assimilation is lower than 15% at the end of the window, i.e. less than one fourth of the error without assimilation. This is a very good result: 3 successive assimilation processes enable us to reconstruct a very good approximation of the true state.
### 4.2 Sensitivity to the floats network parameters
In operational oceanography, sensitivity analysis is central to observational network design. Indeed, in situ and remote observation instruments are very expensive (e.g. one Argo float costs $15 000) and they must be used optimally. Ngodock (PhD thesis 1996) showed that second order analysis (i.e. the derivation of the optimality system or _second order adjoint system_, see also Wang _et al_ 1992) makes it possible to analyze the sensitivity of the 4D-Var assimilation system to the design of the observational network. Second order adjoint information (see also the review paper by Le Dimet _et al_ 2002) is actually central to adaptive observation networks and observation targeting issues. However it requires the storage of the model, tangent and adjoint trajectories, so that it is not workable in OPAVAR at the present time because of computer memory limitations.
So we have performed many experiments to analyze the sensitivity of our assimilation process to various parameters. Indeed in the ocean the network's
Table 2: RMS errors for u+v (in %) at different instants during a 30-day experiment, with and without assimilation of the positions of 1 000 floats drifting at level 4, sampled once a day.

| Experiment | t = 0 | t = 9 days | t = 19 days | t = 29 days |
| --- | --- | --- | --- | --- |
| Without Assimilation | 70.8 | 61.8 | 62.3 | 59.8 |
| With Assimilation | 55.4 | 34.9 | 21.7 | 13.6 |
parameters can vary widely, from Argo floats (1000-2000 meters depth, one datum per 10 days) to acoustic floats (various depths, time sampling period around 6 hours) or drifters in the upper ocean (near the surface, very short time sampling period).
Here are the parameters that we consider:
- the time sampling period, varying from 6 hours to 10 days,
- the number of floats, varying from 300 to 3000,
- the vertical level of drift, varying from 1 (surface) to 10 (bottom).
We also analyze the coupled effect of the number of floats and the time sampling period.
#### 4.2.1 Sensitivity to the time sampling period.
The framework of this experiment is the following: we performed seven different experiments with exactly the same initial conditions, namely 3 000 floats at level 4. The only difference between these experiments is the time sampling period, which will be denoted by TSP in the sequel. The experiments are denoted TSP-xxx where xxx is the time sampling period, in hours (6 or 12h) or in days (1, 2, 3, 5 or 10d). Figure 5 shows the RMS error as a function of time for each TSP experiment, except (for readability) experiments TSP-6h, TSP-12h and TSP-2d. It also shows the total RMS error as a function of the time sampling period. We can see that our method is robust with respect to the increase of the time sampling period. This is very encouraging. Indeed every prior study is very sensitive to the TSP and shows quite bad results when the TSP is larger than 2 or 3 days (see Molcard _et al_ 2003, Mead 2005 and section 4.3). Our method does not show this sensitivity; the velocities are quite well reconstructed even when the TSP is large, and especially with a 10-day period, which is a very positive result.

Figure 5: Sensitivity to the time sampling period: u+v RMS errors corresponding to assimilation of 3 000 floats' positions with different time-sampling periods of observation. On the left: error as a function of time for each 'TSP' experiment and for the background (without-assimilation reference state). On the right: final error as a function of the TSP; the error without assimilation is also displayed.
#### 4.2.2 Sensitivity to the number of floats.
We perform five experiments with a varying number of floats drifting at the same vertical level (4) and with positions sampled with the same period (6 hours). The experiments are denoted by NUM-xxx, where xxx is the number of floats. Figure 6 shows the RMS error as a function of time and the total RMS error as a function of the number of floats for each experiment. We can see that the number of floats has a great influence on the results. Below a minimal number (1 000) the velocities are badly reconstructed, undoubtedly
because there is not enough information to constrain the flow. However, when we perform longer experiments, we get satisfactory results for small numbers like 500 and 300. For a ten-day window the results are optimal with a 1 000-float network and they do not improve with higher numbers. Obviously the information becomes redundant and it is useless to add floats. The associated density is one float per 10 000 km\\({}^{2}\\), which is ten times more than the planned Argo density (namely around 100 floats in our configuration). Even if we perform longer experiments, the Argo density is too small to constrain the velocity field. It is more appropriate to use Lagrangian data from localized experiments such as acoustic float launchings in the Atlantic Ocean (e.g. SAMBA), whose float densities are higher.
#### 4.2.3 Sensitivity to the vertical drift level.
Again we perform seven experiments with 3 000 floats drifting at varying vertical levels and fixed TSP (6 hours). As usual we denote by LEV-x the experiment involving floats at level x. Figure 7 shows the RMS error as a function of time for the upper levels on the left and the lower levels on the right.
Figure 6: Sensitivity to the number of floats: u+v RMS errors corresponding to assimilation of 300 to 3 000 floats’ positions. On the left: error as a function of time for each ’NUM’ experiment and for the background (without assimilation). On the right: final error as a function of the number of floats; the error without assimilation is also displayed.
Again the results are very sensitive to the position of the floats. The three best levels are 3, 4 and 5, i.e. the intermediate levels. From a physical point of view this is coherent, because the information propagates vertically with a finite velocity, so that the very upper (1, 2) and very lower (7 to 10) levels are penalized. Moreover the upper levels (1 to 4) are the most energetic ones (from the turbulent kinetic energy point of view), almost ten times more than the lower ones (levels 5 to 10); it seems quite natural that the best results are obtained with floats drifting at level 4, which is both intermediate and energetic.
#### 4.2.4 Coupled impact of number of floats and time sampling period.
Here we look at the coupled effect of a varying number of floats and a varying TSP, for example in order to answer the following question: is the total number of data an important variable to measure the efficiency of the assimilation? So we perform nine experiments, denoted by nnn-xxx where nnn is the number of floats and xxx is the TSP. These experiments and their final
Figure 7: Sensitivity to the vertical drift level: time-evolution of the u+v RMS errors corresponding to the assimilation of positions of floats drifting at different depths. On the left: error for upper levels experiments (above 1 000 meters) and for the background (without assimilation). On the right: error for lower levels experiments (below 1 000 meters) and for the background (without assimilation).
RMS errors are described in Table 3. Figure 8 represents the RMS error as a function of time for the 500-xxx experiments on the left, 1000-xxx in the middle and 2000-xxx on the right, with the same scale on the ordinate axis. The results are complementary to the previous experiments. Indeed we can see that \\(1\\,000\\) is an optimal number for this configuration whatever the TSP, and that our method is stable with respect to large TSP whatever the number of floats. Thus we can conclude that, in our configuration, it seems optimal to launch around \\(1\\,000\\) floats and that the TSP can be chosen quite large.
### 4.3 Comparison with the "Eulerian" method
A classical method in oceanography is to assimilate the velocity observations deduced from the Lagrangian data according to the following finite differences formula:
\\[\\begin{array}{rcl}\\frac{\\xi_{1}(t_{k+1})-\\xi_{1}(t_{k})}{t_{k+1}-t_{k}}& \\approx&u(\\xi_{1}(t_{k}),\\xi_{2}(t_{k}),z_{0},t_{k})\\\\ \\frac{\\xi_{2}(t_{k+1})-\\xi_{2}(t_{k})}{t_{k+1}-t_{k}}&\\approx&v(\\xi_{1}(t_{k}), \\xi_{2}(t_{k}),z_{0},t_{k})\\end{array} \\tag{8}\\]
Then the velocity data are treated as Eulerian data (measured at non-fixed points). We implement this method in the 4D-Var framework.
Table 3: Coupled impact of the number of floats and the TSP: total number of data and final RMS error for the nnn-xxx experiments.

| Experiment | Total number of data | Final Error (%) |
| --- | --- | --- |
| 500-5D | 1 000 | 46.6 |
| 500-3D | 1 500 | 44.0 |
| 500-1D | 5 000 | 44.0 |
| 2000-5D | 4 000 | 37.1 |
| 2000-3D | 6 000 | 36.8 |
| 2000-1D | 20 000 | 35.9 |
The observation operator is much easier to write (and to differentiate and transpose) because it is an interpolation at the points of the true (fixed) float trajectories. We compare the results of this so-called "Eulerian" method with those of our "Lagrangian" one. The experiments have the same characteristics (3 000 floats and varying TSP); their names are LAG-xxx or EUL-xxx, where xxx is the TSP.
Figures 9 and 10 represent the RMS error as a function of time and of the vertical level. We can see that the "Eulerian" approach is slightly better than the "Lagrangian" one when the TSP is small (one day); moreover its computation time is 10% lower. Indeed the TSP is small enough for formula (8) to be a very good approximation: the displacement vector between two successive positions is nearly tangent to the trajectory (i.e. nearly collinear with the velocity vector). However we can see that the error for the "Lagrangian" method is more homogeneous as a function of the vertical level. For larger TSP (3 days or more) the "Lagrangian" method is clearly better than the "Eulerian" one: the approximation formula (8) is not valid any more. We can see that our method is able to extract information from the position data even if the TSP is large.
Figure 8: Coupled impact of number and time sampling period: time-evolution of the u+v RMS errors corresponding to assimilation experiments with 500 to 2 000 floats and positions sampled every 1 to 5 days: experiment with 500 floats on the left, 1 000 in the middle and 2 000 on the right. For reference, the background error is also displayed on each plot.
### 4.4 Assimilation of noisy observations
In order to deal with real data issues, a necessary first step is to study the impact of observation errors in the twin experiments framework. To do that, we simulate perfect data as previously from the "true state", with 1 000 floats drifting at level 4, their positions being sampled once a day. Then we add a random Gaussian noise to the computed positions. In the sequel, the word "error" represents the amplitude of the noise. As we said before, real errors are about 2 to 6 kilometers. However our system is idealized, so we study the impact of errors up to 20 kilometers. The total displacement of one float between the initial and final positions is around 25 kilometers (in steady regions) to 90 km (in the mid-latitude jet region), so that a 10 to 20-km noise is significant for most of the floats.
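A minimal sketch of the perturbation step is given below, assuming the 'amplitude' is interpreted as the standard deviation of the noise and that positions are stored in grid units (one grid cell = 20 km).

```python
import numpy as np

def add_position_noise(xi, sigma_km, km_per_cell=20.0, seed=0):
    """Perturb simulated float positions with Gaussian noise of standard
    deviation sigma_km kilometers, converted to grid units."""
    rng = np.random.default_rng(seed)
    return xi + rng.normal(0.0, sigma_km / km_per_cell, size=np.shape(xi))
```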
Figure 11 represents RMS errors as a function of time (on the left) for experiments with 0 to 20km errors and for the background (without assimilation). On the right we plot RMS errors as a function of the observation error amplitude. The RMS error is very stable with increasing noise amplitude: our method is able to extract information even when the error amplitude is not negligible with respect to the float displacement.
Figure 9: Comparison Eulerian/Lagrangian: time-evolution of the RMS error for u+v corresponding to the assimilation of 3 000 floats with different TSP. On the left, errors for the Lagrangian and Eulerian methods with small TSP. On the right, errors for the Lagrangian and Eulerian methods with large TSP. For reference, the background error is also displayed on each plot.
Figure 10: Comparison Eulerian/Lagrangian: final RMS error for u+v as a function of the vertical level, corresponding to the assimilation of 3 000 floats with different TSP. On the left, errors for the Lagrangian and Eulerian methods with small TSP. On the right, errors for the Lagrangian and Eulerian methods with large TSP. For reference, the background error is also displayed on each plot.
Figure 12 shows the evolution of the cost function value and of its gradient during the assimilation process. The abscissa represents the total number of iterations. Here we perform four outer loops; in each outer loop we perform ten inner minimization loops, in which the cost function is minimized, as we can see on the left. We can see that the gradient norm decreases and the cost function converges even in the presence of noise in the data.
## 5 Conclusion
This paper shows that the problem of assimilating Lagrangian data can be solved by a variational adjoint method in a realistic primitive equations ocean model. We have implemented a Lagrangian method which takes into account the four dimensional (space and time) nature of the observations: RMS errors with assimilation are half of those without, and the main patterns of the fluid flow are well identified at each vertical level, although the floats drift at a single predetermined level.
Figure 11: Impact of observation errors: u+v RMS errors corresponding to assimilation of noisy observations. On the left: error as a function of time for experiments with noise amplitude from 1km to 20km. For reference, results for experiments without noise (0km) and without assimilation (background) are also displayed. On the right: final error as a function of the amplitude of the noise; the error without assimilation is also displayed.
We have tested the sensitivity of our method to the characteristics of the dataset. It is very sensitive to the vertical drift level, and the best results are obtained for intermediate ones, especially level 4 (around 1000 meters depth). It is also very sensitive to the number of floats, but more is not necessarily better: it seems useless to launch more than \\(1\\,000\\) floats in our configuration. It is very robust with respect to the increase of the time-sampling period, up to ten days.
We have compared our Lagrangian method to the Eulerian one, which consists in interpreting Lagrangian data as velocity information. When the time-sampling period of the observations is one day or less, the Eulerian method performs slightly better, but the transfer of information to the lower levels is better achieved by the Lagrangian one. When this period is larger than two or three days, the Lagrangian method performs much better than the Eulerian one.
We have also studied the impact of observation errors: the reconstruction of the velocities is well achieved even with large noise in the data.
The performance of this method has been assessed in the framework
Figure 12: Evolution of the cost function and its gradient’s norm during the assimilation of noisy data. On the left, evolution of the cost function for experiments with noise amplitude from 1km to 20km. For reference, the cost for experiment without noise (0km) is also plotted. On the right, evolution of the gradient norm, shown on a logarithmic scale, for experiments with and without noise in data.
of the twin experiments approach. The next step would be to use real data and to deal with problems such as trajectory modelling and model error.
## Acknowledgments
The author thanks Anthony Weaver and LODYC for the source code of OPAVAR 8.1.
Numerical computations were performed on the NEC SX5 vector computer at IDRIS.
This work is supported by French project Mercator.
## References
* [1] Assenbaum M and Reverdin G 2005 Near real-time analysis of the mesoscale circulation during the POMME experiment _Deep-sea Research Part I_**52** 1345-1373
* [2] ----2005 Private communication
* [3] Courtier P, Thepaut J-N and Hollingsworth A 1994 A strategy for operational implementation of 4D-Var, using an incremental approach _Quart. J. Roy. Meteor. Soc_**120** 1367-87
* [4] De Mey P 1997 Data assimilation at the oceanic mesoscale: a review _Journal of the Meteorological society of Japan_**75** 1B 415-27
* [5] Forget G 2004 4D-Var assimilation of Argo profiles applied to North Atlantic Ocean climate monitoring University of Bretagne Occidentale Brest France
* [6] Ghil M 1989 Meteorological data assimilation for oceanographers Part I: description and theoretical framework _Dyn. Atmos. Oceans_**13** 171-218
* [7] ----and Malanotte-Rizzoli P 1991 Data assimilation in meteorology and oceanography _Adv. Geophys._**23** 141-265
* [8] ----, Ide K, Bennett A, Courtier P, Kimoto M, Nagata M, Saiki M and Sato N 1997 Data assimilation in meteorology and oceanography: Theory and Practice _Meteorological Society of Japan_
* [9] Giering R and Kaminski T 1998 Recipes for Adjoint Code Construction _ACM Trans. On Math. Software_**24** 4 437-74
* [10] Holland W R 1978 The Role of Mesoscale Eddies in the General Circulation of the Ocean - Numerical Experiments Using a Wind-Driven Quasi-Geostrophic Model _Journal of Physical Oceanography_**8** 363-392
* [11] Ide K, Courtier P, Ghil M and Lorenc AC 1997 Unified notation for data assimilation: operational, sequential and variational _J. Meteor. Soc. Japan_**75** 1B 181-9
* [12] ----, Kuznetsov L and Jones C 2002 Lagrangian data assimilation for point-vortex system _J. Turbulence_**3** 053
* [13] Jazwinski AH 1970 Stochastic processes and filtering theory _Applied Mathematical Sciences_**64** Academic Press
* [14] Kamachi M and O'Brien J 1995 Continuous data assimilation of drifting buoy trajectory into an equatorial Pacific Ocean model _Journal of Marine Systems_**6** 159-78
* [15] Kuznetsov L, Ide K and Jones C 2003 A method for assimilation of Lagrangian data _Mon. Wea. Rev._**131**(10) 2247-2260
* [16] Le Dimet F X, Navon I M and Daescu D N 2002 Second-Order Information in Data Assimilation _Journal: Monthly Weather Review_**130** 3 629-648
* [17] ----and Talagrand O 1986 Variational algorithms for analysis and assimilation of meteorological observations: theoretical aspects _Tellus_ A **38** 97
* [18] Lions J L 1968 _Controle optimal de systemes gouvernes par des equations aux derivees partielles_ (Paris: Dunod Gauthier-Villars)
* [19] ----, Temam R and Wang S 1992 On the equations of the large-scale ocean _Nonlinearity_**5** 5 1007-53
* [20] Madec G, Delecluse P, Imbard M and Levy C 1998 OPA8.1 ocean general circulation model reference manual _Notes du pole de Modelisation de l'IPSL_**11**
* [21] Mead J 2005 Assimilation of simulated float data in Lagrangian coordinates _Ocean Modeling_**8** 369-94
* [22] Molcard A, Piterbarg L, Griffa A, Ozgokmen T and Mariano A 2003 Assimilation of drifter observations for the reconstruction of the Eulerian circulation field _J. Geophys. Res. Oceans_**108** C03 3056 1-21
* [23] Ngodock H E 1996 Data assimilation and sensitivity analysis: application to ocean circulation, PhD thesis, University Joseph Fourier Grenoble France
* [24] Ozgokmen T, Molcard A, Chin T, Piterbarg L and Griffa A 2003 Assimilation of drifter observation in primitive equation models of mid-latitude ocean circulation _J. Geophys. Res. Oceans_**108** C07 3238 31 1-31 17
* [25] Penenko V V and Obraztsov N N 1976 A variational initialization method for the fields of the meteorological elements _Sov. Meteorol. Hydrol._**11** 1-11
* [26] Reynaud T 2005 Private communication
* [27] Salman H, Kuznetsov L, Jones C and Ide K 2005 A method for assimilating Lagrangian data into a shallow-water equation ocean model _Mon. Wea. Rev._ to appear
* [28] Sheinbaum J and Anderson D L T 1990 Variational assimilation of XBT data Part I _J. Phys. Oceanogr._**20** 672-88
* [29] Talagrand O 1991 The use of adjoint equations in numerical modeling of the atmospheric circulation _Automatic differentiation of algorithms, Proc. 1st SIAM Workshop, Beckenridge/ CO (USA)_ 169-180
* [30] ----and Courtier P 1987 Variational assimilation of meteorological observations with the adjoint vorticity equation. I: Theory _Quarterly Journal of the Royal Meteorological Society_**113** 478 1311-28
* [31] Temam R and Ziane M 2004 Some mathematical problems in geophysical fluid dynamics _Handbook of Mathematical Fluid Dynamics 3_ (Friedlander S and Serre D Editors Elsevier)
* [32] Thacker W C and Long R B 1988 Fitting dynamics to data _J. Geophys. Res._**93** 1227-40
* [33] Wang Z, Navon I M, Le Dimet F X and Zou X 1992 The second order adjoint analysis: Theory and applications _Meteorology and Atmospheric Physics (Historical Archive)_**50** (1-3) 3-20
* [34] Weaver A T and Courtier P 2001 Correlation modeling on the sphere using a generalized diffusion equation _Q. J. R. Meteorol. Soc._**127** 1815-46
* [35] ----, Vialard J and Anderson D L T 2003 Three- and Four-dimensional variational assimilation with an ocean general circulation model of the tropical Pacific Ocean Part I: formulation, internal diagnostics and consistency checks _Mon. Wea. Rev._**131** 1360-78 | We consider the assimilation of Lagrangian data into a primitive equations circulation model of the ocean at basin scale. The Lagrangian data are positions of floats drifting at fixed depth. We aim at reconstructing the four-dimensional space-time circulation of the ocean. This problem is solved using the four-dimensional variational technique and the adjoint method. In this problem the control vector is chosen as being the initial state of the dynamical system. The observed variables, namely the positions of the floats, are expressed as a function of the control vector via a nonlinear observation operator. This method has been implemented and has the ability to reconstruct the main patterns of the oceanic circulation. Moreover it is very robust with respect to increase of time-sampling period of observations. We have run many twin experiments in order to analyze the sensitivity of our method to the number of floats, the time-sampling period and the vertical drift level. We compare also the performances of the Lagrangian method to that of the classical Eulerian one. Finally we study the impact of errors on observations. | Provide a brief summary of the text. |
Jozsef Garai
Department of Mechanical and Materials Engineering, Florida International University, Miami, FL 33174, USA
The pressure-volume (P-V) relationship of solids is described by isothermal equations of state. The most widely used isothermal EoSs in solid state physics are the Birch-Murnaghan and the Vinet. The third-order Birch-Murnaghan EoS [1-3] is given as:
\\[\\mathrm{p}=\\frac{3\\mathrm{B}_{{}_{0T}}}{2}\\Bigg{[}\\Bigg{(}\\frac{\\mathrm{V}_{{}_{0T }}}{\\mathrm{V}}\\Bigg{)}^{\\frac{7}{3}}-\\Bigg{(}\\frac{\\mathrm{V}_{{}_{0T}}}{ \\mathrm{V}}\\Bigg{)}^{\\frac{5}{3}}\\Bigg{]}\\Bigg{\\{}1+\\frac{3}{4}\\Big{(}\\mathrm{B }_{{}_{0}}^{\\shortdot}-4\\Big{)}\\left[\\Bigg{(}\\frac{\\mathrm{V}_{{}_{0T}}}{ \\mathrm{V}}\\Bigg{)}^{\\frac{2}{3}}-1\\right]\\Bigg{\\}}\\,, \\tag{1}\\]
where \\(\\mathrm{B}_{{}_{0T}}\\), \\(\\mathrm{B}_{{}_{0}}^{\\shortdot}\\) and \\(\\mathrm{V}_{{}_{0T}}\\) are the bulk modulus, the pressure derivative of the bulk modulus and volume respectively at zero pressure and at the temperature of interest. The so-called universal EoS derived by Rose [4] from a general inter-atomic potential and promoted by Vinet [5] is:
\\[\\mathrm{p}=3\\mathrm{B}_{{}_{0T}}\\,\\frac{1-\\mathrm{f}_{{}_{V}}}{\\mathrm{f}_{{}_ {V}}^{2}}\\mathrm{e}^{\\left[\\frac{3}{2}\\Big{(}\\mathrm{B}_{{}_{0}}^{\\shortdot}- 4\\Big{)}\\left(1-\\mathrm{f}_{{}_{V}}\\right)\\right]} \\tag{2}\\]
where
\\[\\mathrm{f}_{V}=\\Bigg{(}\\frac{\\mathrm{V}}{\\mathrm{V}_{0T}}\\Bigg{)}^{\\frac{1}{3}}. \\tag{3}\\]
A simple approach for describing the P-V-T relationship of elastic solids is to raise the temperature of the material first and then use the isothermal EoS [e.g. 6]. In order to use the isothermal EoS the parameters \\(\\mathrm{B}_{0T}\\), \\(\\mathrm{B}_{0}^{\\prime}\\) and \\(\\mathrm{V}_{0T}\\) must be known at the temperature of interest. The bulk modulus and the volume at the temperature of interest can be calculated as:
\\[\\mathrm{B}_{0T}\\big{(}\\mathrm{T}\\big{)}=\\mathrm{B}_{0T}\\big{(}\\mathrm{T}_{0}\\big{)}+\\Bigg{(}\\frac{\\partial\\mathrm{B}_{T}}{\\partial\\mathrm{T}}\\Bigg{)}_{\\mathrm{p}}\\big{(}\\mathrm{T}-\\mathrm{T}_{0}\\big{)}. \\tag{4}\\]
and
\\[\\mathrm{V}_{0}\\big{(}\\mathrm{T}\\big{)}=\\mathrm{V}_{0}\\big{(}\\mathrm{T}_{0}\\big{)}\\,\\mathrm{e}^{\\int_{\\mathrm{T}_{0}}^{\\mathrm{T}}\\alpha(\\mathrm{T})\\,\\mathrm{dT}} \\tag{5}\\]
where \\(\\mathrm{\\alpha}\\big{(}\\mathrm{T}\\big{)}\\) is the volume coefficient of thermal expansion at ambient-pressure [e.g. 6; 7]. The temperature dependence of the volume coefficient of thermal expansion can be described as:
\\[\\mathrm{\\alpha}\\big{(}\\mathrm{T}\\big{)}=\\mathrm{a}+\\mathrm{b}\\mathrm{T}-\\frac {\\mathrm{c}}{\\mathrm{T}^{2}}\\,. \\tag{6}\\]
Substituting Eqs. (4)-(6) into the isothermal EoSs allows one to describe the P-V-T relationship of solids within a limited temperature range. Usually one set of parameters is capable of covering a temperature range of 500-800 K [7]. Beyond this range new sets of parameters are needed for an accurate description. Many of the parameters in the EoS are inter-related, which adds to the complexity of the calculations. The optimum values of each of the interrelated parameters have to be determined by confidence ellipses [e.g. 7; 8]. The thermodynamic description of solids covering wide temperature and pressure ranges is thus complicated and time consuming.
In this study a universal form is proposed for the isothermal EoSs. It is suggested that an absolute reference frame should be used for the universal description. The zero pressure and temperature initial parameters, volume \\(\\big{[}\\mathrm{V}_{o}\\big{]}\\), bulk modulus \\(\\big{[}\\mathrm{B}_{o}\\big{]}\\) and volume coefficient of thermal expansion \\(\\big{[}\\alpha_{o}\\big{]}\\), can be defined as:
\\[\\mathrm{V}_{o}\\equiv\\mathrm{n}\\mathrm{V}_{o}^{\\mathrm{m}} \\tag{7}\\]
and
\\[\\mathrm{B}_{{}_{o}}\\equiv\\lim_{{}_{P=0}}\\mathrm{B}_{{}_{T=0}} \\tag{8}\\]
and
\\[\\alpha_{{}_{o}}\\equiv\\lim_{{}_{T=0}}\\alpha_{{}_{V_{P=0}}} \\tag{9}\\]
respectively, where \\(\\mathrm{V}_{{}_{o}}^{\\mathrm{m}}\\) is the molar volume at zero pressure and temperature and n is the number of moles. The volume at zero pressure and at a given temperature can be calculated as:
\\[\\mathrm{V}_{{}_{0T}}=\\mathrm{V}_{{}_{o}}\\mathrm{e}^{\\mathrm{T}\\over{}^{T-0}}\\,. \\tag{10}\\]
Assuming that the temperature effect on the volume coefficient of thermal expansion is linear [c=0 in Eq. (6)], it can be written as:
\\[\\alpha_{T}\\approx\\alpha_{o}+\\alpha_{1}\\mathrm{T}\\,. \\tag{11}\\]
An approximate sign is used for the temperature dependence of the volume coefficient of thermal expansion in Eq. (11) because the temperature dependence of the coefficient below the Debye temperature is not linear but rather correlates with the heat capacity [9]. The fitting results of the two substances used in this study indicate that the introduced error is minor and that Eq. (11) can be used for P-V-T calculations. Substituting Eq. (11) into Eq. (10) gives the volume at temperature T
\\[\\mathrm{V}_{{}_{0T}}\\equiv\\mathrm{V}_{{}_{o}}\\mathrm{e}^{\\mathrm{T}\\over{}^{T -0}}\\equiv\\mathrm{V}_{{}_{o}}\\mathrm{e}^{(\\alpha_{{}_{0}}+\\alpha_{{}_{01}} \\mathrm{T})\\mathrm{T}}\\,. \\tag{12}\\]
By assuming that the product of the volume coefficient of thermal expansion and the bulk modulus is constant, the temperature dependence of the bulk modulus at 1 bar pressure can be derived from fundamental thermodynamic relationships [10]
\\[\\mathrm{B}_{{}_{0T}}=\\mathrm{B}_{{}_{o}}\\mathrm{e}^{\\mathrm{T}\\over{}^{T-0}} \\tag{13}\\]where \\(\\delta\\) is the Anderson-Gruneisen parameter at ambient conditions. Assuming constant value for the Anderson-Gruneisen parameter, which is reasonable at temperatures higher than the Debye temperature [10] and substituting Eq. (11) into Eq. (13) gives the temperature dependence of the bulk modulus as:
\\[\\mathrm{B}_{0T}=\\mathrm{B}_{o}\\,\\mathrm{e}^{-\\delta\\int_{T=0}^{T}(\\alpha_{o}+\\alpha_{1}\\mathrm{T})\\,\\mathrm{dT}}\\cong\\mathrm{B}_{o}\\,\\mathrm{e}^{-\\delta(\\alpha_{o}+\\alpha_{1}\\mathrm{T})\\mathrm{T}}. \\tag{14}\\]
Assuming that the pressure derivative of the bulk modulus remains constant, and substituting the thermal EoS [Eq. (12)] and the temperature dependence of the bulk modulus [Eq. (14)] into the original isothermal EoSs [Eqs. (1)-(3)], results in a universal \\(\\left(V,T\\right)\\Rightarrow p\\) description of solids. The universal form of the Birch-Murnaghan and Vinet EoSs can then be written as:
\\[\\begin{split}&\\mathrm{p}=\\frac{3B_{{}_{o}}e^{-(\\omega+a_{1}\\mathrm{T}) \\mathrm{d}\\mathrm{T}}}{2}\\Bigg{[}\\Bigg{(}\\frac{V_{{}_{o}}e^{(a_{o}+a_{1}\\mathrm{ T})\\mathrm{T}}}{V}\\Bigg{)}^{\\frac{2}{3}}-\\Bigg{(}\\frac{V_{{}_{o}}e^{(a_{o}+a_{1} \\mathrm{T})\\mathrm{T}}}{V}\\Bigg{)}^{\\frac{5}{3}}\\Bigg{]}\\\\ &\\Bigg{\\{}1+\\frac{3}{4}\\Big{(}B_{{}_{o}}^{{}^{\\prime}}-4\\Big{)} \\Bigg{[}\\Bigg{(}\\frac{V_{{}_{o}}e^{(a_{o}+a_{1}\\mathrm{T})\\mathrm{T}}}{V} \\Bigg{)}^{\\frac{2}{3}}-1\\Bigg{]}\\Bigg{\\}}\\end{split} \\tag{15}\\]
and
\\[\\mathrm{p}=3B_{o}e^{-\\delta(\\alpha_{o}+\\alpha_{1}T)T}\\,\\frac{1-\\Bigg{(}\\frac{V}{V_{o}e^{(\\alpha_{o}+\\alpha_{1}T)T}}\\Bigg{)}^{\\frac{1}{3}}}{\\Bigg{(}\\frac{V}{V_{o}e^{(\\alpha_{o}+\\alpha_{1}T)T}}\\Bigg{)}^{\\frac{2}{3}}}\\;e^{\\Bigg{[}\\frac{3}{2}\\Big{(}B_{0}^{\\prime}-1\\Big{)}\\Bigg{(}1-\\Bigg{(}\\frac{V}{V_{o}e^{(\\alpha_{o}+\\alpha_{1}T)T}}\\Bigg{)}^{\\frac{1}{3}}\\Bigg{)}\\Bigg{]}} \\tag{16}\\]
respectively.
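For illustration, the universal Vinet form (16) can be evaluated directly from the zero-pressure, zero-temperature parameters; the following Python sketch is a straightforward transcription of Eqs. (12), (14) and (16), with parameter values deliberately left unspecified.

```python
import numpy as np

def pressure_universal_vinet(V, T, Vo, Bo, Bp, alpha0, alpha1, delta):
    """Universal Vinet EoS, Eq. (16): pressure from volume and temperature.
    Vo, Bo: volume and bulk modulus at p = 0, T = 0; Bp: pressure derivative
    of the bulk modulus; alpha0, alpha1: thermal-expansion parameters of
    Eq. (11); delta: Anderson-Grueneisen parameter."""
    V0T = Vo * np.exp((alpha0 + alpha1 * T) * T)           # Eq. (12)
    B0T = Bo * np.exp(-delta * (alpha0 + alpha1 * T) * T)  # Eq. (14)
    f = (V / V0T) ** (1.0 / 3.0)
    return 3.0 * B0T * (1.0 - f) / f**2 * np.exp(1.5 * (Bp - 1.0) * (1.0 - f))
```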
Perovskite (MgSiO\\({}_{3}\\)) and periclase (MgO) are the most abundant materials in the Earth's lower mantle [11]. This significant geophysical interest results in the availability of experiments covering wide pressure and temperature ranges. The 269 experimental data of perovskite [12 and ref. therein] cover the pressure and temperature ranges of 0-109 GPa and 293-2199 K. The experiments (360) on periclase were conducted between 0-142 GPa and 0-3000 K [13 and ref. therein].
The fitting accuracy of the EoSs is evaluated by the RMSD and the Akaike Information Criterion (AIC) [14; 15]. The Akaike Information Criterion is devised to assess the right level of complexity. Assuming normally distributed errors, the criterion is calculated as:
\\[\\text{AIC}=2\\text{k}+\\text{n}\\ln\\left(\\frac{\\text{RSS}}{\\text{n}}\\right), \\tag{17}\\]
where n is the number of observations, RSS is the residual sum of squares, and k is the number of fitted parameters. The AIC penalizes both an increase in the number of parameters and a reduction in the size of the data set. The preferred model is the one with the smallest AIC value.
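As an illustration, the two statistics can be computed as follows; here p_obs and p_model denote the measured and modeled pressures.

```python
import numpy as np

def rmsd(p_obs, p_model):
    """Root-mean-square misfit between observed and modeled pressures."""
    return np.sqrt(np.mean((p_obs - p_model)**2))

def aic(p_obs, p_model, k):
    """Akaike Information Criterion of Eq. (17), assuming normal errors."""
    n = len(p_obs)
    rss = np.sum((p_obs - p_model)**2)
    return 2*k + n*np.log(rss/n)
```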
The RMS misfits for the complete data set (360 experiments) of periclase are 0.54 GPa and 0.55 GPa for the Universal Birch-Murnaghan and Vinet EoSs, respectively, while the RMS misfits for perovskite (269 experiments) are 0.78 GPa and 0.79 GPa, respectively.
The original data set was optimized by removing experiments with high misfit. The fitting was repeated for the reduced data set, and the removal of an experiment was accepted as long as the AIC value of the new data set was smaller than that of the previous one. After removing 34 experiments under this AIC criterion, the RMS misfit of periclase improved to 0.33 GPa and 0.32 GPa for the Universal Birch-Murnaghan and Vinet EoSs, respectively. Removing 5 experiments improved the RMS misfit of perovskite to 0.73 GPa and 0.74 GPa for the Universal Birch-Murnaghan and Vinet EoSs, respectively. The excellent fit of the data to the EoSs indicates that all three universal EoSs correctly describe the P-V-T relationship of periclase and perovskite. The fitting parameters and results are given in TAB. I and II. The fitting parameters for the recently proposed universal EoS [12] are also listed. In order to compare the determined parameters to previous studies, the ambient-condition values were calculated [TAB. III]. The values are consistent with previous investigations [16; 17].
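A schematic version of this pruning loop is sketched below; fit_eos is a hypothetical stand-in for the least-squares fit of Eq. (15) or (16) returning model pressures, and aic is the helper defined above.

```python
import numpy as np

def prune_by_aic(V, T, p_obs, fit_eos, k):
    """Drop the highest-misfit experiment and refit, keeping the reduced
    data set only while the AIC keeps decreasing."""
    idx = np.arange(len(p_obs))
    p_model = fit_eos(V[idx], T[idx], p_obs[idx])
    best = aic(p_obs[idx], p_model, k)
    while len(idx) > k + 2:
        worst = np.argmax(np.abs(p_obs[idx] - p_model))
        trial = np.delete(idx, worst)
        p_trial = fit_eos(V[trial], T[trial], p_obs[trial])
        new = aic(p_obs[trial], p_trial, k)
        if new >= best:          # stop once removal no longer improves AIC
            break
        idx, p_model, best = trial, p_trial, new
    return idx                   # indices of the retained experiments
```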
Based on the RMS misfit and AIC values, the fits of the conventional EoSs are slightly better than those of the EoS of Ref. [12]. It should be noted that the conventional EoSs minimize the misfit to the pressure, while the EoS of Ref. [12] minimizes the misfit to the volume. Thus a higher pressure misfit is expected when the fit originally minimizes the misfit to the volume.
Using an absolute reference frame of zero pressure and temperature, and incorporating the temperature dependence of the volume and the bulk modulus, new universal EoSs are derived from the isothermal Birch-Murnaghan and Vinet EoSs. These universal EoSs contain seven parameters: the initial molar volume, the initial bulk modulus, the initial volume coefficient of thermal expansion, the number of moles, the temperature derivative of the volume coefficient of thermal expansion, the pressure derivative of the bulk modulus, and the Anderson-Gruneisen parameter. These parameters remain constant throughout the entire pressure and temperature range and are sufficient to describe the universal P-V-T relationship of elastic solids with high accuracy. The derived universal EoSs were tested against the experiments on perovskite and periclase with positive results. The RMS misfits of the EoSs are slightly higher than the uncertainty of the experiments.
## Acknowledgement
The author thanks Sergio Speziale for reading and commenting on the manuscript.
## References
* (1) F. Birch, Phys. Rev. **71**, 809 (1947).
* (2) F.D. Murnaghan, Am. J. Math. **49**, 235 (1937).
* (3) F.D. Murnaghan, Proc. Nat. Acad. Sci. **30**, 244 (1944).
* (4) J.H. Rose, J.R Smith, F. Guinea and J. Ferrante, Phys. Rev. B **29**, 2963 (1984).
* (5) P. Vinet, J. R. Smith, J Ferrante and J. H. Rose, Phys. Rev. B **35,** 1945 (1987).
* (6) T.S. Duffy and Y. Wang, In: Hemley, R.J. (Ed.), Reviews in Mineralogy, **37,** 425 (1998).
* (7) R.J. Angel, In: Hazen, R.M., Downs, R.T. (Eds.) Reviews in Mineralogy and Geochemistry **41**, 35 (2000).
* (8) E. Mattern, J. Matas, Y. Ricard and J. Bass, Geophys. J. Int. **160**, 973 (2005).
* (9) J. Garai, Computer Coupling of Phase Diagrams and Thermochemistry **30**, 354 (2006).
* (10) J. Garai and A. Laugier, J. Applied Phys. **101**, 023514 (2007).
* (11) L. Stixrude, R.J. Hemley, Y. Fei and H.K. Mao, Science **257**, 1099 (1992).
* (12) J. Garai, J. Applied Phys. **102**, 123506 (2007).
* (13) J. Garai, J. Chen, H. Couvy and G. Telekes, arXiv:0805.0249 (2008).
* (14) H. Akaike, _Proc. Second International Symposium on Information Theory_, Ed. B.N. Petrov and F. Csaki (Akademia Kiado, Budapest, 1973) p. 267.
* (15) H. Akaike, IEEE Transactions on Automatic Control **19,** 716 (1974).
* (16) Z. Wu, R.M. Wentzcovitch, K. Umemoto, B. Li, K. Hirose, and J. Zheng, J. Geophys. Res. (in press), doi:10.1029/2007JB005275 (2008).
* (17) Y. Aizawa, and A. Yoneda, Phys. Earth. Planet. Int. **155**, 87 (2006).
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
EoS & N & B\({}_{\rm o}\) [GPa] & V\({}_{\rm o}\) [cm\({}^{3}\)] & B\({}_{\rm o}^{\prime}\) & \(\alpha_{0}\times 10^{-5}\) & \(\alpha_{1}\times 10^{-9}\) & \(\delta\) & a, b, c, d & RMS [GPa] & AIC \\ \hline
Universal (V,T)\(\Rightarrow\)p & 360 & 161.56 & 11.153 & 4.135 & 2.909 & 6.835 & 3.597 & & 0.54 & \\
Birch-Murnaghan (Eq. 15) & 324 & 167.62 & 11.119 & 4.131 & 3.522 & 4.751 & 4.893 & & 0.329 & \(-708.0\) \\ \hline
Universal (V,T)\(\Rightarrow\)p & 360 & 157.41 & 11.161 & 4.488 & 2.892 & 6.770 & 3.572 & & 0.547 & \(-422.3\) \\
Vinet (Eq. 16) & 324 & 160.07 & 11.145 & 4.474 & 3.106 & 6.165 & 3.645 & & 0.318 & \(-729.6\) \\ \hline
EoS [12] & 360 & 169.14 & 11.131 & & 2.956 & & 1.681 & \(-2.195\), \(-1.858\), 7.220 & 0.678 & \(-266.2\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Fitting parameters and results of periclase for the total (360 experiments) and optimized (324 experiments) data sets.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
EoS & N & B\({}_{\rm o}\) [GPa] & V\({}_{\rm o}\) [cm\({}^{3}\)] & B\({}_{\rm o}^{\prime}\) & \(\alpha_{0}\times 10^{-5}\) & \(\alpha_{1}\times 10^{-9}\) & \(\delta\) & a, b, c, d (f) & RMS [GPa] & AIC \\ \hline
Universal (V,T)\(\Rightarrow\)p & 269 & 260.80 & 24.251 & 4.016 & 2.591 & \(-1.089\) & 3.149 & & 0.784 & \(-118.7\) \\
Birch-Murnaghan (Eq. 15) & 265 & 260.79 & 24.252 & 4.060 & 2.576 & \(-0.802\) & 3.355 & & 0.735 & \(-151.3\) \\ \hline
Universal (V,T)\(\Rightarrow\)p & 269 & 258.37 & 24.253 & 4.214 & 2.619 & \(-1.247\) & 3.119 & & 0.787 & \(-116.7\) \\
Vinet (Eq. 16) & 265 & 258.23 & 24.253 & 4.267 & 2.609 & \(-0.976\) & 3.325 & & 0.738 & \(-149.2\) \\ \hline
EoS [Garai, 2007] & 269 & 267.51 & 24.284 & & 2.079 & & 1.556 & 0, \(-1.098\), 0 & 0.792 & \(-115.6\) \\
 & 269 & 273.26 & 24.304 & & 1.616 & & 1.392 & 0.900, \(-0.585\), 4.301 (10.23) & 0.786 & \(-113.3\) \\
 & 265 & 272.55 & 24.303 & & 1.647 & & 1.424 & 0.884, \(-0.711\), 4.040 (7.86) & 0.738 & \(-145.9\) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Fitting parameters and results of perovskite for the total (269 experiments) and optimized (265 experiments) data sets.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline
 & EoS & V\({}_{0}\) [cm\({}^{3}\)] & B\({}_{\rm o}\) [GPa] & B\({}_{\rm o}^{\prime}\) & \(\alpha_{\rm o}\times 10^{-5}\) \\ \hline
Perovskite (MgSiO\({}_{3}\)) & Universal (V,T)\(\Rightarrow\)p Birch-Murnaghan (Eq. 15) & 24.437 & 254.24 & 4.060 & 2.552 \\
 & Universal (V,T)\(\Rightarrow\)p Vinet (Eq. 16) & 24.440 & 251.73 & 4.267 & 2.580 \\ \hline
Periclase (MgO) & Universal (V,T)\(\Rightarrow\)p Birch-Murnaghan (Eq. 15) & 11.251 & 158.93 & 4.131 & 3.663 \\
 & Universal (V,T)\(\Rightarrow\)p Vinet (Eq. 16) & 11.524 & 154.47 & 4.474 & 3.298 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Parameters at ambient conditions.
arxiv-format/0804_4586v2.md | # Oscillating Universe from inhomogeneous EoS and coupled dark energy
Diego Saez-Gomez\({}^{1}\)
\\({}^{1}\\)Consejo Superior de Investigaciones Cientificas ICE/CSIC-IEEC, Campus UAB,
Facultat de Ciencies, Torre C5-Parell-2a pl, E-08193 Bellaterra (Barcelona) Spain
## I Introduction
The discovery of cosmic acceleration in 1998 by two independent groups [1]-[2] led to the proposal of a large number of dark energy models (for recent reviews, see Refs. [3]-[5]), where this mysterious cosmic fluid was introduced under the prescription that its equation of state (EoS) parameter should be less than \(-1/3\). In the era of precision cosmology, the observational data establish that the EoS parameter for dark energy \(w\) is close to \(-1\) (see Refs. [6]-[7]). The main task is to describe the nature of this component. For that purpose, several candidates have been proposed: among them the cosmological constant model with \(w=-1\), the so-called dark fluids with an inhomogeneous EoS [8]-[12], and the quintessence/phantom scalar field models [13]-[39]. These kinds of models may reproduce late-time acceleration, but it is not easy to construct a model that leaves the radiation/matter dominated epochs untouched.
An additional advantage of these models with scalar fields and ideal dark fluids is that they allow the possibility to unify early- and late-time acceleration under the same mechanism, in such a way that the history of the Universe may be reconstructed completely. On the other hand, it is important to keep in mind that these models represent just an effective description that owns a number of well-known problems, such as the ending of inflation. Nevertheless, they may represent a simple and natural way to resolve the coincidence problem; one of the possibilities may be an oscillating Universe (Refs. [11] and [40]-[43]), where the different phases of the Universe are reproduced by its periodic behavior. The purpose of this paper is to show that an oscillating Universe is obtained from an inhomogeneous EoS for a dark energy fluid, and several examples are given to illustrate this. The possibility of an interaction between a dark energy fluid with homogeneous EoS and matter that also reproduces this kind of periodic Hubble parameter is studied; such a case has been considered before and is allowed by the observations (see [44]). The possible phantom epochs are explored, as well as the possibility that the Universe may reach a Big Rip singularity (for a classification of future singularities, see Ref. [45]).
The organization of this paper is the following: in Sec. II, a dark energy fluid with inhomogeneous EoS is presented, where this EoS depends on the Hubble parameter, its derivatives and on time, and it is shown that this kind of EoS reproduces an oscillating behavior of the Hubble parameter. In Sec. III a matter component is included: in the first part the problem is treated supposing no coupling between the matter and the dark fluid, and a periodic Hubble parameter is obtained under some restriction on the inhomogeneous EoS for the dark fluid; in the second part a coupling is introduced, and it is shown that for a constant homogeneous EoS for dark energy it is possible to reconstruct early- and late-time acceleration in a natural way, due to the interaction between both fluids. Finally, in Sec. IV the mathematically equivalent scalar-tensor description is shown, where the above solutions are reproduced by canonical/phantom scalar fields.
## II Inhomogeneous equation of state for dark energy
Let us consider firstly a Universe filled with a dark energy fluid, neglecting the rest of the possible components (dust matter, radiation, ...), where its EoS depends on the Hubble parameter and its derivatives; such a kind of EoS has been treated in several articles [8]-[12]. We show that for some choices of the EoS an oscillating Universe results [40]-[43], which may include phantom phases. Then, the whole history of the Universe, from inflation to cosmic acceleration, is reproduced in such a way that observational constraints may be satisfied [6]-[7]. We work in a spatially flat FRW Universe, whose metric is given by:
\\[ds^{2}=-dt^{2}+a(t)^{2}\\sum_{i=1}^{3}dx_{i}^{2}. \\tag{1}\\]
The Friedmann equations are obtained:
\\[H^{2}=-\\frac{\\kappa^{2}}{3}\\rho,\\qquad\\dot{H}=-\\frac{\\kappa^{2}}{2}\\left(\\rho+p \\right). \\tag{2}\\]
By combining the FRW equations, the energy conservation equation for the dark energy density is obtained:
\\[\\dot{\\rho}+3H(\\rho+p)=0. \\tag{3}\\]
In this section, the EoS considered has the general form:
\\[p=w\\rho+g(H,\\ddot{H},\\ddot{H},..;t)\\, \\tag{4}\\]
where \\(w\\) is a constant and \\(g(H,\\ddot{H},\\ddot{H},..;t)\\) is an arbitrary function of the Hubble parameter \\(H\\), its derivatives and the time \\(t\\), (such kind of EoS has been treated in Ref. [8]). Using the FRW equations (2) and (4), the following differential equation is obtained:
\\[\\dot{H}+\\frac{3}{2}(1+w)H^{2}+\\frac{\\kappa^{2}}{2}g(H,\\dot{H},\\ddot{H},..;t)=0. \\tag{5}\\]
Hence, for a given function \(g\), the Hubble parameter is determined by solving equation (5). It is possible to reproduce an oscillating Universe with a specific EoS (4). To illustrate this construction, let us consider the following function \(g\) as an example:
\\[g(H,\\dot{H},\\ddot{H})=-\\frac{2}{\\kappa^{2}}\\left(\\ddot{H}+\\dot{H}+\\omega_{0}^ {2}H+\\frac{3}{2}(1+w))H^{2}-H_{0}\\right)\\, \\tag{6}\\]
where \\(H_{0}\\) and \\(\\omega_{0}^{2}\\) are constants. By substituting (6) in (5) the Hubble parameter equation acdquieres the form:
\\[\\ddot{H}+\\omega_{0}H=H_{0}\\, \\tag{7}\\]
which is the classical equation of a harmonic oscillator with a constant driving term. The solution [37] is found to be:
\\[H(t)=\\frac{H_{0}}{\\omega_{0}^{2}}+H_{1}\\sin(\\omega_{0}t+\\delta_{0})\\, \\tag{8}\\]
where \\(H_{1}\\) and \\(\\delta_{0}\\) are integration constants. To study the system, we calculate the first derivative of the Hubble parameter, which is given by \\(\\dot{H}=H_{1}\\cos(\\omega_{0}t+\\delta_{0})\\), so the Universe governed by the dark energy fluid (6) oscillates between phantom and non-phantom phases with a frecuency given by the constant \\(\\omega_{0}\\), constructing inflation epoch and late-time acceleration under the same mechanism, and Big Rip singularity avoided
As another example, we consider the following EoS (4) for the dark energy fluid:
\\[p=w\\rho+\\frac{2}{\\kappa^{2}}Hf^{\\prime}(t). \\tag{9}\\]
In this case \\(g(H;t)=\\frac{2}{\\kappa^{2}}Hf^{\\prime}(t)\\), where \\(f(t)\\) is an arbitrary function of the time \\(t\\), and the prime denotes a derivative on \\(t\\). The equation (5) takes the form:
\\[\\dot{H}+Hf^{\\prime}(t)=-\\frac{3}{2}(1+w)H^{2}. \\tag{10}\\]
This is the well-known Bernoulli differential equation. For the function \(f(t)=-\ln\left(H_{1}+H_{0}\sin\omega_{0}t\right)\), where \(H_{1}>H_{0}\) are arbitrary constants, the following solution of (10) is found:
\\[H(t)=\\frac{H_{1}+H_{0}\\sin\\omega_{0}t}{\\frac{3}{2}(1+w)t+k}\\, \\tag{11}\\]here, the \\(k\\) is an integration constant. As it is seen, for some values of the free constant parameters, the Hubble parameter tends to infinity for a given finite value of \\(t\\). The first derivative of the Hubble parameter is given by:
\\[\\dot{H}=\\frac{\\frac{H_{0}}{\\omega_{0}}(\\frac{3}{2}(1+w)t+k)\\cos\\omega_{0}t-(H_{1 }+H_{0}\\sin\\omega_{0}t)\\frac{3}{2}(1+w)}{(\\frac{3}{2}(1+w)t+k)^{2}}. \\tag{12}\\]
As shown in Fig. 1, the Universe has a periodic behavior: it passes through phantom and non-phantom epochs, with the corresponding transitions. A Big Rip singularity may take place depending on the value of \(w\): it is avoided for \(w\geq-1\), while for \(w<-1\) the Universe reaches the singularity at the Rip time given by \(t_{s}=\frac{2k}{3|1+w|}\).
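The solution (11) and the Rip time are easily evaluated numerically (illustrative parameter values, with \(w=-1.1\) as in Fig. 1):

```python
import numpy as np

H1, H0, omega0, w, k = 2.0, 1.0, 2.0, -1.1, 1.0   # H1 > H0, w < -1
ts = 2.0*k/(3.0*abs(1.0 + w))                     # Rip time t_s = 2k/(3|1+w|)

t = np.linspace(0.0, 0.99*ts, 500)                # stop just before the singularity
H = (H1 + H0*np.sin(omega0*t))/(1.5*(1.0 + w)*t + k)   # Eq. (11)
print("t_s =", ts, "; H just before the Rip:", H[-1])
```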
## III Dark energy ideal fluid and dust matter
### No coupling between matter and dark energy
Let us now explore a more realistic model by introducing a matter component with EoS given by \(p_{m}=w_{m}\rho_{m}\), while keeping an inhomogeneous EoS for the dark energy component [8]-[10]. It is shown below that an oscillating Universe may be obtained by constructing a specific EoS. In this case, the FRW equations (2) take the form:
\\[H^{2}=-\\frac{\\kappa^{2}}{3}(\\rho+\\rho_{m}),\\qquad\\dot{H}=-\\frac{\\kappa^{2}}{2} \\left(\\rho+p+\\rho_{m}+p_{m}\\right). \\tag{13}\\]
In this section, we consider a matter fluid that does not interact with the dark energy fluid; then the energy conservation equations are satisfied for each fluid separately:
\\[\\dot{\\rho_{m}}+3H(\\rho_{m}+p_{m})=0,\\qquad\\dot{\\rho}+3H(\\rho+p)=0. \\tag{14}\\]
It is useful to construct a specific solution for the Hubble parameter by defining an effective EoS with an effective parameter \(w_{eff}\):
\\[w_{eff}=\\frac{p_{eff}}{\\rho_{eff}},\\quad\\rho_{eff}=\\rho+\\rho_{m},\\quad p_{eff}= p+p_{m}\\, \\tag{15}\\]
and the energy conservation equation \(\dot{\rho}_{eff}+3H(\rho_{eff}+p_{eff})=0\) is satisfied. We consider a dark energy fluid described by the following EoS:
\\[p=-\\rho+\\frac{2}{\\kappa^{2}}\\frac{2(1+w(t))}{3\\int(1+w(t))dt}-(1+w_{m})\\rho_{ m0}\\mathrm{e}^{-3(1+w_{m})\\int dt\\frac{2}{3\\int(1+w(t))}}\\, \\tag{16}\\]
here \\(\\rho_{m0}\\) is a constant, and \\(w(t)\\) is an arbitrary function of time \\(t\\). Then the following solution is found:
\\[H(t)=\\frac{2}{3\\int dt(1+w(t))}. \\tag{17}\\]
Figure 1: The Hubble parameter \(H\) and \(\dot{H}\) for the value \(w=-1.1\). Phantom phases occur periodically, and a Big Rip singularity takes place at the Rip time \(t_{s}\).
The effective parameter (15) then takes the form \(w_{eff}=w(t)\). Thus, a solution for the Hubble parameter may be constructed from the EoS (16) by specifying a function \(w(t)\).
Let us consider an example[40] with the following function for \\(w(t)\\):
\\[w=-1+w_{0}\\cos\\omega t. \\tag{18}\\]
In this case, the EoS for the dark energy fluid, given by (16), takes the form:
\\[p=-\\rho+\\frac{4}{3\\kappa^{2}}\\frac{\\omega w_{0}\\cos\\omega t}{w_{1}+w_{0}\\sin \\omega t}-(1+w_{m})\\rho_{m0}\\mathrm{e}^{-3(1+w_{m})\\frac{2w}{3(w_{1}+w_{0}\\sin \\omega t)}}\\, \\tag{19}\\]
where \\(w_{1}\\) is an integration constant. Then, by (17), the Hubble parameter yields:
\\[H(t)=\\frac{2\\omega}{3(w_{1}+w_{0}\\sin\\omega t)}. \\tag{20}\\]
The Universe passes through phantom and non-phantom phases, since the first derivative of the Hubble parameter has the form:
\\[\\dot{H}=-\\frac{2\\omega^{2}w_{0}\\cos\\omega t}{3(w_{1}+w_{0}\\sin\\omega t)^{2}}. \\tag{21}\\]
In this way, a Big Rip singularity will take place provided that \(|w_{1}|<w_{0}\), and it is avoided when \(|w_{1}|>w_{0}\). As is shown, this model reproduces unified inflation and cosmic acceleration in a natural way, the Universe presenting a periodic behavior. In order to study the accelerated and decelerated phases, the acceleration parameter is examined, which is given by:
\\[\\frac{\\ddot{a}}{a}=\\frac{2\\omega^{2}}{3(w_{1}+w_{0}\\sin\\omega t)^{2}}\\left( \\frac{2}{3}-w_{0}\\cos\\omega t\\right). \\tag{22}\\]
Hence, if \\(w_{0}>2/3\\) the differents phases that Universe passes are reproduced by the EoS (19), presenting a periodic evolution that may unify all the epochs by the same description.
As a second example, we may consider a classical periodic function, the step function:
\\[w(t)=-1+\\left\\{\\begin{array}{ll}w_{0}&0<t<T/2\\\\ w_{1}&T/2<t<T\\end{array}\\right.\\, \\tag{23}\\]
and \\(w(t+T)=w(t)\\). It is useful to use a Fourier expansion such that the function (23) become continuos. Aproximating to third order, \\(w(t)\\) is given by:
\\[w(t)=-1+\\frac{(w_{0}+w_{1})}{2}+\\frac{2(w_{0}-w_{1})}{\\pi}\\left(\\sin\\omega t+ \\frac{\\sin 3\\omega t}{3}+\\frac{\\sin 5\\omega t}{5}\\right). \\tag{24}\\]
Hence, the EoS for the dark energy ideal fluid is given by (16), and the solution (17) takes the following form:
\\[H(t)=\\frac{2}{3}\\left[w_{2}+\\frac{(w_{0}+w_{1})}{2}t\\right.\\]
\\[\\left.-\\frac{2(w_{0}-w_{1})}{\\pi\\omega}\\left(\\cos\\omega t+\\frac{\\cos 3\\omega t} {9}+\\frac{\\cos 5\\omega t}{25}\\right)\\right]^{-1}. \\tag{25}\\]
The model is studied through the first derivative of the Hubble parameter, in order to identify the possible phantom epochs:
\\[\\dot{H}=-\\frac{3}{2}H^{2}\\left[\\frac{(w_{0}+w_{1})}{2}\\right.\\]
\\[\\left.-\\frac{2}{\\pi}(w_{0}-w_{1})\\left(\\sin\\omega t+\\frac{\\sin 3\\omega t}{3}+ \\frac{\\sin 5\\omega t}{5}\\right)\\right]. \\tag{26}\\]hen, depending on the values from \\(w_{0}\\) and \\(w_{1}\\) the Universe passes through phantom phases. To explore the different epochs of acceleration and deceleration that the Universe passes on, the acceleration parameter is calculated:
\\[\\frac{\\ddot{a}}{a}=H^{2}+\\dot{H}=\\]
\\[H^{2}\\left[1-\\frac{3}{2}\\left(\\frac{(w_{0}+w_{1})}{2}-\\frac{2}{\\pi}(w_{0}-w_{1} )\\left(\\sin\\omega t+\\frac{\\sin 3\\omega t}{3}+\\frac{\\sin 5\\omega t}{5}\\right) \\right)\\right]. \\tag{27}\\]
Then, in order to obtain acceleration and deceleration epochs, the constant parameters \(w_{0}\) and \(w_{1}\) may be chosen such that \(w_{0}<2/3\) and \(w_{1}>2/3\), as is seen from (23). For this selection, phantom epochs take place in the case \(w_{0}<0\). In any case, the oscillating behavior is damped by the term inverse in the time \(t\), as shown in Fig. 2, where the acceleration parameter is plotted for particular values of the free parameters. This inverse-time term reduces the acceleration and the Hubble parameter, so that the model tends to a static Universe.
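The truncated series (24) and the resulting expressions (25) and (27) can be evaluated as follows; \(w_{0}\) and \(w_{1}\) take the values of Fig. 2, while \(w_{2}\) and \(\omega\) are illustrative assumptions:

```python
import numpy as np

w0, w1, w2, omega = -0.2, 1.0, 1.0, 1.0
t = np.linspace(0.01, 40.0, 4000)

S = np.sin(omega*t) + np.sin(3*omega*t)/3.0 + np.sin(5*omega*t)/5.0
C = np.cos(omega*t) + np.cos(3*omega*t)/9.0 + np.cos(5*omega*t)/25.0

one_plus_w = 0.5*(w0 + w1) + 2.0*(w0 - w1)*S/np.pi                    # Eq. (24)
H = (2.0/3.0)/(w2 + 0.5*(w0 + w1)*t - 2.0*(w0 - w1)*C/(np.pi*omega))  # Eq. (25)
acc = H**2*(1.0 - 1.5*one_plus_w)                                     # Eq. (27)

print("accelerating fraction:", (acc > 0).mean(),
      "; phantom fraction:", (one_plus_w < 0).mean())
```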
We now consider a third example, resembling a classical damped oscillator, where the function \(w(t)\) is given by:
\\[w(t)=-1+\\mathrm{e}^{-\\alpha t}w_{0}\\cos\\omega t\\, \\tag{28}\\]
here \\(\\alpha\\) and \\(w_{0}\\) are two positive constants. Then, the EoS for the dark energy ideal fluid is constructed from (16). The solution for the Hubble parameter (17) is integrated, and takes the form:
\\[H(t)=\\frac{2}{3}\\frac{\\omega^{2}+\\alpha^{2}}{w_{1}+w_{0}\\mathrm{e}^{-\\alpha t }(\\omega\\sin\\omega t-\\alpha\\cos\\omega t)}\\, \\tag{29}\\]
where \\(w_{1}\\) is an integration constant. The Hubble parameter oscillates damped by an exponential term, and for big times, it tends to a constant \\(H(t\\longrightarrow\\infty)=\\frac{2}{3}\\frac{\\omega^{2}+\\alpha^{2}}{w_{1}}\\), recovering the cosmological constant model. The Universe passes through different phases as it may be shown by the accelerated parameter:
\\[\\frac{\\ddot{a}}{a}=H^{2}\\left(1-\\frac{3}{2}\\mathrm{e}^{-\\alpha t}w_{0}\\cos \\omega t\\right). \\tag{30}\\]
It is possible to restrict \(w_{0}>\frac{2}{3}\) in order to obtain deceleration epochs when the matter component dominates. On the other hand, the Universe also passes through phantom epochs, since the derivative of the Hubble parameter gives:
\\[\\dot{H}=-\\frac{3}{2}H^{2}\\mathrm{e}^{-\\alpha t}w_{0}\\cos\\omega t. \\tag{31}\\]
Hence, the example (28) describes an oscillating Universe with a frequency given by \(\omega\), damped by a decaying exponential term that depends on the free parameter \(\alpha\); these parameters may be adjusted so that the duration of each phase agrees with the constraints from the observational data.
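A numerical check of the damped solution (29) and its late-time limit is sketched below (illustrative values; \(w_{1}\) is chosen large enough that the denominator never vanishes):

```python
import numpy as np

alpha, w0, w1, omega = 0.1, 1.0, 3.0, 2.0
t = np.linspace(0.0, 80.0, 8000)

D = w1 + w0*np.exp(-alpha*t)*(omega*np.sin(omega*t) - alpha*np.cos(omega*t))
H = (2.0/3.0)*(omega**2 + alpha**2)/D               # Eq. (29)
H_inf = (2.0/3.0)*(omega**2 + alpha**2)/w1          # late-time (de Sitter) value

print("H(t_max)/H_inf =", H[-1]/H_inf)   # -> 1 as the oscillations damp out
```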
Figure 2: The Hubble parameter, its derivative and the acceleration parameter are represented for the “step” model for values \\(w_{0}=-0.2\\) and \\(w_{1}=1\\).
### Dark energy and coupled matter
In general, one may consider a Universe filled with a dark energy ideal fluid whose EoS is given by \(p=w\rho\), where \(w\) is a constant, and matter described by \(p_{m}=w_{m}\rho_{m}\), both fluids interacting with each other. In order to preserve the total energy conservation, the equations for the energy densities are written as follows:
\\[\\dot{\\rho_{m}}+3H(\\rho_{m}+p_{m})=Q,\\qquad\\dot{\\rho}+3H(\\rho+p_{)}=-Q\\, \\tag{32}\\]
here \\(Q\\) is an arbitrary function. In this way, the total energy conservation is satisfied \\(\\dot{\\rho}_{eff}+3H(\\rho_{eff}+p_{eff})=0\\), where \\(\\rho_{eff}=\\rho+\\rho_{m}\\) and \\(p_{eff}=p+p_{m}\\), and the FRW equations (13) doesnt change. To resolve this set of equations for a determined function \\(Q\\), the second FRW equation (13) is combined with the conservation equations (32), this yields:
\\[\\dot{H}=-\\frac{\\kappa^{2}}{2}\\left[(1+w_{m})\\frac{\\int Q\\exp(\\int dt3H(1+w_{m} ))}{\\exp(\\int dt3H(1+w_{m}))}\\right.\\]
\\[\\left.+(1+w)\\frac{-\\int Q\\exp(\\int dt3H(1+w))}{\\exp(\\int dt3H(1+w))}\\right]. \\tag{33}\\]
In general, this is difficult to solve for a given function \(Q\). A particularly simple case is the cosmological constant, for which the dark energy EoS parameter is \(w=-1\); the equations then become very simple, and (32) yields \(\dot{\rho}=-Q\), which is solved to give the dark energy density:
\\[\\rho(t)=\\rho_{0}-\\int dtQ(t)\\, \\tag{34}\\]
where \\(\\rho_{0}\\) is an integration constant. Then, the Hubble parameter is obtained by introducing (34) in the FRW equations, which yields:
\\[\\dot{H}+\\frac{3}{2}(1+w_{m})H^{2}=\\frac{\\kappa^{2}}{2}(1+w_{m})\\left(\\rho_{0} -\\int dtQ\\right). \\tag{35}\\]
Hence, the Hubble parameter depends essentially on the form of the coupling function \(Q\). This means that a Universe model may be constructed from the coupling between matter and the dark energy fluid, which is specified by the arbitrary function \(Q\). It is shown below that some of the models given in the previous section in terms of an inhomogeneous EoS dark energy fluid are reproduced by a dark energy fluid with constant EoS (\(w=-1\)) coupled to dust matter. By differentiating equation (35), the function \(Q\) may be written in terms of the Hubble parameter and its derivatives:
\\[Q=-\\frac{2}{\\kappa^{2}}\\frac{1}{1+w_{m}}\\left(\\ddot{H}+3(1+w_{m})H\\dot{H} \\right). \\tag{36}\\]
As an example, we use a solution of the form (8):
\\[H(t)=H_{0}+H_{1}\\sin(\\omega_{0}t+\\delta_{0}). \\tag{37}\\]
Then, by equation (36), the function \(Q\) is given by:

\[Q(t)=\frac{2}{\kappa^{2}(1+w_{m})}\left[H_{1}\omega_{0}^{2}\sin(\omega_{0}t+\delta_{0})-3(1+w_{m})H_{1}\omega_{0}\cos(\omega_{0}t+\delta_{0})\left(H_{0}+H_{1}\sin(\omega_{0}t+\delta_{0})\right)\right]. \tag{38}\]
Thus, the oscillating model (37) is reproduced by a coupling between matter and dark energy which also oscillates. More complicated models may be constructed from more complex functions \(Q\). As an example, let us consider the solution (20):
\\[H(t)=\\frac{2\\omega}{3(w_{1}+w_{0}\\sin\\omega t)}. \\tag{39}\\]
The coupling function (36) takes the form:
\\[Q(t)=-\\frac{4}{3\\kappa^{2}}\\frac{\\omega^{3}w_{0}}{(1+w_{m})(w_{1}+w_{0}\\sin \\omega t)^{3}}\\]
\\[\\left[\\sin\\omega t(w_{1}+w_{0}\\sin\\omega t)^{2}+2w_{0}\\cos^{2}\\omega t-2(1+w_ {m})\\cos\\omega t\\right]. \\tag{40}\\]
This coupling function reproduces an oscillating behavior that unifies the different epochs of the Universe. Hence, it has been shown that for a constant dark energy EoS with \(w=-1\), inflation and late-time acceleration appear in a simple and natural way.
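The algebra leading from (36) and (37) to (38) is mechanical and can be verified symbolically; the following SymPy sketch (an illustration only) reproduces the coupling function:

```python
import sympy as sp

t, kappa, wm, H0, H1, omega0, delta0 = sp.symbols(
    't kappa w_m H_0 H_1 omega_0 delta_0')

# Hubble parameter of Eq. (37)
H = H0 + H1*sp.sin(omega0*t + delta0)

# Coupling function from Eq. (36): Q = -2/(kappa^2 (1+w_m)) [H'' + 3(1+w_m) H H']
Q = -2/(kappa**2*(1 + wm))*(sp.diff(H, t, 2) + 3*(1 + wm)*H*sp.diff(H, t))
print(sp.simplify(Q))   # reproduces the right-hand side of Eq. (38)
```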
## IV Scalar-tensor description
Let us now consider the solutions shown in the preceding sections through the scalar-tensor description; such an equivalence has been constructed in Ref. [46]. We assume, as before, a flat FRW metric, a Universe filled with an ideal matter fluid with EoS given by \(p_{m}=w_{m}\rho_{m}\), and no coupling between matter and the scalar field. Then, the following action is considered:
\\[S=\\int dx^{4}\\sqrt{-g}\\left[\\frac{1}{2\\kappa^{2}}R-\\frac{1}{2} \\omega(\\phi)\\partial_{\\mu}\\phi\\partial^{\\mu}\\phi-V(\\phi)+L_{m}\\right]\\, \\tag{41}\\]
here \\(\\omega(\\phi)\\) is the kinetic term and \\(V(\\phi)\\) represents the scalar potential. Then, the corresponding FRW equations are written as:
\\[H^{2}=\\frac{\\kappa^{2}}{3}\\left(\\rho_{m}+\\rho_{\\phi}\\right)\\,\\qquad\\dot{H}=- \\frac{\\kappa^{2}}{2}\\left(\\rho_{m}+p_{m}+\\rho_{\\phi}+p_{\\phi}\\right)\\, \\tag{42}\\]
where \\(\\rho_{\\phi}\\) and \\(p_{\\phi}\\) given by:
\\[\\rho_{\\phi}=\\frac{1}{2}\\omega(\\phi)\\,\\dot{\\phi}^{2}+V(\\phi)\\,\\qquad p_{\\phi}= \\frac{1}{2}\\omega(\\phi)\\,\\dot{\\phi}^{2}-V(\\phi). \\tag{43}\\]
By assuming:
\\[\\omega(\\phi)=-\\frac{2}{\\kappa^{2}}f^{\\prime}(\\phi)-(w_{m}+1)F_{0 }\\mathrm{e}^{-3(1+w_{m})F(\\phi)}\\,\\] \\[V(\\phi)=\\frac{1}{\\kappa^{2}}\\left[3f(\\phi)^{2}+f^{\\prime}(\\phi) \\right]+\\frac{w_{m}-1}{2}F_{0}\\,\\mathrm{e}^{-3(1+w_{m})F(\\phi)}. \\tag{44}\\]
The following solution is found[37]\\({}^{-}\\)[39]:
\\[\\phi=t\\,\\quad H(t)=f(t)\\, \\tag{45}\\]
which yields:
\\[a(t)=a_{0}\\mathrm{e}^{F(t)},\\qquad a_{0}=\\left(\\frac{\\rho_{m0}}{F _{0}}\\right)^{\\frac{1}{3(1+w_{m})}}. \\tag{46}\\]
Then, we may consider the solution (29); in such a case the function \(f(\phi)\) takes the form:
\\[f(\\phi)=\\frac{2}{3}\\frac{\\omega^{2}+\\alpha^{2}}{w_{1}+w_{0}\\mathrm{e}^{- \\alpha\\phi}(\\omega\\sin\\omega\\phi-\\alpha\\cos\\omega\\phi)}\\, \\tag{47}\\]
Then, by (44), the kinetic term and the scalar potential are given by:
\\[\\omega(\\phi)=\\frac{3}{\\kappa^{2}}f^{2}(\\phi)w_{0}\\mathrm{e}^{- \\alpha\\phi}\\cos\\omega\\phi-(1+w_{m})F_{0}\\mathrm{e}^{-3(1+w_{m})F(\\phi)},\\] \\[V(\\phi)=\\frac{3f^{2}(\\phi)}{\\kappa^{2}}\\left(1-\\frac{1}{2}w_{0} \\mathrm{e}^{-\\alpha\\phi}\\cos\\omega\\phi\\right)+\\frac{w_{m}-1}{2}F_{0}\\mathrm{e }^{-3(1+w_{m})F(\\phi)}, \\tag{48}\\]
where \\(F(\\phi)=\\int d\\phi f(\\phi)\\) and \\(F_{0}\\) is an integration constant. Then, the periodic solution (29) is reproduced in the mathematical equivalent formulation in scalar-tensor theories by the action (41) and explicit kinetic term and scalar potential, in this case, is given by (48).
## V Discussions
Throughout this paper, a Universe model has been presented that reproduces in a natural way the early- and late-time acceleration through a periodic behavior of the Hubble parameter. The late-time transitions are described by this model: the transition from deceleration to acceleration, and the possible transition from a non-phantom to a phantom epoch. The observational data do not yet restrict the nature and details of the EoS for dark energy, so the possibility that the Universe behaves periodically is allowed. For that purpose, several examples have been studied in the present paper, some of them driven by an inhomogeneous EoS for dark energy, and others by a coupling between dark energy and matter, which may also provide another possible constraint to look for. On the other hand, as was commented in the introduction, one has to keep in mind that these kinds of models, scalar theories or dark fluids, should be checked to probe whether the unification presented is realistic, especially in order to reproduce the details of inflation and to realize the perturbation structure. However, that task is beyond the scope of the present paper, whose main objective is to show the possibility of the reconstruction of an oscillating Universe from the descriptions detailed above.
###### Acknowledgements.
I thank Emilio Elizalde and Sergei Odintsov for suggesting this problem, and for providing the ideas and fundamental information to carry out this task. This work was supported by MEC (Spain), project FIS2006-02842, and in part by project PIE2007-50/023.
## References
* (1) A. G. Riess, Astron. J. **116**, 1009 (1998)
* (2) S. Perlmutter Astrophys. J. **517**, 565 (1999)
* (3) Edmund Copeland, M. Sami, Shinji Tsujikawa, Int. J. Mod. Phys.D **15**, 1753 (2006), [arxiv:hep-th/0603057]
* (4) T. Padmanabhan, arXiv:astro-ph/0603114; arXiv:astro-ph/0602117
* (5) S. Nojiri and S. D. Odintsov, arXiv:hep-th/0601213
* (6) L. Perivolaropoulos, arxiv:astro-ph/0601014
* (7) H. Jassal, J. Bagla and T. Padmanabhan, arxiv:astro-ph/0506748
* (8) S. Nojiri and S.D.Odintsov, Phys. Rev. D **72**, 103522 (2005) [arxiv:hep-th/0505215]
* (9) S. Capozziello, V. Cardone, E. Elizalde, S. Nojiri and S.D. Odintsov, Phys. Rev. D **73**, 043512 (2006), [astro-ph/0508350]
* (10) S. Nojiri and S. D. Odintsov, Phys. Lett. B **639**, 144 (2006), [arxiv:hep-th/0606025]
* (11) I. Brevik, O.G. Gorbunova and A. V. Timoshkin, Eur.Phys.J.C51:179-183,(2007) [arxiv:gr-qc/0702089]
* (12) I.Brevik, E. Elizalde, O. Gorbunova. and A. V. Timoshkin Eur. Phys. J. C **52** 223 (2007) [arxiv:gr-qc/0706.2072]
* (13) R. R. Caldwell, M. Kamionkowski, and N. N. Weinberg, Phys. Rev. Lett. **91**, 071301 (2003) [arXiv:astro-ph/0302506]
* (14) B. McInnes, JHEP **0208**, 029 (2002) [arXiv:hep-th/0112066 ; hep-th/0502209]
* (15) S. Nojiri and S. D. Odintsov, Phys. Lett. B **562**, 147 (2003) [arXiv:hep-th/0303117]
* (16) S. Nojiri and S. D. Odintsov, Phys. Lett. B **565**, 1 (2003) [arXiv:hep-th/0304131]
* (17) S. Nojiri and S. D. Odintsov, Phys. Lett. B **595**, 1 (2004) [arXiv:hep-th/0405078]
* (18) P. Gonzalez-Diaz, Phys. Lett. **B586**, 1 (2004) [arXiv:astro-ph/0312579]; arXiv:hep-th/0408225
* (19) L. P. Chimento and R. Lazkoz, Phys. Rev. Lett. **91**, 211301 (2003) [arXiv:gr-qc/0307111]
* (20) L. P. Chimento and R. Lazkoz, Mod. Phys. Lett. **A19**, 2479 (2004) [arXiv:gr-qc/0405020]
* (21) E. Babichev, V. Dokuchaev, and Yu. Eroshenko, Class. Quant. Grav. **22**, 143 (2005) [arXiv:astro-ph/0407190]
* (22) X. Zhang, H. Li, Y. Piao, and X. Zhang, arXiv:astro-ph/0501652
* (23) E. Elizalde, S. Nojiri, S. D. Odintsov, and P. Wang, Phys. Rev. D **71** (2005) 103504 [arXiv:hep-th/0502082];
* (24) E. Elizalde, S. Nojiri and S. D. Odintsov, Phys. REv. D **70** 043539 (2004) [arxiv:hep-th/0405034]
* (25) M. Dabrowski and T. Stachowiak, arXiv:hep-th/0411199;
* (26) F. Lobo, arXiv:gr-qc/0502099;
* (27) R.-G. Cai, H.-S. Zhang, and A. Wang, arXiv:hep-th/0505186;
* (28) I. Ya. Arefeva, A. S. Koshelev, and S. Yu. Vernov, arXiv:astro-ph/0412619; arXiv:astro-ph/0507067;
* (29) W. Godlowski and M. Szydlowski, Phys. Lett. **B623** (2005) 10;
* (30) J. Sola and H. Stefancic, arXiv:astro-ph/0505133;
* (31) B. Guberina, R. Horvat, and H. Nicolic, arXiv:astro-ph/0507666;
* (32) M. Dabrowski, C. Kiefer, and B. Sandhofer, Phys. Rev. D **74** (2006) 044022;
* (33) E. Barboza and N. Lemos, arXiv:gr-qc/0606084;
* (34) M. Szydlowski, O. Hrycyna, and A. Krawiec, arXiv:hep-th/0608219;
* (35) W. Chakraborty and U. Debnath, arXiv:0802.3751[gr-qc].
* (36) A. Vikman [arXiv:astro-ph/0407107]
* (37) S. Nojiri and S. D. Odintsov, Gen. Rel. Grav. **38**, 1285 (2006) [arXiv:hep-th/0506212];
* (38) S. Capozziello, S. Nojiri, and S. D. Odintsov, Phys. Lett. B **632**, 597 (2006) [arXiv:hep-th/0507182];
* (39) E. Elizalde, S. Nojiri, S. D. Odintsov, D. Saez-Gomez and V. Faraoni, [arxiv:hep-th/0803.1311]
* (40) S. Nojiri and S. D. Odintsov, Phys. Lett. B **637**:139 (2006) [arxiv:hep-th/0603062]
* (41) B. Feng, M. Li, Y.-S. Piao and X. Zhang, Phys. Lett. B **634**, 101 (2006) [arXiv:astro-ph/0407432];
* (42) G. Yang and A. Wang, Gen. Rel. Grav.**37**, 2201 (2005) [arxiv:astro-ph/0510006]
* (43) I. Aref'eva, P. H. Frampton and S. Matsuzaki [arxiv:hep-th/0802.1294]
* (44) Z. K. Guo, N. Ohta and S. Tsujikawa, Phys. Rev. D **76** (2007) 023508 [arXiv:astro-ph/0702015].
* (45) S. Nojiri, S. D. Odintsov and S. Tsujikawa, Phys. Rev. D **71** 063004 (2005), [arxiv:hep-th/0501025].
* (46) S. Capozziello, S. Nojiri and S. D. Odintsov, Phys. Lett. B **634**, 93 (2006) [arxiv:hep-th/0512118] | An occurrence of an oscillating Universe is showed using an inhomogeneous equation of state for dark energy fluid. The Hubble parameter described presents a periodic behavior such that early and late time acceleration are unified under the same mechanism. Also, it is considered a coupling between dark energy fluid, with homogeneous and constant EoS, and matter, that gives a periodic Universe too. The possible phantom phases and future singularities are studied in the oscillating Universe under discussion. The equivalent scalar-tensor representation for the same oscillating Universe is presented too. | Provide a brief summary of the text. |
arxiv-format/0805_0472v1.md | # Validating Time-Distance Far-side Imaging of
Solar Active Regions through Numerical Simulations
Thomas Hartlep1, Junwei Zhao2, Nagi N. Mansour1, and Alexander G. Kosovichev2
\({}^{1}\) NASA Ames Research Center, M/S 230-2, Moffett Field, CA 94035-1000
\({}^{2}\) W. W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305-4085
## 1 Introduction
Imaging of active regions on the far side of the Sun, the side facing away from Earth, is a valuable tool for space weather forecasting, as well as for studying the evolution of active regions. It allows monitoring of active regions before they rotate into the near side, and after they rotate back into the far side. Far-side images are produced daily by the acoustic holography technique (Lindsey & Braun 2000a) using observations from both the Global Oscillation Network Group (GONG) and the Michelson Doppler Imager (MDI) on board the Solar and Heliospheric Observatory (SOHO). Lindsey & Braun (2000a) and Duvall & Kosovichev (2001) have pioneered the solar far-side imaging work, mapping the central region of the far-side Sun by analyzing acoustic signals for double-skip ray paths on both sides of the far-side region, by use of helioseismic holography (for a review, see Lindsey & Braun 2000b) and time-distance helioseismology (Duvall et al. 1993) techniques, respectively. Braun & Lindsey (2001) further developed their technique to map the near-limb and polar areas of the solar far side by combining single- and triple-skip acoustic signals. More recently, Zhao (2007) has developed a five-skip time-distance imaging scheme that measures travel times of a combination of double- and triple-skip acoustic wave signals. Combined with the traditionally used four-skip far-side imaging schemes, the new technique greatly reduces the noise level in far-side images, as well as helps to remove some of the spurious features visible in the four-skip images.
In general terms, far-side imaging by time-distance helioseismology detects changes in the travel time for acoustic waves traveling through an active region compared to those traveling only in the quiet Sun, while helioseismic holography detects phase shifts in acoustic wave signals. The exact mechanisms of how the presence of an active region causes the observed variations are not fully understood, although, it is generally believed that a change of the magnetoacoustic wave speed inside active regions (Kosovichev & Duvall 1997; Kosovichev et al. 2000; Zhao & Kosovichev 2006) plays an important role. Also, it has been argued that strong surface magnetic fields associated with active regions (Fan et al. 1995; Lindsey & Braun 2005) may affect inferences obtained by the acoustic holography technique; however, Zhao & Kosovichev (2006) have shown that these effects are not a major factor in the determination of the interior structure of sunspots by time-distance helioseismology.
Far-side imaging has been successful for predicting the appearance of large active regions and complexes of activity. However, it is unclear how robust and accurate the far-side imaging techniques are, and how much we should believe in the far-side images that are being produced daily. Past efforts have tried to evaluate the accuracy of far-side images by comparing them with the directly observed Earth-side images just after active regions have rotated into view from the far side, or before they have rotated out of view into the far side (Gonzalez Hernandez et al. 2007). However, active regions may develop quite fast, emerging or disappearing on a time scale of days or even less. Therefore, such analyses are not sufficient. On the other hand, numerical modeling of solar oscillations can provide artificial data that can enable evaluating and improving these methods. In a global solar model, we can place near-surface perturbations mimicking active regions on the far side of the modeled Sun, and apply helioseismic imaging techniques to the simulated wavefield. The resulting far-side images can be compared directly with the precisely known properties of the perturbations, allowing for a more accurate evaluation of the capabilities and limitations of the far-side imaging techniques.
In this paper, we present results on testing the recently improved time-distance helioseismology far-side imaging technique (Zhao, 2007) by using 3D numerical simulations of the oscillations in the global Sun. We assess the sensitivity of the imaging technique by varying the size and location of a sound speed perturbation mimicking a single active region. In other simulations, we place two active regions at the solar surface in order to examine whether the acoustic waves traveling through the active regions may interfere with each other and affect the imaging of the other region. Finally, we identify one scenario in which artifacts (\"ghost images\") caused by an active region on the near side appear in the far-side maps. A brief description of the simulation technique is given in SS 2, followed by a description of the far-side imaging procedure in SS 3. The main results are presented in SS 4, and a discussion and concluding remarks are given in SS 5.
## 2 Numerical Simulation
### Simulation Code
In the following, we briefly describe the numerical simulation code used in this study. For more details, the reader is referred to Hartlep & Mansour (2005), and in particular, to a detailed description of the code, which will be published soon (Hartlep & Mansour, in preparation).
Simulating the 3D wavefield in the full solar interior is not an easy task, and many simplifications have to be made to make such simulations feasible on currently available supercomputer systems. For the present case, we model solar acoustic oscillations in a spherical domain by using linearized Euler equations and consider a static background in which only localized variations of the sound speed are taken into account. The oscillations are assumed to be adiabatic, and are driven by randomly forcing density perturbations near the surface. For the unperturbed background model of the Sun, we use the standard solar model S of Christensen-Dalsgaard et al. (1996) matched to a model for the chromosphere (Vernazza et al., 1981). Localized sound speed perturbations of various sizes are added in the surface and subsurface layers to mimic the perturbations of the wave speed associated with sunspots and active regions. Non-reflecting boundary conditions are applied at the upper boundary by means of an absorbing buffer layer with a damping coefficient that is zero in the interior and increases smoothly into the buffer layer.
The linearized Euler equations describing wave propagation in the Sun are written in the form:
\\[\\partial_{t}\\rho^{\\prime} = -\\Phi^{\\prime}+S-\\chi\\rho^{\\prime}, \\tag{1}\\] \\[\\partial_{t}\\Phi^{\\prime} = -\\Delta c_{0}^{2}\\rho^{\\prime}+\
abla\\cdot\\rho^{\\prime}\\mathbf{g_{0} }-\\chi\\Phi^{\\prime}, \\tag{2}\\]
where \\(\\rho^{\\prime}\\) and \\(\\Phi^{\\prime}\\) are the density perturbations and the divergence of the momentum perturbations associated with the waves, respectively. \\(S\\) is a random function mimicking acoustic sources, \\(c_{0}\\) is the background sound speed, \\(\\mathbf{g_{0}}\\) is the acceleration due to gravity, and \\(\\chi\\) is the damping coefficient of the absorbing buffer layer. Perturbations of the gravitational potential have been neglected, and the adiabatic approximation has been used. In order to make the linearized equations convectively stable, we have neglected the entropy gradient of the background model. The calculations show that this assumption does not significantly change the propagation properties of acoustic waves including their frequencies, except for the acoustic cut-off frequency, which is slightly reduced. This is quite acceptable for our purpose, because the part of the spectrum that is actually used in the far-side imaging technique lies well below this cut-off frequency. For comparison, other authors have modified the solar model including its sound speed profile (e.g. Hanasoge et al., 2006; Parchevsky & Kosovichev, 2007). In those cases, the oscillation mode frequencies may differ significantly from the real Sun frequencies.
Starting from Eqs. (1) and (2), we absorb the damping terms \\(\\chi\\rho^{\\prime}\\) and \\(\\chi\\Phi^{\\prime}\\) into the other terms by use of an integrating factor, and apply a Galerkin scheme for the numerical discretization. Spherical harmonic functions are used for the angular dependencies, and 4th order B-splines (Loulou et al., 1997; Kravchenko et al., 1999) for the radial direction. 2/3-dealiasing is used in the computations of the \\(c_{0}^{2}\\rho^{\\prime}\\)-term in angular space, while all other operations are performed in spherical harmonic coefficient space. The radial resolution of the B-spline method is varied proportionally to the local speed of sound, i.e. the generating knot points are closely space near the surface (where the sound speed is small), and are coarsely spaced in the deep interior (where the sound speed is large). The simulations presented in this paper employ the spherical harmonics of angular degree \\(l\\) from 0 to 170, and 300 B-splines in the radial direction. A staggered Yee scheme (Yee, 1966) is used for time integration, with a time step of 2 seconds.
The oscillation power spectrum as a function of spherical harmonic degree \\(l\\) computed for one of the performed simulations is shown in Figure 1. It is found that the frequencies of the ridges correspond well with the frequencies from solar observations. As noted before, the model has a lower cut-off frequency, but this does not pose a problem for our purposes. Also, Figure 1 shows a time-distance diagram (i.e., the mean cross-covariance function) calculated for the same simulation data. Even though no filtering has been done for computing the time-distance correlations, both the four-skip and five-skip acoustic signals needed for the far-side imaging technique are clearly visible. In fact, these correlations are stronger than in observational data, where it is essential to filter out other unwanted wave components (compare, e.g. Zhao, 2007). The acoustic travel times are fairly close to those found in solar observations. Even for the long travel times of four- and five-skip signals, the discrepancy between the simulations and the observations is only about 1.2 minutes, or 0.2 percent.
### Active Region Model
Solar active regions are complex structures and are believed to differ from the quiet Sun in their temperature, density, and sound speed distributions, and include complicated flow and magnetic field configurations. The acoustic wave speed variations inside active regions due to temperature changes and magnetic fields have, for obvious reasons, a very direct effect on the travel times. For this investigation, we model active regions by local sound speed perturbations, which include the combined temperature and magnetic effects (Kosovichev et al., 2000), but leave it to a later investigation to include plasma flows. Since the main goal of the current far-side imaging efforts is to detect the locations of active regions and estimate their size, this is quite sufficient. We model a solar active region by a circular region in which the sound speed \(c\) differs from the quiet Sun sound speed \(c_{o}\) in the following fashion:
\\[\\left(c/c_{o}\\right)^{2}=1+f(\\alpha)g(h), \\tag{3}\\]
where \\(\\alpha\\) is the angular distance from the center of the active region, \\(h\\) the radial distance from the photosphere, and
\\[f(\\alpha)=\\begin{cases}1+\\cos(\\pi\\alpha/\\alpha_{d})&\\text{for }|\\alpha|\\leq \\alpha_{d};\\\\ 0&\\text{otherwise}.\\end{cases} \\tag{4}\\]
The radial profile \\(g(h)\\) of the prescribed sound speed perturbation is shown in Figure 2. The profile has been derived by inversions of the time-distance measurements of an actual sunspot (Kosovichev et al., 2000), and confirmed by a number of other local helioseismology inversions (Jensen et al., 2001; Sun et al., 2002; Basu et al., 2004; Couvidat et al., 2006; Zharkov et al., 2007). Some of these studies have shown that the significant sound speed perturbation associated with the sunspot structure probably extends deeper than what was originally inferred. Also, investigations of large active regions by Kosovichev & Duvall (2006) have indicated that the perturbations are extended significantly deeper than those for the relatively small and isolated sunspot in Kosovichev et al. (2000). Therefore, we extended this profile into the deeper layers as shown in Figure 2. The simulations have been performed for three different active region horizontal sizes \\(\\alpha_{d}\\) corresponding to radii at the solar surface of 45, 90 and 180 Mm, respectively. Effects of structure variations with depth or the strength of the perturbations have not been studied.
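In code, the perturbation model of Eqs. (3) and (4) is straightforward; the Python sketch below is an illustration only, taking the radial profile \(g(h)\) as a precomputed array since the tabulated values behind Figure 2 are not reproduced here:

```python
import numpy as np

def f_horizontal(alpha, alpha_d):
    """Horizontal profile f(alpha) of Eq. (4); alpha and alpha_d in radians."""
    return np.where(np.abs(alpha) <= alpha_d,
                    1.0 + np.cos(np.pi*alpha/alpha_d), 0.0)

def perturbed_sound_speed(c0, alpha, alpha_d, g_h):
    """Perturbed sound speed from Eq. (3): (c/c0)^2 = 1 + f(alpha) g(h)."""
    return c0*np.sqrt(1.0 + f_horizontal(alpha, alpha_d)*g_h)
```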
## 3 Far-Side Imaging Procedure
Zhao (2007) has imaged the solar far side using medium-\\(l\\) data acquired by SOHO/MDI (Scherrer et al., 1995). MDI medium-\\(l\\) data consist of line-of-sight photospheric velocity images with a cadence of 1 minute and a spatial sampling of \\(0.6^{\\circ}\\) per pixel (here and after, degree means heliographic degree). The data are mapped into heliographic coordinates using Postel's projection, and only the central \\(120^{\\circ}\\times 120^{\\circ}\\) region of the solar disk is used for the far-side imaging analysis. The observational time series were 2048 minutes long.
From the simulations, very similar datasets were generated. Radial velocity maps were computed at a location of 300 km above the photosphere, approximately at the formation height of MDI Dopplergrams (Norton et al., 2006), and stored with a 1-minute cadence and a spatial resolution of \\(0.703^{\\circ}\\) per pixel, slightly lower than the resolution of the MDI data. The region selected for the analysis was of the same size, \\(120^{\\circ}\\times 120^{\\circ}\\), as in the analysis of the MDI observations. The first 500 minutes of each simulation were discarded as they represent transient behavior, and the following 1024 minutes were used in the analysis. This is only half of the duration used in the observational analysis, but as Figure 1 shows, four- and five-skip acoustic signals are sufficiently strong to perform the far-side analysis even with such a relatively short period.
The rest of the procedure for the simulation data is the same as for the observations presented in Zhao (2007). After the remapping, the data are filtered in the Fourier domain, and only waves that travel long enough to return to the near side from the back side after four and five rebounces are kept. The time-distance cross-covariance function is computed for points inside the annuli as indicated in Figure 3. The locations and sizes of these annuli depend on the measurement scheme. For the four-skip scheme and the double-double skip combination, the annulus covers a range of distances of \(133.8^{\circ}-170.0^{\circ}\) from the targeted point on the far side. For the single-triple combination, this range is \(66.9^{\circ}-85.0^{\circ}\) for the single skip, and \(200.7^{\circ}-255.0^{\circ}\) for the triple skip. The computed cross-covariance functions are then fitted with a Gabor wavelet function (Kosovichev & Duvall, 1997) to derive the acoustic phase travel times for the four- and five-skip schemes separately. After a mean background travel time is subtracted from each map, the residual travel-time maps show variations corresponding to active regions on the far side.
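At the heart of the procedure is the cross-covariance of the filtered signals; a minimal sketch of such a computation (positive lags only, via FFT with zero padding) is given below. This is an illustration, not the analysis pipeline, which additionally performs the filtering and annulus averaging described above:

```python
import numpy as np

def cross_covariance(f, g):
    """Cross-covariance C(tau) of two equally sampled time series f and g."""
    f = f - f.mean()
    g = g - g.mean()
    n = len(f)
    F = np.fft.rfft(f, 2*n)          # zero padding avoids circular wrap-around
    G = np.fft.rfft(g, 2*n)
    return np.fft.irfft(np.conj(F)*G)[:n]/n
```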
## 4 Results
### Sensitivity
In order to examine the sensitivity of the time-distance far-side imaging technique to the size of active regions, we have simulated the global acoustic wave fields for solar models with sound speed perturbations of 3 different values of their radius: 180 Mm (large), 90 Mm (medium), and 45 Mm (small). The radial structure of the sound speed perturbation has been given in Sec. 2.
Figure 4 presents the case when a medium-sized active region is located at the far-side center (directly opposite to the observer). It can be seen that both four- and five-skip measurement schemes can recover this far-side region, but with some level of spurious features. The combined image from both schemes gives a better active region image, though not completely clear of spurious features. The images are displayed with thresholds of \\(-3.5\\sigma\\) to \\(-2\\sigma\\), where \\(\\sigma\\) is the standard deviation of the travel-time perturbations, in order to isolate the strong negative signals associated with active regions. The original unrestricted image without thresholding, and the corresponding probability distribution function of the travel time residuals are shown in Figure 5. In this particular case, \\(\\sigma\\) is of the order of 12 seconds for the combined image. For comparison, a lower value of 3.3 seconds was found in observations (Zhao, 2007). The noise level depends on the stochastic properties of solar waves and the length of the data time series. This probably explains the difference in the noise levels. However, this difference is not significant for this study since we measure the signal relative to the noise level.
Figure 6 shows the same, medium-sized active region, but now located closer to the far-side limb. Once again, the combined far-side image gives the best result. It is clear from both Figures 4 and 6 that the time-distance technique determines the size and location of the far-side active regions well but fails to accurately image their shape.
Figure 7 presents the travel-time images combined from the four- and five-skip measurements for the simulations of the large active region. It is evident that the time-distance technique gives the correct size, location, and even shape of the far-side active regions for both far-side locations: at the center, and near the limb.
For the case with a small active region (45 Mm radius), the time-distance helioseismology imaging fails to provide any credible signature of the existence of the region on the far side. The travel-time maps are not shown for this case, since they don't show any significant features. Of course, it should come as no surprise that the imaging technique has a lower limit on what size of active region can be detected. The time-distance far-side imaging method used in this study employs only the oscillation modes with spherical harmonic degrees \\(l\\) between 3 and 50. It is conceivable that structures comparable in size or even smaller than the horizontal wavelength of the acoustic waves used in the analysis will have little effect on such waves. Such small structures would be hard or impossible to detect. A simple estimate of the node-to-node distance for a spherical harmonic of degree 50 (the highest used in the analysis) gives about 90 Mm at the surface, or twice the radius of the small active region.
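This estimate follows from the horizontal wavelength of a mode of degree \(l\), \(\lambda_{h}=2\pi R_{\odot}/\sqrt{l(l+1)}\):

```python
import numpy as np

R_sun, l = 696.0, 50     # solar radius in Mm; highest degree used in the analysis
lam = 2.0*np.pi*R_sun/np.sqrt(l*(l + 1.0))
print(f"horizontal wavelength at l = {l}: {lam:.0f} Mm")   # ~87 Mm, i.e. about 90 Mm
```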
It is quite common that multiple active regions are present on the Sun. Some of them may produce perturbations of the wave field, which may interfere with the perturbation of a targeted active region. In order to examine whether the different regions would interfere with each other in the far-side images, we performed a simulation with two medium-sized active regions located at the solar equator, 150\\({}^{\\circ}\\) apart from each other. We have examined various different far-side locations of the active regions, and two examples are presented in Figure 8. In all cases we found that these two active regions do not interfere with each other. Both active regions have been imaged correctly as if they were the sole regions on the Sun, except that some \"ghost images\" appeared under certain circumstances. However, such artifacts also appear for a single active region case under the same circumstances, as described in the next section.
For convenience, the active regions in the numerical experiments were placed on the far-side equator. On the real Sun, though, active regions are often far from the equator, and one may be confronted with additional effects such as foreshortening and line-of-sight projection. However, these effects are expected to be small because only oscillations of relatively low angular degrees are used in the analysis and also because of the rather small observing window extending only 60\\({}^{\\circ}\\) from the disk center. To test this expectation, we have performed an additional numerical experiment with a medium-sized active region placed at a latitude of 20\\({}^{\\circ}\\) above the equator, used line-of-sight velocities instead of the pure radial velocities, and included the effect of foreshortening. The results were not significantly different from those in Figure 4.
### Ghost Images
It is found that when an active region is placed at certain locations, a \"ghost image\" of the active region may appear in the far-side image. Figure 9 presents two such examples when an active region on the near side is close to the limb. A \"ghost image\" appears approximately at the antipode of this active region, with a weaker acoustic travel time signal and smaller in size. Note that because of the selection of very small \\(l\\)'s when computing far-side images, the spatial resolution of images is only about \\(10^{\\circ}\\). Therefore, the \"ghost image\" may appear several degrees away from the antipode of the region.
Given the measurement scheme, it is very reasonable to expect such an artifact when the active region is located close to the near-side limb. Consider, for example, a single-triple skip combination in the four-skip measurement scheme. If we select an annulus \\(70^{\\circ}\\) from a targeted far-side quiet region, this annulus is also \\(250^{\\circ}\\) away from that quiet region's antipode. If an active region is located there, acoustic waves with travel time deficits caused by that active region are not filtered out because their distance range also falls in the triple-skip range in our analysis (compare annulus radii in Sec. 3).
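The arithmetic behind this statement can be made explicit. The toy check below (illustrative only) confirms that points \(70^{\circ}\) from a target point lie \(250^{\circ}\) from the target's antipode when the angular distance is measured the long way around the great circle:

```python
# Target on the far side; its antipode is 180 degrees away.
annulus_radius = 70.0                  # deg, annulus distance from target
short_arc = 180.0 - annulus_radius     # annulus-to-antipode, short way
long_arc = 360.0 - short_arc           # ... and the long way around
print(short_arc, long_arc)             # -> 110.0 250.0
# Waves from an active region sitting on this annulus can therefore also
# match the 250-degree triple-skip distance range of the four-skip scheme.
```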
## 5 Discussion
We have successfully simulated the global acoustic wavefield of the Sun and have used the simulation data to validate the time-distance far-side imaging technique for two measurement schemes with four and five skips of the acoustic ray paths.
We have found that this technique is able to reliably detect our model active regions with radii of 90 Mm and 180 Mm. The locations and sizes of the far-side active regions are determined correctly, although their shapes are often slightly different from the original. Expectedly, larger active regions are easier to detect, and their images are clearer. For the small active region of 45 Mm radius, the far-side imaging method fails since it is below the resolution limit. In the case of more than one active region present on the solar surface, we have found that they do not affect each other's detection. The time-distance analysis can detect the individual active regions as if they were completely independent.
We have also shown that when an active region is located close to the limb on the near side, a "ghost image" may appear in the far-side image, approximately at its antipode, but relatively weak and smaller in size. Even though this effect is not completely unexpected, it has not been noticed in previous analyses of observational data in either helioseismic holography (Braun & Lindsey, 2001) or time-distance helioseismology (Zhao, 2007). This is an important finding and gives us hints on when and where features in observational far-side images may merely be artifacts (i.e., "ghost images") and are not caused by actual far-side active regions.
We thank Dr. Alan A. Wray and Dr. Konstantin V. Parchevsky for reading this manuscript and their helpful comments.
This work was supported by NASA's \"Living With a Star\" program. Support from the NASA Postdoctoral Program administered by Oak Ridge Associated Universities is gratefully acknowledged.
## References
* () Basu, S., Antia, H. M., & Bogart, R. S. 2004, ApJ, 610, 1157
* () Braun, D. C., & Lindsey, C. 2001, ApJ, 560, L189
* () Christensen-Dalsgaard, J., et al. 1996, Science, 272, 1286
* () Christensen-Dalsgaard, J. 2002, Rev. Modern Phys., 72, 1073
* () Couvidat, S., Birch, A. C., & Kosovichev, A. G. 2006, ApJ, 640, 516
* () Duvall, T. L., Jr., Jefferies, S. M., Harvey, J. W., & Pomerantz, M. A. 1993, Nature, 362, 430
* () Duvall, T. L., Jr., & Kosovichev, A. G. 2001, Proc. IAU Symp. 203, 159
* () Fan, Y., Braun, D. C., & Chou, D.-Y. 1995, ApJ, 451, 877
* () Gonzalez Hernandez, I., Hill, F., & Lindsey, C. 2007, ApJ, 669, 1382
* () Hanasoge, S. M., Larsen, R. M., Duvall, T. L., DeRosa, M. L., Hurlburt, N. E., Schou, J., Christensen-Dalsgaard, J., & Lele, K. 2006, ApJ, 648, 1268
* () Hartlep, T., & Mansour, N. N. 2005, Annual Research Briefs-2005, 357, Center for Turbulence Research, Stanford, California
* () Jensen, J. M., Duvall, T. L., Jr., Jacobsen, B. H., & Christensen-Dalsgaard, J. 2001, ApJ, 553, L193
* () Kosovichev, A. G., Duvall, T. L., Jr., & Scherrer, P. H. 2000, Sol. Phys., 192, 159
* () Kosovichev, A. G., & Duvall, T. L., Jr. 1997, SCORe'96 : Solar Convection and Oscillations and their Relationship, 225, 241
* () Kosovichev, A. G., & Duvall, T. L. 2006, Space Science Reviews, 124, 1
* () Kravchenko, A. G., Moin, P., & Shariff, K. 1999, J. Comp. Phys., 151, 757
* () Lindsey, C., & Braun, D. C. 2000a, Science, 287, 1799
* () Lindsey, C., & Braun, D. C. 2000b, Sol. Phys., 192, 261
* () Lindsey, C., & Braun, D. C. 2005, ApJ, 620, 1107
* () Loulou, P., Moser, R. D., Mansour, N. N., & Cantwell, B. J. 1997, Technical Memorandum 110436, NASA Ames Research Center, Moffett Field, California
* () Norton, A. A., Graham, J. P., Ulrich, R. K., Schou, J., Tomczyk, S., Liu, Y., Lites, B. W., Ariste, A. L., Bush, R. I., Socas-Navarro, H., & Scherrer, P. H. 2006, Sol. Phys., 239, 69
* () Parchevsky, K. V., & Kosovichev, A. G. 2007, ApJ, 666, 547
* () Rhodes, E. J., Kosovichev, A. G., Scherrer, P. H., Schou, J., & Reiter, J 1997, Sol. Phys., 175(2), 287
* () Scherrer, P. H., et al. 1995, Sol. Phys., 162, 129
* () Sun, M.-T., Chou, D.-Y., & The TON Team 2002, Sol. Phys., 209, 5
* () Vernazza, J. E., Avrett, E. H., & Loeser, R. 1981, ApJS, 45, 635
* () Yee, K. S. 1966, IEEE Trans. Antenna and Propagation, 14, 302
* () Zhao, J. 2007, ApJ, 664, L139
* () Zhao, J., & Kosovichev, A. G. 2006, ApJ, 643, 1317
* () Zharkov, S., Nicholas, C. J., & Thompson, M. J. 2007, Astronomische Nachrichten, 328, 240

Figure 1: The oscillation power spectrum (_left_) and the time-distance diagram (_right_) of a simulated data set. The white dots in the left panel show for comparison the observed frequencies obtained from 144 days of MDI medium-\(l\) data using the averaged-spectrum method (Rhodes et al., 1997). The green dashed lines in the right panel indicate the ray-theory predictions for the four-skip and five-skip signals of the acoustic wave packets, which travel from the Earth side to the far side and back to the Earth side.
Figure 2: The radial profile of the sound speed perturbation in the center of the model active region (_solid curve_), with positive distances denoting locations above the photosphere. The profile was derived by extending the perturbation profile of a sunspot in NOAA active region 8243 on June 18, 1998 (_dotted curve_) inferred from time-distance measurements (Kosovichev et al., 2000). The profile is extended to account for deeper perturbations of large active regions (see text for details).
Figure 3: Illustrations of the four-skip and five-skip measurement schemes used in the far-side imaging method of Zhao (2007). The skips correspond to the ray paths of acoustic waves traveling between surface points through the solar interior. Specifically, (_a_) represents the scheme with two skips on either side of the target point, (_b_) the single-triple skip scheme, and (_c_) the double-triple skip scheme.
Figure 4: Results for an active region with a radius of 90 Mm positioned at the center of the solar far side. The individual panels show: (_a_) the far-side image from the four-skip measurement scheme, (_b_) the image from the five-skip scheme, and (_c_) the combined far-side image. The images show the derived travel-time signals for the different schemes, and are displayed with thresholds of \\(-3.5\\sigma\\) to \\(-2\\sigma\\), with \\(\\sigma\\) being the standard deviation of the travel time variations (noise level). The dotted lines in panel (_b_) indicate the spatial limits of the five-skip scheme, which does not cover the whole far side. As an illustration of the actual size and location of the model active region, panel (_d_) depicts the average acoustic power on the far side computed at the photospheric level. Inside the active region, a reduction in the average acoustic power is observed (_indicated in black_) compared to the quiet Sun (_shown in white_).
Figure 5: The combined method image from Figure 4(c) without thresholding (_left_), and the distribution of travel-time residuals in the image (_right_). The dotted lines indicate the threshold limits used for rendering the image in Figure 4(c).
Figure 6: Same as Figure 4, except the model active region is placed near the limb of the far side.
Figure 7: Far-side images of a large model active region with a radius of 180 Mm positioned at the center (_a_) and near the limb (_c_) of the far side. The acoustic power at the photosphere for the two cases is shown in panels (_b_) and (_d_), respectively.
Figure 8: Far-side images for a simulation with two medium-sized model active regions at the equator. The two regions are 150\\({}^{\\circ}\\) apart longitudinally, namely, at 180\\({}^{\\circ}\\) and 330\\({}^{\\circ}\\). For the two panels, two different regions have been selected as the near side. The panels depict a combination of the acoustic power map of the near-side (_white background_), and the far-side map (_gray background_) computed from the oscillation data on the near side.
Figure 9: Far-side images for a simulation with a single, 90 Mm-radius model active region. The active region is placed at a longitude of 140\({}^{\circ}\). Its smaller and weaker "ghost image" can be found at approximately 320\({}^{\circ}\) in both panels. Similarly to Figure 8, two different parts of the surface have been selected as the near side for the two panels, and again show a combination of the acoustic power maps of the near side (_white background_) and the far-side image (_gray background_).

Far-side images of solar active regions have become one of the routine products of helioseismic observations, and are of importance for space weather forecasting by allowing the detection of sunspot regions before they become visible on the Earth side of the Sun. An accurate assessment of the quality of the far-side maps is difficult, because there are no direct observations of the solar far side to verify the detections. In this paper we assess far-side imaging based on the time-distance helioseismology method, by using numerical simulations of solar oscillations in a spherical solar model. Localized variations in the speed of sound in the surface and subsurface layers are used to model the perturbations associated with sunspots and active regions. We examine how the accuracy of the resulting far-side maps of acoustic travel times depends on the size and location of active regions. We investigate potential artifacts in the far-side imaging procedure, such as those caused by the presence of active regions on the solar near side, and suggest how these artifacts can be identified in the real Sun far-side images obtained from SOHO/MDI and GONG data.
methods: numerical -- Sun: helioseismology -- Sun: oscillations -- sunspots
# Fast rotation of neutron stars and equation of state of dense matter
Pawel Haensel
[email protected]
Julian L. Zdunik
Michal Bejger
Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, Bartycka 18, PL-00-716 Warszawa, Poland
## 1 Introduction
Neutron stars, and more generally, compact stars (neutron stars, hybrid hadron-quark stars, quark stars) are the densest stellar objects in the Universe. Due to their compactness and strong gravity, compact stars can be very fast rotators. In the present paper we limit ourselves to rigid rotation. Theoretical calculations show that compact stars could rotate at sub-millisecond periods, i.e., at frequencies \(f=1/{\rm period}>1000\) Hz (Cook et al. 1994; Salgado et al. 1994).
The quest for fast rotating compact stars has an interesting history. The first millisecond pulsar B1937+21, rotating at \\(f=641\\) Hz (Backer et al., 1982), remained the most rapid one for 24 years. In January 2006, discovery of a faster pulsar J1748-2446ad rotating at \\(f=716\\) Hz was announced (Hessels et al., 2006). However, sub-kHz frequencies are still too low to significantly affect the structure of massive neutron stars with \\(M>1M_{\\odot}\\) (Shapiro et al., 1983; Haensel et al., 2007). Actually, pulsars B1937+21 and J1748-2446ad still rotate in a _slow rotation_ regime, because their \\(f\\) is significantly smaller than the mass shedding (Keplerian) frequency \\(f_{\\rm K}\\). In the slow rotation regime rotational effects in neutron star structure, e.g., polar flattening, are \\(\\propto(f/f_{\\rm K})^{2}\\ll 1\\), and hence not large. Rapid rotation regime for \\(M>1M_{\\odot}\\) requires sub-millisecond pulsars with supra-kHz frequencies (\\(f>1000\\) Hz).
Exciting news came in December 2006. Kaaret et al. (2007) reported a discovery of oscillation frequency \\(f=1122\\) Hz in an X-ray burst from the X-ray transient, XTE J1739-285. Kaaret et al. (2007) wrote cautiously \"this oscillation frequency suggests that XTE J1739-285 contains the fastest rotating neutron star yet found\". If confirmed, this would be the first detection of a sub-millisecond pulsar (discovery of a 0.5 ms pulsar in SN1987A remnant, announced in January 1989, was withdrawn one year later).
Fast rotation of compact stars is sensitive to the stellar mass and to the equation of state (EOS). Hydrostatic, stationary configurations of neutron stars rotating at a given rotation frequency \(f\) form a one-parameter family, labeled by the central density. This family - a curve in the mass - equatorial radius plane - is limited by two instabilities. On the high central density side, it is the instability with respect to axi-symmetric perturbations, making the star collapse into a Kerr black hole. The low central density boundary results from the mass shedding from the equator. In the present paper we discuss the dependence of rotation at \(f>1000\) Hz on the poorly known EOS, and derive constraints on the EOS of neutron stars which could result from future observations of stably rotating sub-millisecond pulsars.
The plan of the paper is as follows. In Sect. 2 we briefly describe the EOSs used in our calculations. Numerical methods of solving the equations of hydrostatic equilibrium of rigidly rotating compact stars in General Relativity, and criteria for their stability, are presented in Sect. 3. In Sect. 4 we consider the EOS dependence of rotation at \(f=1122\) Hz. A systematic study of the EOS dependence of fast rotation at \(f=1000-1600\) Hz is presented in Sect. 5. Reaching fast rotation via disk accretion in LMXBs and the EOS dependence of the spin-up track are reviewed in Sect. 6. Finally, Sect. 7 briefly summarizes the main results reported in the paper.
We are pleased to present our paper on fast rotation of neutron stars in the proceedings of the conference in honor of J.-P. Lasota. In the 1990s Jean-Pierre became intrigued by the puzzling precision of the "empirical formula" for the absolute upper bound on the rotation frequency of compact stars, proposed by two of us (PH and JLZ) in the aftermath of the ill-fated "discovery" of a 0.5 ms pulsar in SN 1987A (Haensel & Zdunik, 1989). Two joint papers resulted from our collaboration (Lasota et al., 1996; Haensel et al., 1999); some progress has been made, but the basic puzzle still remains to be solved.
## 2 Equations of state
In view of the high degree of our ignorance of the EOS of dense matter at supranuclear densities (\(\rho>3\times 10^{14}\) g cm\({}^{-3}\)), we considered a broad set of theoretical models. The set of ten equations of state (EOSs) considered in the paper is presented in Fig. 1. The EOSs are also listed in Table 1, where the basic information (label of the EOS, theory of dense matter, reference to the original paper) is also collected.
Two EOSs were chosen to represent a soft (BPAL12) and stiff (GN3) extreme cases. These two extreme EOSs should not be considered as \"realistic\", but they are used just to \"bound\" the models from the soft and the stiff sides.
Four EOSs are based on realistic models involving only nucleons (FPS, BBB2, DH, APR). The next four EOSs are softened at high density either by the appearance of hyperons (GNH3, BGN1H1) or by a phase transition (GMGS-Km, GMGS-Kp). A softening in the latter case is clearly visible in Fig. 1 at pressure \(P\sim 10^{35}\) dyn/cm\({}^{2}\).
Two EOSs, GMGS-Kp and GMGS-Km, describe nucleon matter with a first order phase transition to a kaon condensed state. In both cases the hadronic Lagrangian is the same. However, to get GMGS-Kp we assumed that the phase transition takes place between two pure phases and is accompanied by a density jump, calculated using the Maxwell construction. The GMGS-Km EOS was obtained assuming that the transition occurs via a mixed state of two phases (Gibbs construction). A mixed state is energetically preferred when the surface tension between the two phases is below a certain critical value. As the value of the surface tension is very uncertain, we considered both cases.
## 3 Calculation of stationary rotating configurations and their stability
We computed stationary configurations of rigidly rotating neutron stars in the framework of General Relativity by solving the Einstein equations for stationary axi-symmetric spacetime (Bonazzola et al., 1993; Gourgoulhon et al., 1999). Numerical computations have been performed using the rotstar code from the LORENE library ([http://www.lorene.obspm.fr](http://www.lorene.obspm.fr)). We calculated one-parameter families of stationary 2-D configurations for ten EOSs, presented in Fig. 1 and Table 1.

Figure 1: The equations of state in the log\(P-\)log\(\rho\) plane. For labels - see Table 1.
Apart from fulfilling the equations of hydrostatic equilibrium, stationary configurations were required to be stable. Two instabilities were considered.
_Mass-shedding._ Stability with respect to mass shedding from the equator implies that at a given gravitational mass \(M\) the circumferential equatorial radius \(R_{\rm eq}\) should be smaller than \(R_{\rm max}\), which corresponds to the mass-shedding (Keplerian) limit. The value of \(R_{\rm max}\) results from the condition that the orbital frequency of a test particle on a circular equatorial orbit of radius \(R_{\rm max}\), just above the equator of the _actual rotating star_, is equal to the rotational frequency of the star. This stability condition bounds our rotating configurations from the right side of the \(M-R_{\rm eq}\) plane. It fixes the largest radius and the smallest central density allowed for stable stationary configurations.
_Axi-symmetric oscillations._ Instability with respect to these oscillations determines the bound for most compact stars, with the smallest radius and the highest central density (i.e., from the left side on the \\(M-R_{\\rm eq}\\) plane). This bound is determined by the condition:
\\[\\left(\\frac{\\partial M}{\\partial\\rho_{\\rm c}}\\right)_{J}=0. \\tag{1}\\]
For stable configurations:
\\[\\left(\\frac{\\partial M}{\\partial\\rho_{\\rm c}}\\right)_{J}>0. \\tag{2}\\]
In the opposite case, \(\left(\partial M/\partial\rho_{\rm c}\right)_{J}<0\), a compact star is doomed to collapse into a Kerr black hole.
## 4 An example: compact stars at 1122 Hz
In this section we present the parameters of the stellar configurations rotating at frequency 1122 Hz, a suggested rotation frequency of XTE J1739-285. For details and discussion see Bejger et al. (2007).
In Fig. 2 we show the \(M(R_{\rm eq})\) plots of stable stationary configurations rotating at \(f=1122\) Hz. We considered EOSs from Table 1. The relation between the calculated values of \(M\) and \(R_{\rm eq}\) at the "mass shedding point" is extremely well approximated by the formula for the orbital frequency of a test particle orbiting at \(r=R_{\rm eq}\) in the Schwarzschild space-time of a _spherical mass_ \(M\) (which can be replaced by a point mass \(M\) at \(r=0\)). Let us denote the orbital frequency of such a test particle by \(f_{\rm orb}^{\rm Schw.}(M,R_{\rm eq})\). The locus of points satisfying \(f_{\rm orb}^{\rm Schw.}(M,R_{\rm eq})=1122\) Hz is represented by a dashed line in Fig. 2. The points on the dashed line satisfy the relation
\[\frac{1}{2\pi}\left(\frac{GM}{R_{\rm eq}^{3}}\right)^{1/2}=1122\ {\rm Hz}. \tag{3}\]

Table 1: Equations of state. N - nucleons and leptons only. NH - nucleons, hyperons, and leptons. Labels of the EOSs are composed from the first letters of the names of the authors of the EOS of the core. In all cases but FPS, the EOS of the crust is the DH one. For the FPS EOS its own crust model is used.

| EOS | model | reference |
| --- | --- | --- |
| BPAL12 | N energy density functional | Bombaci (1995) |
| FPS | N energy density functional | Pandharipande & Ravenhall (1989) |
| GN3 | N relativistic mean field | Glendenning (1985) |
| DH | N energy density functional | Douchin & Haensel (2001) |
| APR | N variational theory \({}^{a}\) | Akmal et al. (1998) |
| BGN1H1 | NH energy density functional | Balberg & Gal (1997) |
| GNHQm2 | NH + mixed baryon-quark state | Glendenning (2000) |
| BBB2 | N Brueckner theory | Baldo et al. (1997) |
| GNH3 | NH relativistic mean field | Glendenning (1985) |
| GMGS-Km | N + mixed nucleon-kaon condensed \({}^{b}\) | Pons et al. (2000) |
| GMGS-Kp | N + pure kaon condensed \({}^{b}\) | Pons et al. (2000) |

\({}^{a}\) A18\(\delta\)+UIX\({}^{\star}\) model of Akmal et al. (1998). \({}^{b}\) GM+GS model with \(U_{K}^{\rm lin}=-130\) MeV.
This formula, obtained for the Schwarzschild metric, coincides with that for Newtonian gravity for a point mass \(M\). As one can see, it passes through (or extremely close to) the open circles denoting the (numerically calculated) actual mass-shedding (Keplerian) configurations. This is a quite remarkable property, in view of the rapid rotation and strong flattening of the neutron star at the mass-shedding point, visualized in Fig. 3.
Equation (3) implies a useful constraint for compact stars rotating at 1122 Hz:
\\[R_{\\rm max}=15.52\\ \\left(\\frac{M}{1.4\\ M_{\\odot}}\\right)^{1/3}\\ {\\rm km}. \\tag{4}\\]
## 5 Submillisecond pulsars
In this section we present results for compact stars rotating at sub-millisecond periods (supra-kHz frequencies) for a broad range of frequencies, 1000 - 1600 Hz. As in Sect. 4, calculations are performed for ten EOSs from Table 1. Mass vs equatorial radius relations for very fast rotating compact stars are presented in Fig. 5. The \(M(R_{\rm eq})\) curves become flatter and flatter as the rotational frequency increases. For \(f_{\rm rot}=1600\) Hz the curves \(M(R_{\rm eq})\) are almost horizontal. At that frequency _the mass for each EOS is quite well defined_. Moreover, curves for different EOSs are usually well separated. Both features are of great practical importance. In principle, they are very useful for selecting the "true EOS", provided one detects a very fast pulsar and is simultaneously able to measure its mass.
Figure 2: Gravitational mass, \(M\), vs. circumferential equatorial radius, \(R_{\rm eq}\), for neutron stars stably rotating at \(f=1122\) Hz, for ten EOSs (Fig. 1). Small-radius termination by a filled circle: setting-in of the instability with respect to axi-symmetric perturbations. Dotted segments to the left of the filled circles: configurations unstable with respect to those perturbations. Large-radius termination by an open circle: the mass-shedding instability. The mass-shedding points are very well fitted by the dashed curve \(R_{\rm max}=15.52\ (M/1.4M_{\odot})^{1/3}\) km. For further explanation see the text.

Figure 3: Cross section in the plane passing through the rotational axis of neutron stars rotating at the mass-shedding limit \(f_{\rm K}=1122\) Hz (i.e., with \(R_{\rm eq}=R_{\rm max}\)), for the BGN1H1 EOS (\(z>0\)) and the DH EOS (\(z<0\)). The coordinates \(x\) and \(z\) are defined as \(x=r{\rm sin}\theta{\rm cos}\phi\), \(z=r{\rm cos}\theta\), where \(r\) is the radial coordinate in the space-time metric. Dotted contour - crust-core boundary.

Figure 4: Cross section in the plane passing through the rotational axis of neutron stars rotating at 1122 Hz at the axi-symmetric instability limit (i.e., with \(R_{\rm eq}=R_{\rm min}\)), for the BGN1H1 EOS (\(z>0\)) and the DH EOS (\(z<0\)). Notations as in Fig. 3.

A dotted curve in every panel of Fig. 5 corresponds to the formula
\\[M=\\frac{4\\pi^{2}f^{2}}{G}R_{\\rm eq}^{3}\\, \\tag{5}\\]
used for a panel frequency (1000 Hz, 1400 Hz, 1600 Hz). Notice that Eq. (3) is a special case of Eq. (5). Equation (5) works very well in a very broad range of rotational frequencies and EOSs (recently this formula has been tested by Krastev et al. (2007) for the 716 Hz frequency of PSR J1748-2446ad).
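For a quick feel of the numbers, the sketch below (illustrative values only, not tied to any particular EOS) evaluates Eq. (5) at a fixed equatorial radius of 15 km for several rotation frequencies:

```python
import math

G, M_SUN = 6.674e-11, 1.989e30

def mass_shedding_msun(r_eq_km, f_hz):
    """Eq. (5): M = 4 pi^2 f^2 R_eq^3 / G, expressed in solar masses."""
    return 4.0 * math.pi ** 2 * f_hz ** 2 * (r_eq_km * 1.0e3) ** 3 / G / M_SUN

# Mass at the mass-shedding limit for a fixed 15 km equatorial radius:
for f in (716.0, 1000.0, 1122.0, 1400.0, 1600.0):
    print(f"f = {f:6.0f} Hz  ->  M = {mass_shedding_msun(15.0, f):.2f} Msun")
```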
## 6 Spin up by accretion
It is commonly believed that fast (millisecond) pulsars are recycled old neutron stars, spun up to kHz frequencies via long-term disk accretion in LMXBs. Such neutron stars have a weak magnetic field (\(B<10^{10}\) G), which does not affect the accretion flow. Therefore, the accretion disk extends down to the innermost stable circular orbit (ISCO).
In the present section we study some aspects of spin-up by accretion from the ISCO. We use the prescription given by Zdunik et al. (2002, 2005). The specific angular momentum per unit baryon mass of a matter element orbiting the neutron star at the ISCO, \(l_{{}_{\rm ISCO}}\), is calculated by solving exact equations of the orbital motion of a particle in the space-time produced by a rotating neutron star (Appendix A of Zdunik et al. 2002).
Consider accretion of an infinitesimal amount of baryon mass \\({\\rm d}M_{\\rm B}\\) onto a rotating neutron star. As the star is assumed to spin up in a quasi-stationary manner, accretion of \\({\\rm d}M_{\\rm B}\\) leads to a new rigidly rotating configuration of mass \\(M_{\\rm B}+{\\rm d}M_{\\rm B}\\) and angular momentum \\(J+{\\rm d}J\\), with
\\[{\\rm d}J=x_{l}l_{{}_{\\rm ISCO}}\\ {\\rm d}M_{\\rm B}. \\tag{6}\\]
Here, \\(x_{l}\\) denotes the fraction of the angular momentum of the matter element, transferred to the star. The remaining fraction, \\(1-x_{l}\\), is assumed to be lost via radiation or other dissipative processes. Let us consider two values of \\(x_{l}\\): \\(x_{l}=1\\) and \\(x_{l}=0.5\\). In Fig. 6 we plot the curves \\(M(R_{\\rm eq})\\), corresponding to the spin-up from \\(f=0\\) to (final) \\(f_{\\rm rot}=1400\\) Hz. Calculations are performed for the DH EOS. Point F on this curve corresponds to the onset of instability with respect to axi-symmetric oscillations (con
Figure 5: \\(M\\) vs \\(R_{\\rm eq}\\) for stably rotating sub-millisecond compact stars. Notations as in Fig. 2. To indentify a curve corresponding to a specific EOS from Table 1, one has to use the sequence of open circles (ordered in the same way as in this figure) at the mass-shedding limit in Fig. 2.
Figure 6: Mass vs radius relation for an accreting neutron star with DH EOS. The star spins-up from \\(f=0\\) to \\(f_{\\rm rot}=1400\\) Hz. For further explanations see the text.
dition given by Eq. (1)). Point E is the Keplerian configuration at frequency 1400 Hz, while point G corresponds to maximum mass along the curve with a fixed rotation frequency 1400 Hz.
Let us consider several spin-up tracks, all terminating on the 1400 Hz mass-radius line. The curves starting at points A, B, C and D are the tracks of accreting neutron stars defined by Eq. (6), for \(x_{l}=1\) (solid lines, cases C and D) and \(x_{l}=0.5\) (dotted lines, cases A and B). In order to reach the frequency of 1400 Hz, one has to start with a non-rotating neutron star located in the segment CD (if \(x_{l}=1\)), or AB (if \(x_{l}=0.5\)). As one can see, there are bounds on the initial mass of a non-rotating star which can be spun up to a given frequency \(f_{\rm rot}\) via disk accretion. For \(f_{\rm rot}=1400\) Hz and \(x_{l}=1\) the allowed mass range for the initial non-rotating star is \(1.7M_{\odot}<M_{i}<1.92M_{\odot}\).
In Fig. 7 we plot, for the DH EOS, the bounds on the initial mass of a non-rotating neutron star for which spin-up via accretion could reach a required rotation frequency. Results are presented for two values of \(x_{l}\). Shaded areas correspond to the allowed range of initial masses of the non-rotating star. The left "triangle" was obtained for \(x_{l}=0.5\), while the central shaded "triangle" (with points C and D) - for \(x_{l}=1\).
The limits on the actual mass of a rotating neutron star formed by disk accretion onto an initially static star are given by the three curves on the right in Fig. 7. These curves pass through the three points E, F, and G of Fig. 6. The curve passing through point E (green) is the bound resulting from the Keplerian limit of a rotating star. The (magenta) line passing through point F is the boundary resulting from the instability with respect to axi-symmetric perturbations. Finally, the curve passing through point G is the locus of maxima on the mass-radius curves at a fixed final rotation frequency: point G is obtained for 1400 Hz. As the frequency of the final configuration increases, the mass at the Keplerian limit increases more rapidly than that defined by the onset of the axi-symmetric instability at the maximum mass (at 1400 Hz: points F and G, respectively). For very high frequencies the maximum mass of the stars rotating at a fixed frequency is given by the value at the Keplerian limit. In the considered case of the DH EOS, point G disappears at a frequency \(\simeq 1500\) Hz (point G\({}_{\rm max}\)). For faster rotation the curve \(M(R_{\rm eq})\) is monotonic (no maximum between its two ends). However, the range of masses of stars rotating at such high frequencies is very narrow. For \(f_{\rm rot}>1400\) Hz it is smaller than \(0.1M_{\odot}\) (see also the discussion of Fig. 5 in Sect. 5).
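The prescription of Eq. (6) lends itself to a simple Euler integration. The snippet below is only a schematic toy model: it replaces the exact orbital calculation by the Schwarzschild ISCO value \(l=\sqrt{12}\,GM/c\), uses a constant, hypothetical moment of inertia, and neglects the binding energy (\(M\simeq M_{\rm B}\)); the actual tracks in Figs. 6 and 7 rely on the full 2-D stellar models.

```python
import math

G, C_LIGHT, M_SUN = 6.674e-11, 2.998e8, 1.989e30
I_STAR = 1.4e38   # hypothetical constant moment of inertia [kg m^2]

def l_isco(m_grav):
    """Specific angular momentum at the ISCO of a NON-rotating star,
    l = sqrt(12) G M / c (Schwarzschild value); the paper instead solves
    the exact orbital motion in the metric of the rotating star."""
    return math.sqrt(12.0) * G * m_grav / C_LIGHT

def spin_up(m0_msun, x_l, f_target_hz, dm_msun=1e-4):
    """Euler integration of Eq. (6), dJ = x_l * l_isco * dM_B, starting
    from a non-rotating star (and taking M_grav ~ M_B, no binding energy)."""
    m, j = m0_msun * M_SUN, 0.0
    while j / (2.0 * math.pi * I_STAR) < f_target_hz:   # f = J / (2 pi I)
        j += x_l * l_isco(m) * dm_msun * M_SUN          # Eq. (6)
        m += dm_msun * M_SUN
    return m / M_SUN

for x_l in (1.0, 0.5):
    print(f"x_l = {x_l}: toy final mass ~ {spin_up(1.7, x_l, 1400.0):.2f} Msun")
```

The toy model reproduces the qualitative behaviour - a larger \(x_{l}\) requires less accreted mass to reach a given frequency - but not the quantitative mass ranges quoted above, which require the full numerical models.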
## 7 Discussion and conclusions
Let us summarize the main results reported in the present paper, obtained using a large set of theoretical EOSs. The \(M(R_{\rm eq})\) curve for \(f\gtrsim 1400\) Hz (rotation period \(\lesssim 0.7\) ms) is flat. Therefore, at such frequencies, for any given EOS the mass of a stably rotating compact star is quite well defined. Conversely, a measured mass of a compact star rotating at \(f\gtrsim 1400\) Hz will allow us to unveil the actual EOS of dense matter. The "Newtonian" formula for the Keplerian frequency reproduces the precise 2-D simulations surprisingly well and sets a firm upper limit on \(R_{\rm eq}\) for a given \(f\). Finally, observation of \(f\gtrsim 1200\) Hz sets stringent limits on the initial mass of a non-rotating star spun up to this frequency via accretion from a disk.
Figure 7: (Color online) The bounds on the initial mass of a static neutron star (horizontal axis) to be spun up to a given final frequency (vertical axis). Calculations for the DH EOS. Labeled points (A, B, ...) correspond to the same points in Fig. 6. The triangular region on the left (red) corresponds to \(x_{l}=0.5\). The triangular region near the center (cyan) is obtained for \(x_{l}=1\). The non-shaded region on the right, composed of a tilted triangle and a lentil-like upper part, determines the range of allowable masses of final configurations rotating at a given frequency.

In the present paper we limited ourselves to _hadronic stars_, built exclusively or predominantly of hadrons. We did not discuss our results obtained for hypothetical compact stars built of self-bound quark matter, called _quark stars_ or _strange stars_. Some results for quark stars with and without crust were briefly reported in Bejger et al. (2007). Generally, most of the general features and relations obtained for hadronic stars hold also for quark stars. However, in some cases one notices systematic differences between rapidly rotating hadronic and quark stars. Our results will be described in a forthcoming publication.
Acknowledgments This work was partially supported by the Polish MNiSW grant no. N203.006.32/0450 and by the LEA Astrophysics Poland-France (Astro-PF) program. MB was also partially supported by the Marie Curie Intra-european Fellowship MEIF-CT-2005-023644.
## References
* Akmal et al. (1998) Akmal, A., Pandharipande, V.R., Ravenhall, D.G., 1998, Phys.Rev. C, 58, 1804
* Backer et al. (1982) Backer, D.C., Kulkarni, S.R., Heiles, C., et al., 1982, Nature, 300, 615
* Balberg & Gal (1997) Balberg, S., Gal, A., 1997, Nucl. Phys. A., 625, 435
* Baldo et al. (1997) Baldo, M., Bombaci, I., Burgio G.F. 1997, A & A, 328, 274
* Bejger et al. (2007) Bejger, M., Haensel, P., Zdunik, J. L., 2007, A&A, 464, 49
* Bombaci (1995) Bombaci, I., 1995, in: Perspectives on Theoretical Nuclear Physics, ed. by I. Bombaci, A. Bonaccorso, A. Fabrocini, et al. (Pisa: Edizioni ETS), p. 223
* Bonazzola et al. (1993) Bonazzola, S., Gourgoulhon, E., Salgado, M., Marck J.-A., 1993, A & A, 278, 421
* Cook et al. (1994a) Cook, G. B., Shapiro, S.L., Teukolsky, S.A., 1994a ApJ, 424, 823
* Douchin & Haensel (2001) Douchin, F., Haensel, P., 2001, A & A, 380, 151
* Glendenning (1985) Glendenning, N. K. 1985, ApJ, 293, 470
* Glendenning (2000) Glendenning, N. K., 2000, Compact Stars: Nuclear Physics, Particle Physics, and General Relativity, Springer, New York
* Glendenning & Moszkowski (1991) Glendenning, N. K., Moszkowski, S.A., 1991, Phys. Rev. Lett., 67, 2414
* Glendenning & Schaffner-Bielich (1999) Glendenning, N.K., Schaffner-Bielich, J., 1999, Phys. Rev. C, 60, 025803
* Haensel & Zdunik (1989) Haensel, P., Zdunik, J.L., 1989, Nature, 340, 617
* Haensel et al. (1999) Haensel, P., Lasota, J.-P., Zdunik, J.L., 1999, Astron. Astrophys., 344, 151
* Haensel et al. (2007) Haensel, P., Potekhin, A.Y., Yakovlev, D.G., 2007 Neutron Stars 1. Equation of State and Structure, (Springer, New York)
* Hessels et al. (2006) Hessels, J.W.T., Ransom, S.M., Stairs, I.H., Freire, P.C.C., Kaspi, V.M., Camilo, F., 2006, Science, 311, 1901
* Gourgoulhon et al. (1999) Gourgoulhon, E., Haensel, P., Livine, R., Paluch, E., Bonazzola, S., Marck, J.-A., 1999, A&A, 349, 851
* Kaaret et al. (2007) Kaaret, P., Prieskorn, Z., in't Zand, J.J.M., Brandt, S., Lund, N., Mereghetti, S., Goetz, D., Kuulkers, E., Tomsick, J.A., 2007, ApJ, 657, L97
* Krastev et al. (2007) Krastev, P.G., Li, B., Worley, A., 2007, arXiv:0709.3621
* Lasota et al. (1996) Lasota, J.-P., Haensel, P., Abramowicz, M.A., 1996, ApJ, 456, 300
* Pandharipande & Ravenhall (1989) Pandharipande, V.R., Ravenhall, D.G., 1989, in Proc. NATO Advanced Research Workshop on nuclear matter and heavy ion collisions, Les Houches, 1989, ed. M. Soyeur et al. (Plenum, New York, 1989), 103
* Pons et al. (2000) Pons, J.A., Reddy, S., Ellis, P.J., Prakash, M., Lattimer, J.M., 2000, Phys. Rev. C, 62, 035803
* Salgado et al. (1994) Salgado, M., Bonazzola, S., Gourgoulhon, E., Haensel, P., 1994, A & A 108, 455
* Shapiro et al. (1983) Shapiro, S.L., Teukolsky, S.A., Wasserman, I., 1983, ApJ, 272, 702
* Zdunik et al. (2002) Zdunik, J. L., Haensel, P., Gourgoulhon, E., 2002, A&A, 381, 933
* Zdunik et al. (2005) Zdunik, J. L., Haensel, P., Bejger, M., 2005, A&A, 441, 207

Fast rotation of compact stars (at submillisecond period) and, in particular, their stability, are sensitive to the equation of state (EOS) of dense matter. Recent observations of XTE J1739-285 suggest that it contains a neutron star rotating at 1122 Hz (Kaaret et al., 2007). At such a rotational frequency the effects of rotation on the star's structure are significant. We study the interplay of fast rotation, EOS, and gravitational mass of a submillisecond pulsar. We discuss the EOS dependence of spin-up to a submillisecond period via mass accretion from a disk in a low-mass X-ray binary.
keywords: Dense matter, Equation of state, stars: neutron, stars: rotation, Pulsars, Low-mass X-ray binaries. PACS: 26.60.+c, 97.60.Gb, 97.60.Jd, 97.80.Jd
# Soil Moisture Monitoring Using GNSS Reflected Signals
A. Egido
G. Ruffini
M. Caparrini
C. Martin
E. Farres
X. Banque
_Starlab Barcelona, Edifici de l'Observatori Fabra, Muntanya del Tibidabo, Cami de l'Observatori Fabra s/n - 08035 - Barcelona - Spain. Email: [email protected]_
GNSS-R studies and investigations have been mainly focused on sea surface topography. Within this frame, Starlab Barcelona has developed Oceanpal(r), a fully operational system that can provide GNSS-R data and higher level products, such as real time significant wave height (SWH) and altimetry data.
The application of GNSS-R to land remote sensing has been largely overlooked. Nevertheless, there is experimental evidence that GPS reflected signals from the ground can be detected and processed in order to obtain soil moisture estimates, [2][3][4].
The importance of soil moisture lies in the fact that it is a prime parameter of the surface hydrology cycle, which is one of the keys to understanding the interaction between continental surfaces and the atmosphere in environmental studies. Water storage in the soil, either in the surface layer or at deeper levels, affects not only the evapotranspiration but also the heat storage ability of the soil, its thermal conductivity, and the partitioning of energy between latent and sensible heat fluxes. In addition, the value of the surface-layer volumetric soil moisture governs direct evaporation from the soil and determines the possibility of surface runoff after rainfalls.
Despite the recognised relevance of soil moisture, providing such a parameter on global scales remains a significant challenge. Sensors based on GNSS-R offer this possibility and could represent a very important milestone in the development of a global soil moisture model. Starlab expects to develop an operational GNSS-R sensor oriented to soil moisture retrieval, which will be based on the Oceanpal(r) instrument's architecture. The present paper reviews the most important theoretical aspects to take into consideration for the development of a GNSS-R soil moisture sensor, and suggests how the use of the forthcoming Galileo signals might help in this task.
## 2 Soil Moisture Estimation with GNSS Signals
The basis for the retrieval of soil moisture with GNSS-R systems lays in the variability of the ground dielectric properties associated to soil moisture. Higher concentrations of water in the soil yield a higher dielectric constant and reflectivity. Consequently, the reflected signal's peak power can be related to soil moisture.
Previous investigations [2-7] have demonstrated the capability of GPS bistatic scatterometers to obtain signal-to-noise ratios high enough to sense small changes in surface reflectivity. Furthermore, these systems present some advantages with respect to those currently used to retrieve soil moisture. First, GPS signals lie in L band, which is the most sensitive band for soil moisture microwave remote sensing. Secondly, in contrast to microwave radiometry, variations in the thermal background do not dramatically contaminate the GPS reflected signals. As will be seen below, the thermal background influences soil moisture observables, but this effect is not as important as for microwave radiometry. Thirdly, GPS scatterometry from space has a potentially higher spatial resolution than microwave radiometry, due to the highly stable carrier and code modulations of the incident signals, which enables the use of Delay Doppler mapping.
Nevertheless, in order to obtain precise soil moisture estimates there are several phenomena that need to be taken into consideration, mainly the effects of diffuse scattering over the soil surface: soil roughness and vegetation canopy.
## 3 The Oceanpal(r) Instrument
Oceanpal(r) is a GNSS-R based sensor designed for operational coastal monitoring. It is an inexpensive, all-weather, dry and passive instrument which can be deployed on multiple platforms, static (coasts, harbours, off-shore), and slowly moving (boats, floating platforms, buoys). In its present form, Oceanpal(r) can deliver two kinds of Level-2 products: sea-surface height and significant wave height (SWH). However, due to its flexibility in terms of data acquisition, the Oceanpal(r) instrument can be applied to the retrieval of soil moisture in a straightforward way.
### Instrument's Architecture
Oceanpal(r) comprises three subsystems: a radio frequency (RF) section, an intermediate frequency section and a data processing section. The basic system architecture is illustrated in Fig. 1. The RF section features a pair of low gain L-band antennas. An RHCP (Right-hand circular polarized) zenith antenna collects the direct GNSS signals while an LHCP (left-hand circular polarized) nadir antenna collects the sea-surface reflected GNSS signals. Data bursts of some minutes are acquired from each channel using two radio frequency front-ends that down-convert the signal to intermediate frequency (IF). The acquisition time is a parameter that can be specified by the user.
Within the IF section, the signal is one-bit sampled and stored on a hard disk. After the acquisition process, these direct and reflected raw data are fed into the processing section of the instrument, where a pair of software GNSS receivers detects and tracks the available signals in the direct channel (which works as master) and blindly despreads the reflected signals in the reflected/slave channel. The result of this processing is a set of direct and reflected electromagnetic field time series (complex waveforms) for each satellite in view, plus some ancillary information. The complex waveforms are then used to produce higher level products by the data processing algorithms.
It must be noted at this point that Oceanpal(r) is currently a GPS-based instrument. However, given the interoperability of the GPS and Galileo L1 signals and the fact that Oceanpal(r) is implemented as a software receiver (after the digitization of the signal), the evolution of the system towards a combined GPS and Galileo instrument is relatively easy. Starlab Barcelona expects to have this GNSS-R instrument working by the beginning of 2008.
Figure 1: Oceanpal basic setup.
### The Interferometric Complex Field
The fundamental product of the instrument data processing chain is the so-called Interferometric Complex Field (ICF), which is a time series calculated as the ratio between the reflected and direct waveform peaks:
\\[ICF(t)=\\frac{p_{R}(t)}{p_{D}(t)} \\tag{1}\\]
where \\(p_{R}\\)_and_\\(p_{D}\\) represent the time series of waveform peaks for the reflected and direct signals, respectively. The ICF is the basis for the algorithms used to calculate different higher order level products, and represents also the fundamental magnitude for soil moisture estimation.
## 4 Soil Moisture Estimation using Oceanpal(r)
In the previous section, the interferometric complex field has been defined. It must be noted in (1) that the squared absolute value of the ICF can be considered as the peak power ratio of the reflected and the direct signal waveforms in the lapse of time t. This means that it represents a measure of the surface reflectivity, which in turn, as mentioned in section 2, can be related to the soil moisture volumetric content.
However, there are several parameters such as surface roughness, vegetation canopy, and thermal background, which affect the determination of soil moisture. Their effects are reviewed more in depth in the next section. Considering, as a first approximation, that the only parameter affecting the reflected signal is the soil reflectivity, the following incoherent averaging can be performed:
\\[\\Gamma_{av}=\\frac{1}{N}\\sum_{i=1}^{N}\\left|ICF(t)\\right|^{2} \\tag{2}\\]
where N is the number of waveforms computed during one data acquisition. With (2), an averaged value of the soil reflectivity \(\Gamma_{av}\) is obtained for the whole acquisition time span, typically one minute.
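As an illustration of Eqs. (1) and (2), the following sketch (synthetic data; function and variable names are ours) builds an ICF time series from speckle-affected waveform peaks and recovers the underlying reflectivity by incoherent averaging:

```python
import numpy as np

def icf(p_refl, p_dir):
    """Eq. (1): interferometric complex field time series."""
    return p_refl / p_dir

def averaged_reflectivity(icf_series):
    """Eq. (2): incoherent average of |ICF|^2 over one acquisition."""
    return float(np.mean(np.abs(icf_series) ** 2))

# Synthetic acquisition: 60 waveform peaks, direct channel of unit peak,
# reflected peaks drawn as Rayleigh-fading amplitudes with mean power 0.3
rng = np.random.default_rng(1)
n = 60
p_dir = np.ones(n, dtype=complex)
z = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2.0)
p_refl = np.sqrt(0.3) * z                  # speckle-affected reflected peaks
print(f"Gamma_av ~ {averaged_reflectivity(icf(p_refl, p_dir)):.3f}")
# close to the underlying reflectivity 0.3, within the speckle noise
```

From the Fresnel equations of reflection, the reflection coefficients for vertical and horizontal polarization can be obtained: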
\\[\\Gamma_{v}=\\frac{\\mathcal{E}_{r}\\sin\\gamma-\\sqrt{\\mathcal{E}_{r}-\\cos^{2} \\gamma}}{\\mathcal{E}_{r}\\sin\\gamma+\\sqrt{\\mathcal{E}_{r}-\\cos^{2}\\gamma}} \\tag{3}\\]
and
\\[\\Gamma_{h}=\\frac{\\sin\\gamma-\\sqrt{\\mathcal{E}_{r}-\\cos^{2}\\gamma}}{\\sin\\gamma+ \\sqrt{\\mathcal{E}_{r}-\\cos^{2}\\gamma}} \\tag{4}\\]
where \\(\\gamma\\)is the incidence angle. The GPS signals is mostly right hand circular polarized (RHCP), which means that it presents a vertical and a horizontal polarization component. However, for high incidence angles, e.g. above 60\\({}^{\\text{o}}\\), the difference between the reflections coefficients for vertical and horizontal polarization can be considered negligible, in a first approach, and therefore, just the reflection coefficient for vertical polarization can be taken into account. As noted in [4], the error in using just the vertical value is 5% of the reflectivity at the worst incidence angle. Identifying \\(\\mathcal{E}_{r}\\) with \\(\\mathcal{E}_{soil}\\) and solving the previous equation for the permittivity one can show that
\\[\\mathcal{E}_{soil}=\\frac{1\\pm\\sqrt{1-4\\sin^{2}\\gamma\\cdot\\cos^{2}\\gamma\\cdot \\left(\\frac{1-\\Gamma}{1+\\Gamma}\\right)^{2}}}{2\\sin^{2}\\gamma\\cdot\\left(\\frac{1 -\\Gamma}{1+\\Gamma}\\right)^{2}} \\tag{5}\\]In order to relate the soil permittivity to soil moisture a semi-empirical model presented in [11] can be used. The authors suggest that the polynomial that describes the relationship between soil moisture and dielectric constant for a frequency around 1.4 GHz is given by
\[\varepsilon_{soil}=2.862-0.012S+0.001C+\left(3.803+0.462S-0.341C\right)m_{v}+\left(119.003-0.500S+0.633C\right)m_{v}^{2} \tag{6}\]
where S and C are the sand and clay textural compositions of a soil in percent by weight, and \\(m_{v}\\) is the volumetric soil moisture. This model has been used as a semi-empirical approach in [5] with acceptable results.
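A minimal sketch of this inversion chain is given below, assuming that \(\Gamma\) in Eq. (5) denotes the (amplitude) Fresnel reflection coefficient of Eq. (3); the quadratic inversion of the polynomial (6) for \(m_{v}\) is ours, not taken from [11]:

```python
import math

def permittivity_from_reflectivity(refl_coef, gamma_deg):
    """Eq. (5): soil relative permittivity from the Fresnel (amplitude)
    reflection coefficient Gamma, with gamma entering via sin(gamma)
    exactly as in Eq. (3); the '+' root is the physical one."""
    g = math.radians(gamma_deg)
    k2 = ((1.0 - refl_coef) / (1.0 + refl_coef)) ** 2
    disc = math.sqrt(1.0 - 4.0 * math.sin(g) ** 2 * math.cos(g) ** 2 * k2)
    return (1.0 + disc) / (2.0 * math.sin(g) ** 2 * k2)

def soil_moisture(eps, sand, clay):
    """Invert the 1.4 GHz polynomial of Eq. (6) for m_v by solving the
    quadratic c*m^2 + b*m + (a - eps) = 0 (positive root)."""
    a = 2.862 - 0.012 * sand + 0.001 * clay
    b = 3.803 + 0.462 * sand - 0.341 * clay
    c = 119.003 - 0.500 * sand + 0.633 * clay
    return (-b + math.sqrt(b * b + 4.0 * c * (eps - a))) / (2.0 * c)

# Example: Gamma_v = 0.57 at gamma = 70 deg, soil with 30% sand, 20% clay
eps = permittivity_from_reflectivity(0.57, 70.0)
print(f"eps_soil ~ {eps:.1f}, m_v ~ {soil_moisture(eps, 30.0, 20.0):.2f}")
```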
Another, partially different, approach to determining the complex permittivity of the soil using reflected navigation signals has been proposed in [2][12]. The main difference in this approach is the fact that only one antenna is used to collect both the direct and the reflected signal, thus obtaining an interferometric field as the sum of the two EM waves.
Other, more complicated and accurate, models than those presented above have also been used. For instance, in [6] a model based on the Kirchhoff Approximation and Geometric Optics is adopted. This model has been successfully applied in modelling various GNSS bistatic radar scenarios (see for instance [7], which is the most commonly used model for GNSS scattering from ocean surfaces).
## 5 Constraints in the Determination of Soil Moisture
As mentioned in the previous section, the estimation of soil moisture with L band signals is affected by several ancillary phenomena that distort the interferometric complex field and bias the measurements of the soil reflectivity. The present section reviews the most important ones and their effects in the scattered signals.
### Surface Roughness and Vegetation Canopy
Under the assumption of a flat scattering surface, the GPS signal emitted by a certain space vehicle would be reflected basically from a specular point on the surface (more precisely, from the first Fresnel zone [8]). The specular point determines the shortest path between the emitter and the receiver through the reflecting surface. However, if a rough surface is considered, then according to the Geometrical Optics model, slopes may exist with the proper orientation to redirect the incoming radiation to the receiver antenna from sites away from the specular point, as depicted in Fig. 1. The locus of points on the surface from which the reflected signal arrives at the same delay, with respect to the specular point, is given by ellipses which are called iso-delay lines. The signals will also be affected by Doppler shifts due to the changing geometry of the scenario and the relative motion of emitter and receiver; however, this effect will not be considered for the moment for soil moisture retrieval. Note that this approximation is perfectly valid in the case of a fixed receiver.
The signal power at the receiver at any delay results from summing the reflected field contributions from each individual scatterer within the corresponding iso-delay ellipse. Assuming that natural scenes are composed of independently phased scatterers, the resulting composite signal is stochastic with a Rayleigh distribution (i.e., affected by speckle [8]). Note that the signal strength received from an individual scatterer depends on the reflection coefficient of each surface element, as well as on the incidence angle. In addition, further scattering and attenuation occur when the signal path includes vegetation canopy.
GNSS receivers rely on the spread-spectrum properties of the pseudo-random noise (PRN) code, which modulates the carriers, in order to track the signals. The tracking is performed through the correlation of the received signal with a clean replica of itself (matched filtering), obtaining a waveform whose shape resembles, in an average sense, the autocorrelation of the original PRN code. However, for the case of the reflected signal, in addition to the delay introduced as a consequence of a longer signal path, the waveform is in general distorted and its peak power diminished due to the scattering process. In Fig. 2, ideal waveforms of the direct and reflected signals are sketched.
As mentioned before, the peak power of the reflected signal waveform depends on the surface reflectivity and therefore can be related to soil moisture. However, since the reflected signal becomes a stochastic process through the scattering process, measuring the peak of the waveform is not a straightforward task.
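The matched-filtering step can be illustrated with a toy model, in which a random \(\pm 1\) chip sequence stands in for a real PRN code (illustrative only; real receivers use the actual Gold codes and track carrier and Doppler as well):

```python
import numpy as np

rng = np.random.default_rng(2)
code = rng.choice([-1.0, 1.0], size=1023)      # PRN-like +/-1 chip sequence

# Received: delayed and attenuated replica of the code, plus thermal noise
true_delay, amplitude = 137, 0.5
received = amplitude * np.roll(code, true_delay) \
           + rng.normal(0.0, 1.0, code.size)

# Matched filter: circular cross-correlation with the clean local replica
waveform = np.array(
    [np.dot(received, np.roll(code, d)) for d in range(code.size)]
) / code.size

peak = int(np.argmax(np.abs(waveform)))
print(f"peak at lag {peak} (true delay {true_delay}), "
      f"peak value {waveform[peak]:.2f} (true amplitude {amplitude})")
```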
The effect of vegetation canopy on the scattering of GNSS signals is a very important factor to take into consideration [4]. Specifically, for soil moisture remote sensing, vegetation is often modelled separately from the bare soil surface as a signal attenuation which is proportional to the vegetation water content [9].
### System Noise
Back to the assumption of a perfectly smooth reflecting surface: the only changes in the reflected signal with respect to the direct one would be in the amplitude, decreased by the reflectivity of the surface, and a phase shift. Thus, the peak power of the reflected signal normalized by the peak power of the direct signal provides an observable that is proportional to the soil's reflectivity. However, the only direct observable of GPS signals is the waveform that results from the cross-correlation of the incoming signal with the locally generated PRN code, as explained above.
It can be demonstrated, although it is not within the scope of this review document, that the waveform peak powers of the direct and reflected signals are proportional to the signal-to-noise ratio of the received signal. In the realistic case of a rough scattering surface, the reflected signal is affected by additive Gaussian white noise of thermal origin, plus speckle. As a consequence of the speckle noise, the waveform shape is distorted and the peak power fluctuates due to fading effects. Thus, estimation of the peak power requires a certain amount of incoherent averaging in order to reduce uncertainty in the measurements. The equation that models the mean complex waveform power can be approximated as
\\[\\left\\langle\\left|\\widehat{C}_{k}\\left(\\tau\\right)\\right|^{2}\\right\\rangle=s^ {2}\\Lambda^{2}\\left(\\tau\\right)+f\\left(T,\\tau_{n}\\right) \\tag{7}\\]
where \\(s\\) is the signal SNR, \\(\\Lambda\\left(\\tau\\right)\\) indicates the triangle correlation function of the PRN code, and \\(f\\) is a term which is function of the coherent integration time and the noise coherence time. The caret denotes independent stochastic variables that have some probability distribution function (a combination of Gaussian thermal noise and Rayleighspeckle). The lag variable \\(\\tau\\) identifies different variables. It can be inferred from the previous relation that the soil moisture observable will be affected by the variations of thermal background through variations in the signal to noise ratio, which will need to be accounted for in the inversion process for soil moisture estimates.
Another significant effect of the system noise is the limit it places on the precision with which the waveform peak power can be determined. The minimum variance associated with an estimator is defined by the Cramér-Rao lower bound. The Cramér-Rao lower bound on the variance \(\sigma_{\alpha_{p}}^{2}\) of an estimate of the peak power \(\alpha_{p}\) is given by [10]

\[\sigma_{\alpha_{p}}^{2}=\frac{\alpha_{p}^{2}}{N\,\Upsilon} \tag{8}\]
where N is the number of independent samples averaged and \(\Upsilon\) is the detected signal energy-to-noise ratio. In order to increase the signal energy-to-noise ratio, longer integration times should be used when performing the correlation of the incoming signal with the clean replica. However, the coherent integration time has a natural practical upper limit, which for GPS signals corresponds to the duration of a navigation bit, e.g., 20 ms for GPS C/A. In the case of Galileo signals, the existence of pilot signals which are not modulated by a navigation message eliminates the upper-limit restriction on the integration time. Therefore the variance of the waveform peak power estimation should be significantly reduced by using these new signals.
The other way to further reduce the uncertainty in the final estimation is by averaging independent samples. Independence of observations implies that the phases of the scatterers are uncorrelated in each data set to be averaged. This independence translates into a sufficient change of the geometry of the reflection in subsequent data takes. Considering a ground-based instrument, since the soil surface is static, the independence condition is fulfilled by allowing enough time to pass between consecutive samples, so that the geometry of the observations is sufficiently different because of the movement of the transmitting GPS space vehicles. The fact that a GNSS-R based instrument (such as Oceanpal(r)) is capable of simultaneously tracking and processing signals from several satellites provides additional independent measures of the scattering surface, which will also contribute to reduce the variance of the estimation.
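A quick Monte Carlo illustration of this point (speckle only, no thermal noise; values are illustrative) shows the standard deviation of the averaged peak-power estimate decreasing as \(1/\sqrt{N}\), consistent with Eq. (8):

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 2000
for n in (1, 10, 100, 1000):
    # n independent Rayleigh-speckle power samples per estimate, unit mean
    z = rng.normal(size=(trials, n)) + 1j * rng.normal(size=(trials, n))
    power = np.abs(z / np.sqrt(2.0)) ** 2
    estimates = power.mean(axis=1)
    print(f"N = {n:4d}: std of averaged power estimate ~ {estimates.std():.3f}")
# the spread falls off as 1/sqrt(N), as implied by Eq. (8)
```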
It is important to stress that, in the approach to soil moisture estimation presented here, variations in the ICF caused by different thermal background fluctuations in the direct and reflected signals, as well as by any mismatch between the direct and reflected receiving chains, have to be accounted for, since they directly impact the magnitude used for the estimation. Similarly, since most GNSS-R systems use one zenith and one nadir antenna, a temperature gradient in the system is likely to occur, which can result in an additional difference between the noise affecting the direct and the reflected receiving chains. Hence, both receiving chains will need to be calibrated to perform effective soil moisture estimation.
## 6 Conclusions
The most important aspects of and constraints on soil moisture retrieval with a GNSS-R based instrument, as well as a simple first-approach scattering model, have been reviewed in this article. In addition, a GPS-R instrument by Starlab (Oceanpal(r)), suitable for soil moisture retrieval, has been presented. The state of the art does not yet provide the capability to perform GNSS-R soil moisture remote sensing in an accurate and precise way. Further investigations are needed to relate more precisely the effects of soil moisture to the GNSS bistatic scattering process; differences between vertical and horizontal reflection coefficients will need to be accounted for, as well as diffuse scattering effects due to surface roughness, and adverse effects caused by temperature variations and vegetation canopy should also be included in forthcoming scattering models. Extensive validation and calibration campaigns will need to be performed, so that soil moisture estimates can be compared and related to in-situ soil moisture measurements with precise knowledge of surface roughness and vegetation canopy.
Concerning the use of Galileo: whereas for other remote sensing applications it has been proven to represent a big step forward (for example in ocean mesoscale altimetry [13]), with respect to soil moisture estimation the specific contribution of these new signals seems, at the moment, to be limited to the possibility of longer integration times and longer codes, for noise reduction, and to the use of more frequencies and satellites to increase the number of measurements.
Notwithstanding the highlighted difficulties, soil moisture remote sensing with GNSS-R remains an interesting goal, taking into account the huge advance it would represent in obtaining global estimations of such an important parameter for the hydrologic cycle.
## References
* [1] M.Martin-Neira. A Passive reflectometry and interferometry system (PARIS): Application to ocean altimetry. _ESA J._, 17:331-355, 1993
* [2] A. Kavak, G. Xu, W.J. Vogel, GPS Multipath Fade Measurements to Determine L-Band Ground Reflectivity Properties, University of Texas, Austin, 1998.
* [3] D. Masters, V. Zavorotny, S. Katzberg, W. Emery, GPS Signal Scattering from Land for Moisture Content Determination, _IGARSS Proceedings_, July 24-28, 2000.
* [4] S. J. Katzberg, O. Torres, M. S. Grant, and D. Masters, \"Utilizing calibrated GPS reflected signals to estimate soil reflectivity and dielectric constant: Results from SMEX02,\" _Remote Sens. Environ._, vol. 100, no. 1, pp. 17-28, Jan. 2006.
* [5] O. Torres, Analysis of Reflected Global Positioning System Signals as a Method for the Determination of Soil Moisture, Master's Thesis.
* [6] D. Masters, Surface Remote Sensing Applications of GNSS Bistatic Radar: Soil Moisture and Aircraft Altimetry.
* [7] V.U. Zavorotny, A.G. Voronovich, \"Bistatic GPS Signal Reflections at Various Polarizations from Rough Land Surface with Moisture Content\". Proc. IEEE Int. Geoscience Remote Sensing Symp., volume 7, 2000
* [8] P. Beckman and A. Spizzichino. _The scattering of Electromagnetic Waves from Rough Surfaces_. Artech House, Norwood, MA, 1963.
* [9] F. Ulaby, R. Moore, A. Fung. _Microwave Remote Sensing_. Artech House, 1986.
* [10] P. Peebles. _Radar Principles_. John Wiley & Sons, Inc, 1998.
* [11] Hallikainen and Ulaby, Microwave Dielectric Behavior of Wet Soil-Part 1: Empirical Models and Experimental Observations, _IEEE Transactions on Geoscience and Remote Sensing_ Vol. GE-23, No. 1, pp. 25-34, Jan. 1985.
* [12] Kavak, A., Vogel, W.J., Guanghan, X., Using GPS to measure ground complex permittivity, _Electronic Letters_, vol 34, no. 3, p. 254, Feb 5th 1998.
* [13] O. Germain, G. Ruffini, A revisit to the GNSS-R code range precision, _Proceedings of the GNSS-R '06 Workshop_, 14-15 June 2006, ESA/ESTEC, Noordwijk, The Netherlands.
# Sampling Spatially Correlated Clutter
Oscar H. Bustos
Facultad de Matematica Astronomia y Fisica, Universidad Nacional de Cordoba, Ing. Medina Allende esq. Haya de la Torre, 5000 Cordoba, Argentina, Fax: 54-351-4334054, {bustos,flesia}@mate.uncor.edu
Ana Georgina Flesia
Facultad de Matematica Astronomia y Fisica, Universidad Nacional de Cordoba, Ing. Medina Allende esq. Haya de la Torre, 5000 Cordoba, Argentina, Fax: 54-351-4334054, {bustos,flesia}@mate.uncor.edu
Alejandro C. Frery
Universidade Federal de Alagoas, Instituto de Computacao, Campus A. C. Simes, BR 104 - Norte, Km 97, Tableiro, dos Martins - Maceio - AL, CEP 57072-970. [email protected]
Maria Magdalena Lucini
Universidad Nacional de Nordeste, Facultad de Ciencias Exactas, Naturales y Agrimensura, Av. Libertad 5450 - Campus \"Deodoro Roca\", (3400) Corrientes, Tel: +54 (3783) 473931/473932 [email protected]
**Keywords:** image modeling, simulation, spatial correlation, speckle.
## 1 Introduction
The demand for exhaustive and controlled clutter measurements in all scenarios would be alleviated if plausible data could be obtained by computer simulation. Clutter simulation is an important element in the development of target detection algorithms for radar, sonar, ultrasound and laser imaging systems. Using simulated data, the accuracy of clutter models may be assessed and the performance of target detection algorithms may be quantified with controlled clutter backgrounds. This article is concerned with the simulation of random clutter having appropriate first- and second-order statistical properties.
The use of correlation in clutter models is significant and relevant since the correlation effects within the clutter often dominate system performance. Models merely based on single-point statistics could, therefore, produce misleading results, and several commonly used forms for clutter statistics fall into this category.
The statistical properties of heterogeneous clutter returned by Synthetic Aperture Radar (SAR) sensors have been extensively investigated in the literature. A theoretical model widely adopted for these images assumes that the value in every pixel is the observation of an uncorrelated stochastic process \(Z_{A}\), characterized by single-point (first order) statistics. A general agreement has been reached that amplitude fields are well explained by the \(\mathcal{K}_{A}\) distribution. Such distribution arises when coherent radiation is scattered by a surface having Gamma-distributed cross-section fluctuations. Though agricultural fields and woodland are very well fitted by this distribution, it is also known that it fails to give an accurate statistical description of extremely heterogeneous data, such as urban areas and forest growing on undulated relief.
As discussed in [1, 2], another distribution, the \\(\\mathcal{G}_{A}\\) law, can be used to describe those extremely heterogeneous regions, with the advantage that it has the \\(\\mathcal{K}_{A}\\) distribution as a particular case. This distribution arises in all coherent imaging applications as a result of the action of multiplicative speckle noise on an underlying square root of a generalized inverse Gaussian distribution. The main drawback of this general model is that it requires an extra parameter, besides its theoretical complexity.
Nevertheless, it can be seen in [3, 4, 5] that a special case of the \\(\\mathcal{G}_{A}\\) distribution, namely the \\(\\mathcal{G}_{A}^{0}\\) law, which has as many parameters as the \\(\\mathcal{K}_{A}\\) distribution, is able to model with accuracy every type of clutter. As a consequence, efforts have been directed toward the simulation of \\(\\mathcal{G}_{A}^{0}\\) textures, but no exact method for generating patterns with arbitrary spatial autocorrelation functions has been envisaged so far, in spite of it being more tractable than the \\(\\mathcal{K}_{A}\\) distribution.
As previously stated, spatial correlation is needed in order to increase the adequacy of the model to real situations. This paper tackles the problem of simulating correlated \\(\\mathcal{G}_{A}^{0}\\) fields.
## 2 Correlated \\(\\mathcal{G}_{A}^{0}\\) clutter
The main properties and definitions of the \\(\\mathcal{G}_{A}^{0}\\) clutter are presented in this section, starting with the first order properties of the distribution and concluding with the definition of a \\(\\mathcal{G}_{A}^{0}\\) stochastic process that will describe \\(Z_{A}\\) fields.
### Marginal properties
The \\(\\mathcal{G}_{A}^{0}(\\alpha,\\gamma,n)\\) distribution is characterized by the following probability density function:
\\[f_{Z_{A}}(z,(\\alpha,\\gamma,n))=\\frac{2n^{n}\\Gamma(n-\\alpha)}{\\sqrt{\\gamma} \\Gamma(-\\alpha)\\Gamma(n)}\\cdot\\frac{\\left(\\frac{z}{\\sqrt{\\gamma}}\\right)^{2n- 1}}{\\left(1+\\frac{z^{2}}{\\gamma}n\\right)^{n-\\alpha}}\\cdot\\mathbb{I}_{(0,+ \\infty)}(z),\\quad\\alpha<0,\\gamma>0, \\tag{1}\\]
being \\(n\\geq 1\\) the number of looks of the image, which is controlled at the image generation process, and \\(\\mathbb{I}_{T}(\\cdot)\\) the indicator function of the set \\(T\\). The parameter \\(\\alpha\\) describes the roughness, being small values (say \\(\\alpha\\leq-15\\)) usually associated to homogeneous targets, like pasture, values ranging in the \\((-15,-5]\\) interval usually observedin heterogeneous clutter, like forests, and big values (\\(-5<\\alpha<0\\) for instance) commonly seen when extremely heterogeneous areas are imaged. The parameter \\(\\gamma\\) is related to the scale, in the sense that if \\(Z\\) is \\(\\mathcal{G}_{A}^{0}(\\alpha,1,n)\\) distributed then \\(Z_{A}=\\sqrt{\\gamma}Z\\) obeys a \\(\\mathcal{G}_{A}^{0}(\\alpha,\\gamma,n)\\) law.
A SAR image over a suburban area of München, Germany, is shown in Figure 1. It was obtained with E-SAR, an experimental polarimetric airborne sensor operated by the German Aerospace Agency (Deutsches Zentrum für Luft- und Raumfahrt - DLR e. V.). The data shown here were generated in single-look format, and exhibit the three discussed types of roughness: homogeneous (the dark areas to the middle of the image), heterogeneous (the clear area to the left) and extremely heterogeneous (the clear area to the right).
Figure 1: E-SAR image showing three types of texture.
The \\(r\\)-th moments of the \\({\\cal G}^{0}_{A}(\\alpha,\\gamma,n)\\) distribution are
\\[E(Z^{r}_{A})=\\left(\\frac{\\gamma}{n}\\right)^{\\frac{r}{2}}\\frac{\\Gamma(-\\alpha- \\frac{r}{2})\\Gamma(n+\\frac{r}{2})}{\\Gamma(-\\alpha)\\Gamma(n)},\\qquad\\alpha<-r/2,n\\geq 1, \\tag{2}\\]
when \\(-r/2\\leq\\alpha<0\\) the \\(r\\)-th order moment is infinite. Using equation (2) the mean and variance of a \\({\\cal G}^{0}_{A}(\\alpha,\\gamma,n)\\) distributed random variable can be computed:
\\[\\mu_{Z_{A}} =\\sqrt{\\frac{\\gamma}{n}}\\frac{\\Gamma(n+\\frac{1}{2})\\Gamma(-\\alpha -\\frac{1}{2})}{\\Gamma(n)\\Gamma(-\\alpha)},\\] \\[\\sigma^{2}_{Z_{A}} =\\frac{\\gamma\\left[n\\Gamma^{2}(n)(-\\alpha-1)\\Gamma^{2}(-\\alpha- 1)-\\Gamma^{2}(n+\\frac{1}{2})\\Gamma^{2}(-\\alpha-\\frac{1}{2})\\right]}{n\\Gamma^ {2}(n)\\Gamma^{2}(-\\alpha)}.\\]
Figure 2 shows three densities of the \({\cal G}^{0}_{A}(\alpha,\gamma,n)\) distribution for the single look (\(n=1\)) case. These densities are normalized so that the expected value is 1 for every value of the roughness parameter. This is obtained using equation (2) to set the scale parameter \(\gamma=\gamma_{\alpha,n}=n\left(\Gamma(-\alpha)\Gamma(n)/\left(\Gamma(-\alpha-1/2)\Gamma(n+1/2)\right)\right)^{2}\). These densities illustrate the three typical situations described above: homogeneous areas (\(\alpha=-15\), dashes), heterogeneous clutter (\(\alpha=-5\), dots) and an extremely heterogeneous target (\(\alpha=-1.5\), solid line).
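For concreteness, a minimal Python sketch of equation (1) and of this unit-mean normalization is given below; the function names are ours and SciPy is assumed to be available:

```python
import numpy as np
from scipy.special import gammaln

def log_ga0_pdf(z, alpha, gamma, n):
    """Log of the G_A^0(alpha, gamma, n) density of equation (1), z > 0."""
    return (np.log(2.0) + n * np.log(n) + gammaln(n - alpha)
            - 0.5 * np.log(gamma) - gammaln(-alpha) - gammaln(n)
            + (2.0 * n - 1.0) * np.log(z / np.sqrt(gamma))
            - (n - alpha) * np.log1p(n * z ** 2 / gamma))

def unit_mean_gamma(alpha, n):
    """Scale gamma_{alpha,n} that makes E(Z_A) = 1 (equation (2) with r = 1)."""
    log_ratio = gammaln(-alpha) + gammaln(n) - gammaln(-alpha - 0.5) - gammaln(n + 0.5)
    return n * np.exp(2.0 * log_ratio)

z = np.linspace(0.01, 4.0, 400)
for alpha in (-15.0, -5.0, -1.5):          # the three roughness regimes
    pdf = np.exp(log_ga0_pdf(z, alpha, unit_mean_gamma(alpha, 1), 1))
```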
Following Barndorff-Nielsen and Blæsild [6], it is interesting to see these densities as log probability functions, particularly because the \({\cal G}^{0}_{A}\) is closely related to the class of Hyperbolic distributions [7]. Figure 3 shows the densities of the \(\mathcal{G}^{0}_{A}(-3,1,1)\) and \(\mathcal{N}(3\pi/16,1/2-9\pi^{2}/256)\) distributions in semilogarithmic scale, along with their mean value \(\mu=3\pi/16\). The parameters were chosen so that these distributions have equal mean and variance. The different decay rates of their tails are evident: the former behaves logarithmically, while the latter decays quadratically. This behavior ensures the ability of the \(\mathcal{G}^{0}_{A}\) distribution to model data with extreme variability.
Besides being essential for the simulation technique here proposed, cumulative distribution functions are needed for carrying out goodness of fit tests and for the proposal of estimators based on order statistics. It can be seen in [3, 8, 9] that the cumulative distribution function of a \\(\\mathcal{G}^{0}_{A}(\\alpha,\\gamma,n)\\) distributed random variable is given, for every \\(z>0\\), by \\(G(z,(\\alpha,\\gamma,n))=\\Upsilon_{2n,-2\\alpha}(-\\alpha z^{2}/\\gamma)\\), where \\(\\Upsilon_{s,t}\\) is the cumulative distribution function of a Snedecor's \\(F_{s,t}\\) distributed random variable with \\(s\\) and \\(t\\) degrees of freedom. Both \\(\\Upsilon_{\\cdot,\\cdot}\\) and \\(\\Upsilon_{\\cdot,\\cdot}^{-1}\\) are readily available in most platforms for computational statistics.
The single look case is of particular interest since it describes the noisiest images and it exhibits nice analytical properties. The distribution is characterized by the density \(f(z;\alpha,\gamma,1)=-\frac{2\alpha}{\gamma^{\alpha}}\,z(\gamma+z^{2})^{\alpha-1}\mathbb{I}_{(0,\infty)}(z)\), with \(-\alpha,\gamma>0\). Its cumulative distribution function is given by \(F(t)=1-\left(1+t^{2}/\gamma\right)^{\alpha}\mathbb{I}_{[0,\infty)}(t)\), and its inverse, useful for the generation of random deviates and the computation of quantiles, is given by \(F^{-1}(t)=\left(\gamma\left((1-t)^{1/\alpha}-1\right)\right)^{1/2}\mathbb{I}_{(0,1)}(t)\).
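A minimal sketch of this inversion sampler for independent (still uncorrelated) single-look deviates, with helper names of our choosing:

```python
import numpy as np

def sample_ga0_single_look(alpha, gamma, size, rng=None):
    """Independent G_A^0(alpha, gamma, 1) deviates via the inverse CDF F^{-1}."""
    rng = np.random.default_rng() if rng is None else rng
    t = rng.uniform(size=size)
    return np.sqrt(gamma * ((1.0 - t) ** (1.0 / alpha) - 1.0))

z = sample_ga0_single_look(alpha=-3.0, gamma=1.0, size=200_000)
print(z.mean())   # should approach 3*pi/16 ~ 0.589 (cf. Figure 3)
```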
Figure 3: Densities of the \\(\\mathcal{G}^{0}_{A}\\) and Gaussian distributions with same mean values \\(\\mu=3\\pi/16\\) in semilogarithmic scale.
### Correlated clutter
Instead of defining the model over \(\mathbb{Z}^{2}\), this section presents a realistic description of finite-sized fields. Let \(Z_{A}=(Z_{A}(k,\ell))_{0\leq k\leq N-1,0\leq\ell\leq N-1}\) be the stochastic model that describes the return amplitude image.
**Definition 1**: _We say that \\(Z_{A}\\) is a \\(\\mathcal{G}^{0}_{A}(\\alpha,\\gamma,n)\\) stochastic process with correlation function \\(\\rho_{Z_{A}}\\) (in symbols \\(Z_{A}\\)\\(\\sim(\\mathcal{G}^{0}_{A}(\\alpha,\\gamma,n),\\rho_{Z_{A}})\\)) if for all \\(0\\leq i,j,k,\\ell\\leq N-1\\) holds that_
1. \\(Z_{A}(k,\\ell)\\) _obeys a_ \\(\\mathcal{G}^{0}_{A}(\\alpha,\\gamma,n)\\) _law;_
2. _the mean field is_ \\(\\mu_{Z_{A}}=E(Z_{A}(k,\\ell))\\)_;_
3. _the variance field is_ \\(\\sigma^{2}_{Z_{A}}=Var(Z_{A}(k,\\ell))\\)_;_
4. _the correlation function is_ \\(\\rho_{Z_{A}}((i,j),(k,\\ell))=\\left(E(Z_{A}(i,j)Z_{A}(k,\\ell))-\\mu_{Z_{A}}^{2} \\right)/\\sigma^{2}_{Z_{A}}\\)_._
The scale property of the parameter \\(\\gamma\\) implies that correlation function \\(\\rho_{Z_{A}}\\) and \\(\\gamma\\) are unrelated and, therefore, it is enough to generate a \\(Z_{A}^{1}\\sim(\\mathcal{G}^{0}_{A}(\\alpha,1,n),\\rho_{Z_{A}})\\) field and then simply multiply every outcome by \\(\\gamma^{1/2}\\) to get the desired field.
This paper presents a variation of a method used for the simulation of correlated Gamma variables, called the Transformation Method, which can be found in [10]. This method can be summarized in the following three steps:
1. Generate independent outcomes from a convenient distribution.
2. Introduce correlation in these data.
3. Transform the correlated observations into data with the desired marginal properties [11].
The transformation that guarantees the validity of this procedure is obtained from the cumulative distribution functions of the data obtained in step 2, and from the desired set of distributions.
Recall that if \\(U\\) is a continuous random variable with cumulative distribution function \\(F_{U}\\) then \\(F_{U}(U)\\) obeys a uniform \\(\\mathcal{U}(0,1)\\) law and, reciprocally, if \\(V\\) obeys a \\(\\mathcal{U}(0,1)\\) distribution then \\(F_{U}^{-1}(V)\\) is \\(F_{U}\\) distributed. In order to use this method it is necessary to know the correlation that the random variables will have after the transformation, besides the function \\(F_{U}^{-1}\\).
The method here studied consists of the following steps:
1. propose a correlation structure for the \\(\\mathcal{G}^{0}_{A}\\) field, say, the function \\(\\rho_{Z_{A}}\\);
2. generate a field of independent identically distributed standard Gaussian observations;
3. compute \\(\\tau\\), the correlation structure to be imposed to the Gaussian field from \\(\\rho_{Z_{A}}\\), and impair it using the Fourier transform without altering the marginal properties;
4. transform the correlated Gaussian field into a field of observations of identically distributed \\(\\mathcal{U}(0,1)\\) random variables, using the cumulative distribution function of the Gaussian distribution (\\(\\Phi\\));
5. transform the uniform observations into \\(\\mathcal{G}^{0}_{A}\\) outcomes, using the inverse of the cumulative distribution function of the \\(\\mathcal{G}^{0}_{A}\\) distribution (\\(G^{-1}\\)).
The function that relates \\(\\rho_{Z_{A}}\\) and \\(\\tau\\) is computed using numerical tools. In principle, there are no restrictions on the possible roughness parameters values that can be obtained by this method, but issues related to machine precision must be taken into account. Another important issue is that not every desired final correlation structure \\(\\rho_{Z_{A}}\\) is mapped onto a feasible intermediate correlation structure \\(\\tau\\). The procedure is presented in detail in the next section.
## 3 Transformation Method
Let \\(G(\\cdot,(\\alpha,\\gamma,n))\\) be the cumulative distribution function of a \\(\\mathcal{G}^{0}_{A}(\\alpha,\\gamma,n)\\) distributed random variable. As previously stated,
\\[G(x,(\\alpha,\\gamma,n))=\\Upsilon_{2n,-2\\alpha}\\left(-\\frac{\\alpha x^{2}}{\\gamma} \\right),\\]
where \\(\\Upsilon_{\
u_{1},\
u_{2}}\\) is the cumulative distribution function of a Snedecor \\(F_{\
u_{1},\
u_{2}}\\) distribution, i.e.,
\\[\\Upsilon_{\
u_{1},\
u_{2}}(x)=\\frac{\\Gamma\\left(\\frac{\
u_{1}+\
u_{2}}{2} \\right)}{\\Gamma\\left(\\frac{\
u_{2}}{2}\\right)\\Gamma\\left(\\frac{\
u_{2}}{2} \\right)}\\left(\\frac{\
u_{1}}{\
u_{2}}\\right)^{\\frac{\
u_{1}}{2}}\\int_{0}^{x}t ^{\\frac{\
u_{1}-2}{2}}\\left(1+\\frac{\
u_{1}}{\
u_{2}}t\\right)^{-\\frac{\
u_{1 }+\
u_{2}}{2}}dt.\\]
The inverse of \\(G(\\cdot,(\\alpha,\\gamma,n))\\) is, therefore,
\\[G^{-1}(t,(\\alpha,\\gamma,n))=\\sqrt{-\\frac{\\gamma}{\\alpha}\\Upsilon_{2n,-2\\alpha }^{-1}(t)}.\\]
To generate \\(Z^{1}_{A}=(Z^{1}_{A}(k,\\ell))_{0\\leq k\\leq N-1,0\\leq\\ell\\leq N-1}\\sim(\\mathcal{ G}^{0}_{A}(\\alpha,1,n),\\rho_{Z_{A}})\\) using the inversion method we define every coordinate of the process \\(Z_{A}\\) as a transformation of a Gaussian process \\(\\zeta\\) as \\(Z^{1}_{A}(i,j)=G^{-1}(\\Phi(\\zeta(i,j)),(\\alpha,1,n))\\), where \\(\\zeta=(\\zeta(i,j))_{0\\leq i\\leq N-1,0\\leq j\\leq N-1}\\) is a stochastic process such that \\(\\zeta(i,j)\\) is a standard Gaussian random variable and with correlation function \\(\\tau_{\\zeta}\\) (i.e. where \\(\\tau_{\\zeta}((i,j),(k,\\ell))=E(\\zeta(i,j)\\zeta(k,\\ell))\\)) satisfying
\\[\\rho_{Z_{A}}((i,j),(k,\\ell))=\\varrho_{(\\alpha,n)}(\\tau_{\\zeta}((i,j),(k,\\ell))) \\tag{3}\\]
for all \\(0\\leq i,j,k,\\ell\\leq N-1\\) and \\((i,j)\
eq(k,\\ell)\\) and where \\(\\Phi\\) denotes the cumulative distribution function of a standard Gaussian random variable.
Posed as a diagram, the method consists of the following transformations among Gaussian (\(\mathcal{N}\)), Uniform (\(\mathcal{U}\)) and \(\mathcal{G}^{0}_{A}\)-distributed random variables:

\[\mathcal{N}(0,1)\;\xrightarrow{\;\Phi\;}\;\mathcal{U}(0,1)\;\xrightarrow{\;G^{-1}\;}\;\mathcal{G}^{0}_{A}(\alpha,1,n).\]
A central issue of the method is finding the correlation structure that the Gaussian field has to obey, in order to have the desired \\(\\mathcal{G}^{0}_{A}\\) field after the transformation. The function \\(\\varrho_{(\\alpha,n)}\\) is defined on \\((-1,1)\\) by
\\[\\varrho_{(\\alpha,n)}(\\tau)=\\frac{R_{(\\alpha,n)}(\\tau)-\\left(\\frac{1}{n}\\right) \\left(\\frac{\\Gamma(n+\\frac{1}{2})\\Gamma(-\\alpha-\\frac{1}{2})}{\\Gamma(n)\\Gamma (-\\alpha)}\\right)^{2}}{-\\frac{1}{1+\\alpha}-\\left(\\frac{1}{n}\\right)\\left( \\frac{\\Gamma(n+\\frac{1}{2})\\Gamma(-\\alpha-\\frac{1}{2})}{\\Gamma(n)\\Gamma(- \\alpha)}\\right)^{2}},\\]
with
\\[R_{(\\alpha,n)}(\\tau) =\\iint_{\\mathbb{R}^{2}}G^{-1}(\\Phi(u),(\\alpha,1,n))G^{-1}(\\Phi( v),(\\alpha,1,n))\\phi_{2}(u,v,\\tau)))dudv\\] \\[=\\frac{1}{|\\alpha|\\,2\\pi\\sqrt{1-\\tau^{2}}}\\iint_{\\mathbb{R}^{2} }\\sqrt{\\Upsilon_{2n,-2\\alpha}^{-1}(\\Phi(u)).\\Upsilon_{2n,-2\\alpha}^{-1}(\\Phi (v))}\\exp\\left(-\\frac{u^{2}-2\\tau.u.v+v^{2}}{2(1-\\tau^{2})}\\right)dudv,\\]
where
\\[\\phi_{2}(u,v,\\tau)=\\frac{1}{2\\pi\\sqrt{(1-\\tau^{2})}}\\exp\\left(-\\frac{u^{2}-2 \\tau.u.v+v^{2}}{2(1-\\tau^{2})}\\right).\\]
Note that \\(R_{(\\alpha,n)}(\\tau_{\\zeta}((i,j),(k,\\ell)))=E(Z^{1}_{A}(i,j)Z^{1}_{A}(k,\\ell))\\) for all \\(0\\leq i,j,k,\\ell\\leq N-1\\) and \\((i,j)\
eq(k,\\ell)\\).
Finding \(\tau_{\zeta}\) given \(\rho_{Z_{A}}\) is equivalent to inverting the function \(\varrho_{(\alpha,n)}\). This inverse is only available through numerical methods, a fact that may impose restrictions on the use of this simulation method.
### Inversion of \\(\\varrho_{(\\alpha,n)}\\)
The function \\(\\varrho_{(\\alpha,n)}\\) has the following properties:
1. The set \\(\\{\\varrho_{(\\alpha,n)}(\\tau)\\colon\\tau\\in(-1,1)\\}\\) is strictly included in \\((-1,1)\\), and depends on the values of \\(\\alpha\\).
2. The function \\(\\varrho_{(\\alpha,n)}\\) is strictly increasing in \\((-1,1)\\).
3. The values \\(\\varrho_{(\\alpha,n)}(\\tau)\\) are strictly negative for all \\(\\tau<0\\).
Let \\(\\eth_{(\\alpha,n)}\\) be the inverse function of \\(\\varrho_{(\\alpha,n)}\\). Then, in order to calculate its value for a fixed \\(\\rho\\in(-1,1)\\), we have to solve the following equation in \\(\\tau\\):
\\[R_{(\\alpha,n)}(\\tau)+\\frac{\\rho}{1+\\alpha}+(\\rho-1)\\left(\\frac{1}{n}\\right) \\left(\\frac{\\Gamma(n+\\frac{1}{2})\\Gamma(-\\alpha-\\frac{1}{2})}{\\Gamma(n)\\Gamma( -\\alpha)}\\right)^{2}=0\\]
It then follows from the properties of \(\varrho_{(\alpha,n)}\) that, for certain values of \(\alpha\), the set of \(\rho\) for which this equation is solvable is a strict subset of \((-1,1)\). Table 1 shows some values of the function \(\eth_{(\alpha,n)}\) for specific values of \(\rho\), \(n\) and \(\alpha\). Figure 4 shows \(\tau\) as a function of \(\rho\) for the \(n=1\) case and varying values of \(\alpha\); it can be seen that the smaller \(\alpha\) is, the closer this function is to the identity. This is sensible, since the \(\mathcal{G}_{A}^{0}\) distribution becomes more and more symmetric as \(\alpha\to-\infty\) and, therefore, simulating outcomes from it becomes closer and closer to the problem of obtaining Gaussian deviates.
Figure 5 presents the same function for \\(\\alpha=-1.5\\) and varying number of looks. It is noticeable that \\(\\tau\\) is far less sensitive to \\(n\\) than to \\(\\alpha\\), a feature that suggests a shortcut for computing the values of Table 1: disregarding the dependence on \\(n\\), i.e., considering \\(\\tau(\\rho,\\alpha,n)\\simeq\\tau(\\rho,\\alpha,n_{0})\\) for a fixed convenient \\(n_{0}\\).
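Given such a numerical \(\varrho_{(\alpha,n)}\), the inverse \(\eth_{(\alpha,n)}\) can be sketched with a scalar root finder, here SciPy's `brentq`, reusing the `varrho` helper sketched above; the bracketing fails, as it should, when \(\rho\) lies outside the attainable range:

```python
from scipy.optimize import brentq

def eth(rho, alpha, n):
    """Inverse of varrho(); raises if rho is outside the attainable range."""
    return brentq(lambda tau: varrho(tau, alpha, n) - rho, -0.999, 0.999)

print(eth(0.5, alpha=-1.5, n=1))   # Gaussian correlation needed for rho = 0.5
```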
The source FORTRAN file with routines for computing the functions \\(\\varrho_{(\\alpha,n)}\\) and \\(\\eth_{(\\alpha,n)}\\) can be obtained from the first author of this paper.
### Generation of the process \\(\\zeta\\)
The process \\(\\zeta\\), that consists of spatially correlated standard Gaussian random variables, will be generated using a spectral technique that employs the Fourier transform. This method has computational advantages with respect to the direct application of a convolution filter. Again, the concern here is to define a finite process instead of working on \\(\\mathbb{Z}^{2}\\) for the sake of simplicity.
Consider the following sets:
\\[R_{1} =\\{(k,\\ell)\\colon 0\\leq k,\\ell\\leq N/2\\},\\] \\[R_{2} =\\{(k,\\ell)\\colon N/2+1\\leq k\\leq N-1,0\\leq\\ell\\leq N/2\\},\\] \\[R_{3} =\\{(k,\\ell)\\colon 0\\leq k\\leq N/2,N/2+1\\leq\\ell\\leq N-1\\},\\] \\[R_{4} =\\{(k,\\ell)\\colon N/2+1\\leq k\\leq N-1,N/2+1\\leq\\ell\\leq N-1\\},\\] \\[R_{N} =R_{1}\\cup R_{2}\\cup R_{3}\\cup R_{4}=\\{(k,\\ell)\\colon 0\\leq k,\\ell \\leq N-1\\},\\] \\[\\overline{R_{N}} =\\{(k,\\ell)\\colon\\,-(N-1)\\leq k,\\ell\\leq N-1\\}.\\]
Let \\(\\rho\\colon R_{1}\\longrightarrow(-1,1)\\) be a function, extended onto \\(\\overline{R_{N}}\\) by:
\\[\\rho(k,\\ell)=\\left\\{\\begin{array}{ccc}\\rho(N-k,\\ell)&\\mbox{if}&(k,\\ell)\\in R _{2},\\\\ \\rho(k,N-\\ell)&\\mbox{if}&(k,\\ell)\\in R_{3},\\\\ \\rho(N-k,N-\\ell)&\\mbox{if}&(k,\\ell)\\in R_{4},\\\\ \\rho(N+k,\\ell)&\\mbox{if}&-(N-1)\\leq k<0\\leq\\ell\\leq N-1,\\\\ \\rho(k,N+\\ell)&\\mbox{if}&-(N-1)\\leq\\ell<0\\leq k\\leq N-1,\\\\ \\rho(N+k,N+\\ell)&\\mbox{if}&-(N-1)\\leq k,\\ell<0.\\end{array}\\right.\\]
Let \\(Z_{A}=(Z_{A}(k,\\ell))_{0\\leq k\\leq N-1,0\\leq l\\leq N-1}\\) be a \\(\\mathcal{G}^{0}_{A}(\\alpha,\\gamma,n)\\) stochastic process with correlation function \\(\\rho_{Z_{A}}\\) defined by
\\[\\rho_{Z_{A}}((k_{1},\\ell_{1}),(k_{2},\\ell_{2}))=\\rho(k_{2}-k_{1},\\ell_{2}-\\ell_ {1}).\\]
Assume that \\(\\tau(k,\\ell)=\\mathcal{G}_{(\\alpha,n)}(\\rho(k,\\ell))\\) is defined for all \\((k,\\ell)\\) in \\(R_{N}\\).
Let \\(\\mathcal{F}(\\tau)\\colon R_{N}\\longrightarrow\\mathbb{C}\\) be the normalized Fourier Transform of \\(\\tau\\), that is,
\\[\\mathcal{F}(\\tau)(k,\\ell)=\\frac{1}{N^{2}}\\sum_{k_{1}=0}^{N-1}\\sum_{\\ell_{1}=0 }^{N-1}\\tau(k_{1},\\ell_{1})\\exp(-2\\pi i(k\\cdot k_{1}+\\ell\\cdot\\ell_{1})/N^{2}).\\]
Let \\(\\psi\\colon R_{N}\\longrightarrow\\mathbb{C}\\) be defined by \\(\\psi(k,\\ell)=\\sqrt{\\mathcal{F}(\\tau)(k,\\ell)}\\) and let the function \\(\\theta:\\overline{R_{N}}=\\{(k,\\ell)\\colon-(N-1)\\leq k,\\ell\\leq N-1\\} \\longrightarrow\\mathbb{R}\\) be defined by
\\[\\theta(k,\\ell)=\\mathcal{F}^{-1}(\\psi)(k,\\ell)/N=\\frac{1}{N}\\sum_{k_{1}=0}^{N- 1}\\sum_{\\ell_{1}=0}^{N-1}\\psi(k_{1},\\ell_{1})\\exp(2\\pi i(k\\cdot k_{1}+\\ell\\cdot \\ell_{1})/N^{2}),\\]
(the normalized inverse Fourier Transform of \\(\\psi\\)) for all \\((k,\\ell)\\in R_{N}\\); and
\\[\\theta(k,\\ell)=\\left\\{\\begin{array}{r@{\\quad\\quad}l}\\theta(N+k,\\ell)\\quad \\quad&\\text{if}\\quad-(N-1)\\leq k<0\\leq\\ell\\leq N-1,\\\\ \\theta(k,N+\\ell)\\quad\\quad&\\text{if}\\quad-(N-1)\\leq\\ell<0\\leq k\\leq N-1,\\\\ \\theta(N+k,N+\\ell)\\quad\\quad&\\text{if}\\quad\\quad&-(N-1)\\leq k,\\ell<0.\\end{array}\\right.\\]
A straightforward calculation shows that
\\[(\\theta*\\theta)(k,\\ell)=\\sum_{k_{1}=0}^{N-1}\\sum_{\\ell_{1}=0}^{N-1}\\theta(k_{1 },\\ell_{1})\\theta(k-k_{1},\\ell-\\ell_{1})=\\tau(k,\\ell),\\]
for all \\((k,\\ell)\\in R_{N}\\).
**Remark 1**: _We can see that \\(\\mathcal{F}(\\tau)(k,\\ell)\\geq 0\\) and the last equality for all \\((k,\\ell)\\in R_{N}\\) is easily deduced from the results in Section 5.5 of [12]; more details can be seen in [13]._
Finally we define \\(\\zeta=(\\zeta(i,j))_{0\\leq i\\leq N-1,0\\leq j\\leq N-1}\\) by
\\[\\zeta(k,\\ell)=(\\theta*\\xi)(k,\\ell)=N\\mathcal{F}^{-1}((\\psi\\mathcal{F}(\\xi)))(k,\\ell),\\]
where \\(\\xi=(\\xi(k,\\ell))_{(k,\\ell)\\colon R_{N}}\\) is a Gaussian white noise with standard deviation \\(1\\).
Then it is easy to prove that \\(\\zeta=(\\zeta(i,j))_{0\\leq i\\leq N-1,0\\leq j\\leq N-1}\\) is a stochastic process such that \\(\\zeta(i,j)\\) is a standard Gaussian random variable with correlation function \\(\\tau_{\\zeta}\\) satisfying (3).
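With numpy's unnormalized FFT conventions the normalization constants above simplify; a minimal sketch of this spectral construction of \(\zeta\) is given below (the clipping of tiny negative spectral values is our numerical safeguard against round-off):

```python
import numpy as np

def gaussian_field(tau, rng=None):
    """N x N stationary Gaussian field, unit variance, correlations tau."""
    rng = np.random.default_rng() if rng is None else rng
    spec = np.fft.fft2(tau).real            # proportional to F(tau); must be >= 0
    psi = np.sqrt(np.maximum(spec, 0.0))    # frequency-domain mask
    xi = rng.standard_normal(tau.shape)     # Gaussian white noise, sd 1
    return np.fft.ifft2(psi * np.fft.fft2(xi)).real
```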
### Implementation
The results presented in previous sections were implemented using the IDL Version 5.3 Win 32 [14] development platform, with the following algorithm:
**Algorithm 1**: _Input: \\(\\alpha<-1\\), \\(\\gamma>0\\), \\(n\\geq 1\\) integer, \\(\\rho\\) and \\(\\tau\\) functions as above, then:_
1. _Compute the frequency domain mask_ \\(\\psi(k,\\ell)=\\sqrt{\\mathcal{F}(\\tau)(k,\\ell)}\\)_._
2. _Generate_ \\(\\xi=(\\xi(k,\\ell))_{(k,\\ell)\\in R_{N}}\\)_, the Gaussian white noise with zero mean and variance_ \\(1\\)_._
3. _Calculate_ \\(\\zeta(k,\\ell)=N\\mathcal{F}^{-1}((\\psi\\cdot\\mathcal{F}(\\xi)))(k,\\ell)\\)_, for every_ \\((k,\\ell)\\)_._
4. _Obtain_ \\(Z^{1}_{A}(k,\\ell)=G^{-1}(\\Phi(\\zeta(k,\\ell)),(\\alpha,1,n))\\)_, for every_ \\((k,\\ell)\\)_._
5. _Return_ \\(Z_{A}(k,\\ell)=\\sqrt{\\gamma}Z^{1}_{A}(k,\\ell)\\) _for every_ \\((k,\\ell)\\)_._Simulation results
In practice both parametric and non-parametric correlation structures are of interest. The former rely on analytic forms for \\(\\rho\\), while the latter merely specify values for the correlation. Parametric forms for the correlation structure are simpler to specify, and its inference amounts to estimating a few numerical values; non-parametric forms do not suffer from lack of adequacy, but demand the specification (and possibly the estimation) of potentially large sets of parameters.
In the following examples the technique presented above will be used to generate samples from both parametric and non-parametric correlation structures.
**Example 1** (Parametric situation): _This correlation model is very popular in applications. Consider \\(L\\geq 2\\) an even integer, \\(0<a<1\\), \\(0<\\varepsilon\\) (for example \\(\\varepsilon=0.001\\)), \\(\\alpha<-1\\) and \\(n\\geq 1\\). Let \\(h\\colon\\mathbb{R}\\longrightarrow\\mathbb{R}\\) be defined by_
\\[h(x)=\\left\\{\\begin{array}{ll}x&\\mbox{if}\\quad|x|\\geq\\varepsilon,\\\\ 0&\\mbox{if}\\quad|x|<\\varepsilon.\\end{array}\\right.\\]
_Let \\(\\rho\\colon R_{1}\\longrightarrow(-1,1)\\) be defined by \\(\\rho(0,0)=1\\) if \\((k,\\ell)\
eq(0,0)\\) in \\(R_{1}\\) by:_
\\[\\rho(k,\\ell)=\\left\\{\\begin{array}{ll}h(a\\exp(-k^{2}/L^{2}))&\\mbox{if}\\quad k \\geq\\ell,\\\\ -h(a\\exp(-\\ell^{2}/L^{2}))&\\mbox{if}\\quad k<\\ell.\\end{array}\\right.\\]
_The image shown in Figure 6, of size \\(128\\times 128,\\) was obtained assuming \\(a=0.4\\), \\(L=2\\), \\(\\alpha=-1.5\\), \\(\\gamma=1.0\\) and \\(n=1\\)._
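Chaining the helpers sketched in Section 3 (`eth`, `gaussian_field` and `G_inv` above, which are names of our choosing, assumed to be in scope), a field like this one can be reproduced end-to-end; the per-value inversion is slow but adequate for a \(128\times 128\) sketch:

```python
import numpy as np
from scipy.stats import norm

N, a, L, alpha, gamma, n = 128, 0.4, 2, -1.5, 1.0, 1

def rho_lag(k, l):                       # Example 1 correlation on R_1
    v = a * np.exp(-max(k, l) ** 2 / L ** 2)
    v = 0.0 if abs(v) < 1e-3 else v      # the threshold function h
    return v if k >= l else -v

lag = np.minimum(np.arange(N), N - np.arange(N))   # circular extension
rho = np.array([[rho_lag(ki, li) for li in lag] for ki in lag])
rho[0, 0] = 1.0

tau = np.empty_like(rho)                 # invert varrho once per distinct value
cache = {1.0: 1.0, 0.0: 0.0}
for r in np.unique(rho):
    t = cache.get(float(r))
    tau[rho == r] = eth(float(r), alpha, n) if t is None else t

zeta = gaussian_field(tau)               # correlated standard Gaussian field
u = np.clip(norm.cdf(zeta), 1e-12, 1.0 - 1e-12)
Z = np.sqrt(gamma) * G_inv(u, alpha, 1.0, n)
```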
**Example 2** (Mosaic): _A mosaic of nine simulated fields is shown in Figure 7. Each field is of size \\(128\\times 128\\) and obeys the model presented in Example 1 with \\(a=0.4\\), \\(\\gamma=1.0\\), \\(n=1\\), roughness \\(\\alpha\\) varying in the rows (\\(-1.5\\), \\(-3.0\\) and \\(-9.0\\) from top to bottom) and correlation length \\(L\\) varying along the columns (\\(2\\), \\(4\\) and \\(8\\) from left to right)._
**Example 3** (Non-parametric situation): _The starting point is the urban area seen in Figure 8. This \(128\times 128\)-pixel image is a small sample of data obtained by the E-SAR system over an urban area. The complete dataset was used as input for estimating the correlation structure, defined by a \(16\times 16\) correlation matrix, using Pearson's procedure (\(\hat{\rho}\) below; see Appendix A). The corresponding correlation structure for the Gaussian process is \(\tau\) below; in both matrices only values bigger than \(10^{-3}\) are shown. The roughness and scale parameters were estimated using the moments technique. The simulated \(\mathcal{G}_{A}^{0}\) field is shown in Figure 9._
\\[\\hat{\\rho}=\\left(\\begin{array}{cccc}1.00&0.65&0.22\\\\ 0.97&0.63&0.22\\\\ 0.88&0.58&0.21\\\\ 0.76&0.50&0.19\\\\ 0.64&0.43&0.16\\\\ 0.53&0.36&0.14\\\\ 0.43&0.30&0.12\\\\ 0.36&0.25&0.10\\\\ 0.29&0.20&0.00\\\\ 0.24&0.17&0.00\\\\ 0.20&0.13&0.00\\\\ 0.16&0.11&0.00\\\\ 0.13&0.00&0.00\\\\ 0.11&0.00&0.00\\\\ \\end{array}\\right),\\tau=\\left(\\begin{array}{cccc}1.00&0.76&0.32\\\\ 0.98&0.74&0.32\\\\ 0.93&0.70&0.31\\\\ 0.85&0.63&0.28\\\\ 0.75&0.56&0.24\\\\ 0.68&0.49&0.21\\\\ 0.56&0.42&0.18\\\\ 0.49&0.36&0.16\\\\ 0.41&0.294&0.00\\\\ 0.35&0.25&0.00\\\\ 0.29&0.20&0.00\\\\ 0.24&0.17&0.00\\\\ 0.20&0.00&0.00\\\\ 0.17&0.00&0.00\\\\ \\end{array}\\right)\\]Conclusions and future work
A method for the simulation of correlated clutter with a desired marginal law and correlation structure was presented. The method yields precise and controlled first- and second-order statistics, and can be easily implemented using standard numerical tools.
The adequacy of the method for the simulation of several scenarios will be assessed using real data, following the procedure presented in Example 3: estimating the underlying correlation structure and then simulating fields with it. A mosaic of true and synthetic textures will be composed and made available for use in algorithm assessment.
## Acknowledgements
This work was partially supported by Conicor and SeCyT (Argentina) and CNPq (Brazil).
## References
* [1] A. C. Frery, H.-J. Muller, C. C. F. Yanasse, and S. J. S. Sant'Anna. A model for extremely heterogeneous clutter. _IEEE Transactions on Geoscience and Remote Sensing_, 35(3):648-659, May 1997.
* [2] A. C. Frery, A. H. Correia, C. D. Renno, C. C. Freitas, J. Jacobo-Berlles, M. E. Mejail, and K. L. P. Vasconcellos. Models for synthetic aperture radar image analysis. _Resenhas (IME-USP)_, 4(1):45-77, 1999.
* [3] M. E. Mejail, A. C. Frery, J. Jacobo-Berlles, and O. H. Bustos. Approximation of distributions for SAR images: proposal, evaluation and practical consequences. _Latin American Applied Research_, 31:83-92, 2001.
* [4] O. H. Bustos, M. M. Lucini, and A. C. Frery. M-estimators of roughness and scale for GA0-modelled SAR imagery. _EURASIP Journal on Applied Signal Processing_, 2002(1):105-114, Jan. 2002.
* [5] F. Cribari-Neto, A. C. Frery, and M. F. Silva. Improved estimation of clutter properties in speckled imagery. _Computational Statistics and Data Analysis_, 40(4):801-824, 2002.
* [6] O. E. Barndorff-Nielsen and P. Blaesild. Hyperbolic distributions and ramifications: Contributions to theory and applications. In C. Taillie and B. A. Baldessari, editors, _Statistical distributions in scientific work_, pages 19-44. Reidel, Dordrecht, 1981.
* [7] A. C. Frery, C. C. F. Yanasse, and S. J. S. Sant'Anna. Alternative distributions for the multiplicative model in SAR images. In _International Geoscience and Remote Sensing Symposium: Quantitative Remote Sensing for Science and Applications_, pages 169-171, Florence, Jul. 1995. IEEE Computer Society. IGARSS'95 Proc.
* [8] M. E. Mejail, J. C. Jacobo-Berlles, A. C. Frery, and O. H. Bustos. Classification of SAR images using a general and tractable multiplicative model. _International Journal of Remote Sensing_. In press.
* [9] M. E. Mejail, J. Jacobo-Berlles, A. C. Frery, and O. H. Bustos. Parametric roughness estimation in amplitude SAR images under the multiplicative model. _Revista de Teledeteccion_, 13:37-49, 2000.
* [10] O. H. Bustos, A. G. Flesia, and A. C. Frery. Generalized method for sampling spatially correlated heterogeneous speckled imagery. _EURASIP Journal on Applied Signal Processing_, 2001(2):89-99, June 2001.
* [11] C. Oliver and S. Quegan. _Understanding Synthetic Aperture Radar Images_. Artech House, Boston, 1998.
* [12] A. K. Jain. _Fundamentals of Digital Image Processing_. Prentice-Hall International Editions, Englewood Cliffs, NJ, 1989.
* [13] S. M. Kay. _Modern Spectral Estimation: Theory & Application_. Prentice Hall, Englewood Cliffs, NJ, USA, 1988.
* [14] Research Systems. _Using IDL_. http://www.rsinc.com, 1999.
## Appendix A Estimating correlation structure with Pearson's method
Consider the image \\(\\mathbf{z}\\) with \\(M\\) rows and \\(N\\) columns
\\[\\mathbf{z}=\\left[\\begin{array}{ccc}z(0,0)&\\cdots&z(N-1,0)\\\\ \\vdots&\\ddots&\\vdots\\\\ z(0,M-1)&\\cdots&z(
where
\\[C_{\\mathbf{Z}}((m,n),(k,\\ell)) =\\sum_{j=0}^{n_{f}-1}\\sum_{i=0}^{n_{c}-1}\\left(z(2n_{v}i+m,2n_{v}j+n )-\\overline{z}(m,n)\\right)\\left(z(2n_{v}i+k,2n_{v}j+\\ell)-\\overline{z}(k,\\ell) \\right),\\] \\[s_{\\mathbf{Z}}(m,n) =\\sqrt{\\sum_{j=0}^{n_{f}-1}\\sum_{i=0}^{n_{c}-1}\\left(z(2n_{v}i+m,2 n_{v}j+n)-\\overline{z}(m,n)\\right)^{2}},\\] \\[\\overline{z}(m,n) =\\frac{1}{n_{c}n_{f}}\\sum_{j=0}^{n_{f}-1}\\sum_{i=0}^{n_{c}-1}z(2 n_{v}i+m,2n_{v}j+n).\\]Figure 4: Values of \\(\\tau\\) as a function of \\(\\rho\\) for \\(n=1\\) and varying \\(\\alpha\\).
Figure 5: Values of \\(\\tau\\) as a function of \\(\\rho\\) for \\(\\alpha=-1.5\\) and varying \\(n\\).
Figure 6: Correlated \\(\\mathcal{G}^{0}(-1.5,1,1)\\)-distributed amplitude image with the correlation structure defined in Example 1.
Figure 7: Mosaic of nine simulated fields.
Figure 8: Urban area as seen by the E-SAR system.
Figure 9: Simulated urban area using a non-parametric correlation structure.
# Constraints on the Properties of the Neutron Star XTE J1814-338 from Pulse Shape Models
Denis A. Leahy1, Sharon M. Morsink2, Yi-Ying Chung3, & Yi Chou3
Footnote 1: affiliation: Department of Physics and Astronomy, University of Calgary, Calgary AB, T2N 1A4, Canada; [email protected]
Footnote 2: affiliation: Department of Physics, University of Alberta, Edmonton, AB, T6G 2G7, Canada; [email protected]
Footnote 3: affiliation: Graduate Institute of Astronomy, National Central University, Jhongli 32001, Taiwan; [email protected]
## 1. Introduction
The accretion-powered millisecond-period X-ray pulsars are promising targets for constraining the neutron star equation of state (EOS) through the modeling of emission from hot spots on the pulsar's surface. The first pulsar discovered in this class, SAX J1808.4-3658 (Wijnands & van der Klis, 1998), has a spectrum consistent (Gierlinski et al., 2002) with emission from a hot spot on the star's surface. Pulse shape modeling of rapidly rotating neutron stars relies on two relativistic effects: the gravitational bending of light rays reduces the modulation of the pulsed emission and depends on the mass to radius ratio \\(M/R\\); and the Doppler boosting due to the star's rotation creates an asymmetry in the pulse shape and depends on the star's radius \\(R\\). These features, combined with reasonable models of the emission properties at the neutron star's surface can be used to constrain the neutron star's mass and radius and hence the EOS of supra-nuclear density matter.
XTE J1814-338 (hereafter XTE J1814) was discovered during outburst in June 2003 (Markwardt & Swank, 2003), and is an accretion powered millisecond pulsar with spin frequency 314.36 Hz and orbital period of 4.3 hr (Markwardt et al., 2003). A detailed timing analysis for XTE J1814 was performed by Papitto et al. (2007) to obtain accurate values for orbital period, projected semimajor axis, pulse spin frequency and spin down rate. A similar analysis of the pulse arrival times was carried out by Watts & Strohmayer (2006) and Chung (2007), which both included an analysis of phase lags. Soft lags were found in the 2-10 keV energy band, similar to those for SAX J1808-3658 and consistent with an origin in Doppler boosting of a Comptonized pulse component with a much broader emission pattern than the blackbody component.
Strohmayer et al. (2003) found the same frequency in the X-ray bursts as was found in the persistent emission, but with a lower second harmonic content. Watts et al. (2008) showed that the X-ray burst oscillations are tightly phase-locked with the non-burst pulsations. Bhattacharyya et al. (2005) modeled the oscillations during X-ray bursts with a hot spot model for a spherical star and for 2 equations of state. Using a large grid of models they found an upper limit on the compactness, \(R_{S}/R<0.48\), where \(R_{S}\) is the Schwarzschild radius.
There are pulse shape models for a few other X-ray pulsars. The 1.2 s period X-ray pulsar Her X-1 was modeled by Leahy (2004) using a model that includes accretion columns. The model for Her X-1 constrains the neutron star EOS to a fairly moderate stiffness (Leahy, 2004). Zavlin & Pavlov (1998) and Bogdanov et al. (2007) have modeled the X-ray emission from the 5.8 ms period radio pulsar PSR J0437-4715 using a Hydrogen atmosphere model. In the case of PSR J0437-4715, Bogdanov et al. (2007) found that a simple isotropic blackbody model is inconsistent with the data. In their models, Bogdanov et al. (2007) showed that the radius of PSR J0437-4715 must be larger than 6.7 km if the mass is \\(1.4M_{\\odot}\\). Unfortunately the mass of this pulsar is not well-constrained. Bogdanov et al. (2008) have shown that constraints on radius for a number of other ms radio pulsars are also possible.
Constraints on SAX J1808.4-3658 (with a spin period of 2.5 ms) were made by Poutanen & Gierlinski (2003) using data from the 1998 outburst. The modeling done by Poutanen & Gierlinski (2003) included blackbody emission from a hot spot that is Compton scattered by electrons above the hot spot. Their model makes use of a spherical model for the star's surface and does not include the effects of relative time-delays caused by the different time of flights for photons emitted from differentparts of the star's surface. More recently Cadeau et al. (2005) and Cadeau et al. (2007) have shown that time-delays and the star's oblate shape are important factors that can affect the outcome of pulse-shape modeling for rapidly rotating pulsars such as SAX J1808.4-3658. The 1998 outburst data for SAX J1808.4-3658 was revisited using a models that included time-delays and oblateness (Leahy et al., 2008) with the result that the EOS for SAX J1808.4-3658 is constrained to be very soft.
In this paper we model the accretion-powered pulsations of XTE J1814 using a hot-spot model. The hot-spot model allows for one or two circular hot spots with a two-component spectrum. The spectral model includes isotropic blackbody emission and an anisotropic Compton-scattered component described by a power-law. The photons are propagated to the observer using the oblate Schwarzschild approximation (Morsink et al., 2007) which allows the photon initial conditions to be placed on an oblate-shaped initial surface determined by an empirical formula. The Schwarzschild metric is used to compute the photon bending angles and time delays since it has been shown (Cadeau et al., 2007) that the corrections induced by the Kerr black hole metric or a numerical metric for a rotating star are insignificant compared to the corrections induced by the oblate shape. In order to do the pulse-shape modeling, we construct light curves in two narrow energy bands, 2-3 keV and 7-9 keV. We first analyse a composite pulse-shape constructed from 23 days of data and then consider pulse-shapes constructed from single days of data in order to determine whether variations of the pulse-shape with time are significant.
The outline of this paper is as follows. In Section 2 the method used to construct the light curves and analyse them is outlined. The results of the best-fit models are presented in Section 3. A discussion of the results is presented in Section 4.
## 2. Method
### Construction of Light Curves
Pulse shapes for the accretion-powered pulsations are constructed using the ephemeris given by Chung (2007). Data are limited to the first 23 days of the 2003 outburst, June 5 to June 27, 2003 (JD 2452795-2452817), in order to avoid the later period of the outburst when the flux and pulse shape became more erratic (see, for example, Watts et al. (2005)). X-ray bursts were removed from the data over the interval from 100 s before to 100 s after the start of each burst.
Although the RXTE observations include data in the range of 2 - 50 keV, we have chosen to concentrate on the lower energy range from 2 - 10 keV for two reasons. First, the data is noisier at energies above 10 keV. Second, the Chandra observations by Krauss et al. (2005) constrain the spectrum in the 2 - 10 keV range. It is also useful to separate the data into narrow energy bands in order to separate the different spectral components. We have chosen two narrow bands, the 2-3 keV band and the 7-9 keV band based on the Chandra spectrum. The narrow-band pulse shapes constructed using data from the full observation period (June 5 - 27) are shown in Figure 1.
We have also investigated the variability of the pulse shape with time. In order to do this, the data was separated into one-day segments and separate light curves constructed for each day. It is not computationally feasible to model all days simultaneously, so we instead focus on two separate days. The days were chosen by comparing light curves in the 2 - 10 keV range for different days using a \\(\\chi^{2}\\) test and selecting two days which differ the most from each other. This also has the effect of selecting days with intrinsically smaller error bars. The days resulting from this selection process correspond to June 20 and 27. Light curves for these two days, in the two narrow energy bands are shown in Figure 2.
### Analysis of Light Curves
Krauss et al. (2005) observed XTE J1814 with Chandra on June 20, 2003. They modeled the spectrum in the 0.5 - 10 keV range and found that the best fit solution corresponds to a combination of a 0.95 keV blackbody and powerlaw emission with a photon spectral index of \\(\\Gamma=1.4\\). The ratio of flux from the blackbody to the powerlaw in their model is about 10%. We use the Krauss et al. (2005) spectral model and assume that it holds for the other days covered by the RXTE data. This assumption is motivated by the fact that the relative normalization of different energy bands is approximately constant from day to day, although the overall flux at all wavelengths changes with time.
The spectral model of Krauss et al. (2005) motivates the use of two narrow bands in our pulse shape models. A low energy band is necessary in order to capture the blackbody component, so we choose the lowest possible XTE energy band at 2 - 3 keV. The spectrum in this band is dominated by the powerlaw component, but the blackbody contribution is still important. We also choose the 7 - 9 keV band as the highest energy band covered by the Chandra observation. In this high energy band the blackbody flux is negligible.

Figure 1.— Pulse profiles for XTE J1814 constructed with data from all days between June 5 - 27 (excluding X-ray bursts). Light curves for two energy bands, 2-3 keV and 7-9 keV, are shown.

Figure 2.— One-day pulse profiles for XTE J1814 constructed with data from days June 20 and June 27 (JD 2452810 and 2452817). Two energy bands are shown for each day.
Our method for modeling the observed emission is very similar to the method presented in Leahy et al. (2008). The spectral model has three components: (1) Comptonized flux in the high energy band (7-9 keV); (2) Comptonized flux in the low energy band (2-3 keV); and (3) blackbody flux in the low energy band. The observed flux for the \\(i\\)th component, \\(F_{i}\\), integrated over the appropriate observed energy band is given by
\\[F_{i}(E)=I_{i}\\eta^{3+\\Gamma_{i}}(1-a_{i}\\mu). \\tag{1}\\]
In equation (1), \\(I_{i}\\) is a constant amplitude, \\(\\eta\\) is the Doppler boost factor, \\(\\Gamma_{i}\\) is the photon spectral index in the star's rest frame, \\(\\mu\\) is the cosine of the angle between the normal to the star's surface and the initial photon direction, and the constant \\(a_{i}\\) describes the anisotropy of the emitted light. For a definition of \\(\\eta\\) as well as a more complete description of the modeling method, please see Leahy et al. (2008).
In our modeling, the amplitudes \\(I_{1}\\) and \\(I_{2}\\) are free parameters while the third amplitude \\(I_{3}\\) is defined through the constant \\(b=\\bar{F_{3}}/\\bar{F_{2}}\\), the ratio of the phase-averaged blackbody to Comptonized flux in the low-energy band. In the spectral model by Krauss et al. (2005), \\(b=0.1\\), but we include this parameter as a fitting parameter with \\(1\\sigma\\) limits from their spectral model. The photon spectral indices for the Comptonized components are fixed at \\(\\Gamma_{1}=\\Gamma_{2}=1.4\\) as given by their model. In the narrow range of the low-energy band the blackbody component of 0.95 keV can be modeled by a powerlaw with photon spectral index \\(\\Gamma_{3}=0.85\\). The anisotropy parameters for the Comptonized components \\(a_{1}=a_{2}=a\\) are assumed to be equal, and the parameter \\(a\\) is kept as a free parameter.
In the modeling of the non-accreting ms pulsars, it was found (Zavlin & Pavlov, 1998; Bogdanov et al., 2007, 2008) that a limb-darkened Hydrogen atmosphere spectral model is required by the data. It is reasonable to expect that the blackbody component of the spectrum should also be limb-darkened. We tested this hypothesis by multiplying the blackbody flux by a limb-darkening function of the form \(e^{-\tau/\mu}\). We then computed the best-fit neutron star models for two types of models: (1) models with non-zero optical depth \(\tau\) and (2) models with zero optical depth. The best-fit models for these two cases are almost identical: the mass and radius of the best-fit model change by less than 0.5% when a nonzero optical depth is added, and the value of \(\delta\chi^{2}=0.1\) when the limb-darkening is added. Since the changes in \(\chi^{2}\) and the physical parameters are negligible, we conclude that adding an extra parameter to model limb-darkening is not warranted by the data. The reason for this is the Chandra model, which restricts the blackbody contribution in the 2-3 keV band to only 10% of the Comptonized contribution, and effectively sets the blackbody component to zero in the high energy band. Since the Comptonized flux is dominant and has fan-beaming included, small changes to the anisotropy of the blackbody component do not affect the final models. For this reason we have set the anisotropy parameter for the blackbody component to zero (\(a_{3}=0\)). This is consistent with the results found for SAX J1808.4-3658 (Leahy et al., 2008), which also did not require any limb-darkening. The final set of free parameters describing the spectrum is \(I_{1}\), \(I_{2}\), \(b\) and \(a\).
In order to fit a set of light curves we also need to introduce a set of parameters describing the star and the emission geometry. These parameters are the mass \\(M\\) and equatorial radius \\(R\\) of the star, the co-latitude of the spot \\(\\theta\\), the inclination angle \\(i\\) as well as a free phase \\(\\phi\\). The radius of the circular spot (in the star's rest frame) is kept fixed at 1.5 km, as given by the Chandra spectral model (Krauss et al., 2005).
Our models make use of light curves for two different days' data, which requires a separate set of parameters for each day. However, on the two different days the parameters \\(M\\), \\(R\\) and \\(i\\) do not change. In order to simplify the analysis, we also assume that the photon spectral indices and the parameters \\(a\\) and \\(b\\) are also fixed. The full set of free parameters are: \\(\\{I_{1},I_{2},\\theta,\\phi\\}\\) for each day plus \\(M\\), \\(R\\), \\(i\\), \\(a\\) and \\(b\\), for a total of 13 free parameters. However, for each of our fits, the ratio \\(M/R\\) is kept fixed, so for any one value of \\(M/R\\), there are only 12 free parameters.
We use the oblate Schwarzschild approximation (Morsink et al., 2007) to connect photons emitted at the star's surface with those detected by the observer. In previous studies (Cadeau et al., 2005, 2007) we have shown that, to the accuracy required for extracting the parameters of a rapidly rotating neutron star, it is sufficient to use the Schwarzschild metric to compute the bending of light rays and the relative time delays of photons emitted at different locations on the star. The extra time delays and light bending caused by frame-dragging or higher order rotational corrections in the metric are negligible. However, the rotation of the star causes a deformation of the star into an oblate shape, which changes (relative to a sphere) the directions that photons can be emitted into. We have developed a simple approximation (Morsink et al., 2007) that allows an empirical fit to the oblate shape of a rotating star to be embedded in the Schwarzschild metric and make use of it in this analysis.
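For reference, the Schwarzschild bending-angle integral that underlies this approximation can be sketched as follows (geometric units with lengths in km and \(M\to GM/c^{2}\); the function name and example numbers are ours):

```python
import numpy as np
from scipy.integrate import quad

def bending_angle(alpha_em, M, R):
    """Deflection of a photon leaving radius R at angle alpha_em from the
    surface normal (Schwarzschild geometry; M, R in km)."""
    b = R * np.sin(alpha_em) / np.sqrt(1.0 - 2.0 * M / R)  # impact parameter
    integrand = lambda u: b / np.sqrt(1.0 - (1.0 - 2.0 * M * u) * (b * u) ** 2)
    psi, _ = quad(integrand, 0.0, 1.0 / R)   # substitution u = 1/r
    return psi

M_km = 1.4 * 1.477            # 1.4 solar masses expressed in km
print(np.degrees(bending_angle(np.radians(30.0), M_km, 12.0)))
```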
## 3. Results
### Evidence for a Second Spot
The pulse profiles in Figure 1 show a feature in the phase interval between 0.24 and 0.4. This feature is seen in all of the other energy bands as well. In order to investigate the nature of this feature, we restrict the analysis here to just the 7-9 keV light curve. Since the blackbody contribution in this energy band is negligible, we use a simplified model which only includes the Comptonized component of the radiation.
The simplest model for the emission is a single spot. We fitted the 7-9 keV light curve shown in Figure 3 with a single spot model by first fixing \\(2M/R=0.4\\) and varying the parameters \\(M,i,\\theta,a,I\\) and \\(\\phi\\). (Similar results are obtained for other values of \\(2M/R\\).) Since there are 32 data points this corresponds to 25 degrees of freedom. The best fit solution for a single spot model has \\(\\chi^{2}=50.7\\), which is not a very good fit. This best-fit solution is shown as a solid curve in Figure 3.
We now turn to a two-spot model, where the second spot is allowed to have an arbitrary location relative to the first spot, but the spectrum is assumed to be the same. The introduction of a second spot introduces three new parameters to the model: an intensity and two angles. The resulting fit for \\(2M/R=0.4\\) has \\(\\chi^{2}=28.3\\) for 23 degrees of freedom, which is a significant improvement. The light curve for this model is shown with a dotted curve in Figure 3. The mass for this model is \\(2.04~{}M_{\\odot}\\). In this model, the second spot's location is situated so that the second spot is almost never seen and its light only contributes during the phase interval between 0.24 and 0.4. Outside of this interval the light received only originates from the primary spot. This suggests a simpler one-spot model where bins in the phase interval 0.24 to 0.4 (marked with a circle in Figure 3) are removed from the data set. This reduces the degrees of freedom to 19 (32-6 data points and 5 parameters for a one-spot model). The resulting best-fit model for \\(2M/R=0.4\\) has \\(\\chi^{2}=19.2\\), which is also a significant improvement from the one-spot model that uses all of the data. The mass for this model is \\(2.08M_{\\odot}\\) and the light curve for this model is shown as a dashed line in Figure 3.
Comparing the two-spot model with the one-spot model with the circle-bins excluded, we see that the difference in \\(\\chi^{2}\\) is not significant, and there is little change between the best-fit values of mass and radius. This leads us to the conclusion that there is good evidence for a second spot (or a feature that mimics a second spot), but that the amount of data encoded in those bins affected by the second spot is not sufficient to allow us to model the details of the second spot with any confidence. Since the inclusion or exclusion of the second spot does not change the best-fit values of the main physical parameters of the star \\((M,R,i)\\) it is more appropriate to choose the simpler one-spot model. The qualitative results do not change when we look at different energy bands. As a result, for the remaining modeling reported in this paper we only use one-spot models where data in the 0.24 - 0.4 phase range is removed from the analysis.
### Best-fit Models using Data from All Days
Our procedure for modeling the two-energy band data shown in Figure 1 is to assume a one-spot model that includes both blackbody and Comptonized emission. Data in the phase period between 0.24 and 0.4 is omitted, as described in section 3.1. We do a number of fits, each with a fixed value of \\(2M/R\\). (Fixing the ratio of \\(M/R\\) simplifies the fitting procedure since the light-bending and time delays depend on this ratio.) Once \\(2M/R\\) is fixed, all other parameters are allowed to vary and the minimum value of \\(\\chi^{2}\\) is found. In addition to the parameters described in Section 2.2, we also added two parameters corresponding to DC offsets for the two energy bands. This allows for small errors in the background subtraction. Once \\(2M/R\\) is fixed, we have a total of 10 free parameters: \\(M\\), \\(\\theta\\), \\(i\\), \\(a\\), \\(b\\), 2 amplitudes, 2 DC offsets and one overall phase. (The parameter \\(b\\) is restricted to have a value that is within 1 \\(\\sigma\\) of the value found by Krauss et al. (2005).) Since each energy band has 32 points, but we exclude 6 of these points we have \\(64-12-10=42\\) degrees of freedom.
For a fixed value of \\(2M/R\\), we find that there are two local minima. These two minima are shown in Table 1 and we label these two best-fit solutions as the high and low mass solutions. The lowest value of \\(\\chi^{2}\\) corresponds to the high mass solution and the lower mass solution has a higher value of \\(\\chi^{2}\\). Although the high mass solution is a better fit, we exclude this solution on physical grounds. First of all, it requires a neutron star radius of 28 km, which is not allowed by any known equation of state. Secondly, once the neutron star mass and the inclination angle are known, the companion's mass can be calculated (shown in the column labeled \\(M_{c}\\) in Table 1). In the high mass case the companion's mass is \\(1.7M_{\\odot}\\). Due to the dim nature of the companion, Krauss et al. (2005) have shown that the companion (if a main sequence star) would have to have a mass that is no bigger than \\(0.5M_{\\odot}\\). Clearly this excludes the high mass solution but allows the lower mass solution. For these reasons, we exclude the high mass solutions.
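The companion-mass check described above can be sketched by inverting the binary mass function \(f=(M_{c}\sin i)^{3}/(M+M_{c})^{2}\). The value \(f\approx 2.0\times 10^{-3}M_{\odot}\) used below is an assumption for illustration, roughly the value implied by the published orbit of XTE J1814; with it, the \(M_{c}\) column of Table 1 is approximately reproduced:

```python
# A sketch of the companion-mass calculation: given the neutron star mass M
# and inclination i from a fit, solve the binary mass function
# f = (Mc sin i)^3 / (M + Mc)^2 for Mc.  The mass function value below
# (~2.0e-3 Msun) is an illustrative assumption.
import numpy as np
from scipy.optimize import brentq

def companion_mass(M, i_deg, f_msun=2.0e-3):
    """Solve the mass function for the companion mass Mc (solar masses)."""
    sini = np.sin(np.radians(i_deg))
    g = lambda Mc: (Mc * sini) ** 3 / (M + Mc) ** 2 - f_msun
    return brentq(g, 1e-4, 10.0)   # bracket Mc between ~0 and 10 Msun

# High- and low-mass solutions from Table 1 (2M/R = 0.3):
print(companion_mass(2.86, 11.8))  # ~1.7 Msun (excluded: companion too dim)
print(companion_mass(1.95, 25.0))  # ~0.55 Msun
```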
In Table 2 the best-fit solution for each value of \(2M/R\) is shown. In each case only the lower mass solution is shown. The best-fit solution shown as a solid curve in Figure 1 corresponds to the \(2M/R=0.4\) solution shown in Table 2. Although we call these solutions "lower mass", clearly they still correspond to high mass neutron stars. In Table 2, only solutions for \(2M/R=0.3\) to 0.6 are shown. In the case of less compact stars, such as \(2M/R=0.2\), the only solution corresponded to the "high mass" branch of unphysical solutions. We did not test solutions that were more compact than \(2M/R=0.6\) since these solutions would allow spots to have multiple images, and our program is unable to handle multiple images. Technically, the solutions with \(2M/R=0.6\) could have spots with multiple images, but we have checked that the relative values of \(\theta\) and \(i\) do not lead to this problem for our solutions for the most compact stars.
In Figure 4 we show a number of mass-radius curves for stars spinning at 314 Hz, as well as the 2- and 3-\(\sigma\) confidence regions for the "low mass" solutions.
\\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \hline
Model & \(2M/R\) & \(M\) & \(R\) & \(\theta\) & \(i\) & \(a\) & \(M_{c}\) & \(\chi^{2}\)/dof \\
 & & \(M_{\odot}\) & km & deg. & deg. & & \(M_{\odot}\) & \\ \hline
High Mass & 0.3 & 2.86 & 28.7 & 66.8 & 11.8 & 0.81 & 1.70 & 55.9/42 \\
Low Mass & 0.3 & 1.95 & 19.8 & 42.4 & 25.0 & 0.59 & 0.55 & 61.0/42 \\ \hline
\end{tabular}
\\end{table}
Table 1. Comparison of the Two Minima.
Figure 3.— Models used to fit the data in the 7-9 keV band (data combined from all 23 days). A one-spot model that uses all data points (both squares and circles) results in the best fit solid curve with \\(\\chi^{2}/\\)dof = 50.7/25. A two-spot model that uses all data points results in the best fit dotted curve with \\(\\chi^{2}/\\)dof = 28.3/23. A one-spot model that omits the circle bins results in the best fit dashed curve with \\(\\chi^{2}/\\)dof = 19.2/19.
\\(\\sigma\\) confidence regions for the \"low mass\" solutions. (In this figure, the radius \\(R\\) refers to the equatorial radius.) These regions are found by fixing the value of \\(2M/R\\), and varying all other parameters and finding the solutions that have \\(\\chi^{2}\\) larger than the global minimum of \\(\\chi^{2}_{min}=61.0\\) by \\(\\delta\\chi^{2}=4\\) (\\(2~{}\\sigma\\)) or \\(9\\) (\\(3~{}\\sigma\\)). The region allowed with 99.7% confidence (\\(3~{}\\sigma\\)) only includes large stars with high mass. Of the equations of state displayed in Figure 4, the only one lying in the \\(3~{}\\sigma\\) allowed region is L, corresponding to pure neutron matter computed in a mean field approximation (Pandharipande et al., 1976). A pure neutron core is unlikely to be the correct description of supra-nuclear density matter. However it is possible for an EOS that includes some softening due to the presence of other species to be allowed by this data.
### Best-fit Models Using Data From 2 Days
It is possible for variability of the data to affect the fit results, as appears to be the case for SAX J1808 (Leahy et al., 2008). For this reason we have rebinned the data into one-day segments in order to see if there is any significant change in the pulse shape on a day to day basis. We performed a \\(\\chi^{2}\\) test to see how closely each day's data matched the other days' data. Comparisons of one day with an adjacent day gave values of \\(\\chi^{2}\\)/dof ranging from 0.8 to 1.6, indicating day-to-day changes are small. The largest change is between day 810 (June 20) and day 817 (June 27) with \\(\\chi^{2}\\)/dof = 4.8. Light curves in the two narrow energy bands for these two days are shown in Figure 2.
The data corresponding to these two days are binned into 32 bins per period. Since we continue to remove the 6 bins corresponding to the second "spot", this corresponds to a total of 104 data points. We fit the data for these two days by assuming that the parameters \(M,R,i,a\) are the same for both days. The spot's latitude is allowed to vary, as are the amplitudes of the energy bands and the value of phase. Since the DC contributions found in our previous fits are very small, we do not include terms for DC offsets. In addition, we keep the value of \(b\) fixed at the Krauss et al. (2005) value in order to simplify the fits. This corresponds to 12 parameters, and a total of 92 degrees of freedom.
The best-fit solutions for this 2-day joint fit are shown in Table 3. For each fixed value of \\(2M/R\\) we only found one minimum, unlike the case with all days included. The angular locations of the spot on the two days are labeled \\(\\theta_{1}\\) and \\(\\theta_{2}\\). The change in angular location of the spot between the two days is less than \\(2^{\\circ}\\) in all cases. The solutions continue to have large masses and radii, as in the case of fits using all of the data. In the case of \\(2M/R=0.2\\) a low mass (\\(1.2~{}M_{\\odot}\\)) solution is allowed, but it has a very large radius. The 2- and 3-\\(\\sigma\\) confidence regions for the two-day joint fits are shown in Figure 5. The 3-\\(\\sigma\\) confidence region is somewhat larger than the same region computed using all of the data, but the two methods have a significant overlap. The 2-day joint fit also only allows the stiffest EOS L.
### Dependence of Models on Assumed Parameters
The models in this paper depend on the results of the spectral models of Krauss et al. (2005). We now consider the effect of allowing the parameters in the spectral model to vary within the error bars. As mentioned in Section 2.2 we already allow the ratio of the blackbody to powerlaw components (\\(b\\)) to vary within the \\(1\\sigma\\) limits given by the Krauss et al. (2005) spectral model.
\\begin{table}
\\begin{tabular}{r r r r r r r r} \\hline \\hline \\(2M/R\\) & \\(M\\) & \\(R\\) & \\(\\theta\\) & \\(i\\) & \\(a\\) & \\(M_{c}\\) & \\(\\chi^{2}\\)/dof \\\\ & \\(M_{\\odot}\\) & km & deg. & deg. & & \\(M_{\\odot}\\) & \\\\ \\hline
0.3 & 1.95 & 19.8 & 42.4 & 25.0 & 0.59 & 0.55 & 61.0/42 \\\\
0.4 & 2.45 & 18.4 & 47.0 & 24.2 & 0.61 & 0.65 & 61.2/42 \\\\
0.5 & 2.38 & 14.2 & 36.7 & 39.4 & 0.59 & 0.39 & 61.9/42 \\\\
0.6 & 2.42 & 11.9 & 40.9 & 42.8 & 0.59 & 0.37 & 62.6/42 \\\\ \\hline \\end{tabular}
\\end{table}
Table 2. Best-fit solutions using data from all days.
Figure 4.— Best-fit mass and radius values using the combined data from all days, separated into two narrow energy bands. Contours shown are for 2- and 3-\(\sigma\) confidence levels. Mass-Radius curves for stars spinning at 314 Hz are shown as solid curves. The EOS shown are: APR (Akmal et al., 1998), BBBI (Baldo et al., 1997), ABPR2-3 (Alford et al., 2005), H3-7 (Lackey, Nayyar, & Owen, 2006) and L (mean-field theory, pure neutrons; Pandharipande et al., 1976).
Figure 5.— Best-fit mass and radius values using data only from day 810 and 817, separated into two narrow energy bands. Contours shown are for 2- and 3-\\(\\sigma\\) confidence levels. EOS labels are the same as in Figure 4.
Another spectral parameter that could affect the fits is the photon spectral index \\(\\Gamma\\) for the powerlaw component of the spectrum. Krauss et al. (2005) found a value of \\(\\Gamma=1.41\\pm 0.06\\) and in all of our models presented in the previous section we kept the photon spectral index fixed at a value of \\(\\Gamma=1.40\\). We would expect that a larger value of \\(\\Gamma\\) would allow for smaller stars. This is because the flux in Equation (1) is proportional to the Doppler boost factor \\(\\eta\\) raised to the power \\(\\Gamma+3\\). The Doppler boost factor is mainly responsible for introducing higher harmonics into the signal, so a larger value of \\(\\Gamma\\) creates a more asymmetric pulse shape. In order to compensate, the best-fit solution will require a smaller value of stellar radius \\(R\\) in order to decrease the value of \\(\\eta\\). In order to test the dependence of the best-fit values of the parameters on \\(\\Gamma\\), we chose a value of \\(\\Gamma=1.50\\) which is somewhat larger than the range allowed by Krauss et al. (2005) and fit the data using the same method described earlier in this paper. The results of the best-fit parameters for the two values of \\(\\Gamma\\) for the case of \\(2M/R=0.4\\) are shown in Table 4. As expected, increasing \\(\\Gamma\\) allowed for a smaller star, but the decrease is only by 3%. Similar results occur for other values of \\(M/R\\). Clearly the dependence on the photon spectral index is not sensitive enough to affect the resulting large size of the best-fit stars.
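A back-of-the-envelope version of this argument (ignoring the light-bending and gravitational redshift that the full model includes) shows how the \(\eta^{\Gamma+3}\) scaling couples the pulse asymmetry to both \(R\) and \(\Gamma\); the projection factor below is an arbitrary illustrative choice:

```python
# Rough illustration (not the paper's full ray-tracing model) of why the
# pulse asymmetry grows with stellar radius: the observed flux scales as the
# Doppler factor eta to the power Gamma + 3, and eta depends on the spot's
# rotational velocity v = 2*pi*R*nu_spin.
import numpy as np

C = 2.998e8          # m/s
NU_SPIN = 314.0      # Hz, XTE J1814 spin frequency

def doppler_flux_ratio(R_km, gamma_ph=1.4, proj=0.5):
    """Flux ratio at maximum approach vs. recession for a spot whose
    line-of-sight velocity is proj * v_eq (proj lumps in sin(i), colatitude)."""
    beta = proj * 2 * np.pi * (R_km * 1e3) * NU_SPIN / C
    lorentz = 1.0 / np.sqrt(1 - beta ** 2)
    eta_app = 1.0 / (lorentz * (1 - beta))   # approaching
    eta_rec = 1.0 / (lorentz * (1 + beta))   # receding
    return (eta_app / eta_rec) ** (gamma_ph + 3)

for R in (12.0, 18.0):
    print(R, doppler_flux_ratio(R))            # larger R -> larger asymmetry
print(doppler_flux_ratio(18.0, gamma_ph=1.5))  # larger Gamma -> more asymmetry
```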
In our models we keep the spot size (as measured on the star's surface) fixed at a diameter of 3 km. In previous modeling (Leahy et al., 2008) of SAX J1808.4-3658 we found that the final values of the best-fit parameters were not sensitive to changes in the spot size, assuming that the spot is small compared to the size of the star. For this reason we have kept the spot size fixed at 3 km for all models in this analysis. We have also assumed the hot spot is circular, although recent MHD models (Kulkarni & Romanova, 2005) suggest more complicated spot shapes. In this paper we have only attempted to use the simplest possible model that is still consistent with the data. Adding extra parameters to our models in order to describe more complicated spot shapes is not yet warranted by the quality of the data.
For all models computed so far, we have made use of an empirical formula for the oblate shape of the star, and have included the relative time-delays for photons emitted from different parts of the star. In Table 4 the effects of oblateness and time-delays on the fits are shown. For the model labeled \"sphere\", a spherical initial surface was assumed, but relative time-delays were included in the computation. The resulting best-fit solution is about 10% larger than the corresponding oblate model (labeled \"oblate\" and \\(\\Gamma=1.4\\) in Table 4). This shrinkage of the star's radius when oblateness is included has been observed in the modeling of SAX J1808.4-3658 (Leahy et al., 2008). For the model labeled \"no td\" time-delays were omitted from the calculation and a spherical surface was used. Comparison of the two spherical models in Table 4 shows that the model that includes time-delays is about 3% smaller than the model that omits time-delays.
### Comparison with X-ray Burst Data
In our analysis of XTE J1814 we have only included data from the accretion-powered pulsations and have omitted any data corresponding to an X-ray burst. Bhattacharyya et al. (2005) analyzed the light curves constructed from the X-ray bursts for this neutron star. In their analysis they assumed a spherical surface for the star and traced the paths of the X-rays using the Kerr metric. They also made use of a limb-darkened blackbody emission (2 keV) spectral model appropriate for X-ray bursts. Due to the method that they adopted, it was necessary for them to assume one of two different equations of state. The stiffer EOS used by Bhattacharyya et al. (2005) is the same as the EOS that we label "APR" and is the A18+\(\delta v\)+UIX model computed by Akmal et al. (1998). The analysis of the X-ray bursts by Bhattacharyya et al. (2005) allows the APR EOS, while our analysis of the accretion-powered pulsations only allows stiffer EOS. It is difficult to determine from their results whether the X-ray burst data are also consistent with a very stiff EOS like that indicated by our analysis.
Watts et al. (2005) provide a detailed analysis of many aspects of both the X-ray bursts and the non-burst emission. One of the quantities that they measured was the fractional amplitude of the pulsations at the fundamental frequency and the first harmonic for both the burst and non-burst emission. They found that the non-burst emission (modeled in this paper) has a larger harmonic content than the burst emission studied by Bhattacharyya et al. (2005). Since Doppler boosting is partially responsible for the harmonic content, it is perhaps not surprising that our analysis of the non-burst pulsations implies a larger Doppler factor for the star, which in turn implies a larger radius than a similar analysis for the X-ray burst oscillations.
## 4. Discussion
We have analyzed the accretion-powered (non-X-ray burst) pulsations of XTE J1814 using a hot spot model. Our modeling includes (1) an isotropic blackbody spectral component; (2) an anisotropic Comptonized component; (3) relativistic time-delays; (4) the oblate shape of the star due to rotation. The model presented in this paper is the simplest possible model that is consistent with the data. The resulting best-fit models favor stiff equations of state, as can be seen from the 3-\\(\\sigma\\) allowed regions in Figures 4 and 5. In Figure 4 all data from a 23 day period of the 2003 outburst were included, while for Figure 5 data from only 2 days were included. The allowed regions for the two data sets differ slightly, but both only allow equations of state that are stiffer than EOS APR (Akmal et al., 1998).
It is interesting that a large mass has been inferred for the pulsar NGC 6440B (Freire et al., 2008) through measurements of periastron precession.
\\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
Model & \(\Gamma\) & \(\frac{2M}{R}\) & \(M\) & \(R\) & \(\theta_{1}\) & \(\theta_{2}\) & \(i\) & \(a\) & \(M_{c}\) & \(\chi^{2}\)/dof \\
 & & & \(M_{\odot}\) & km & deg. & deg. & deg. & & \(M_{\odot}\) & \\ \hline
oblate & 1.4 & 0.4 & 2.13 & 16.1 & 35.1 & 36.6 & 33.6 & 0.55 & 0.43 & 125.4/92 \\
oblate & 1.5 & 0.4 & 2.07 & 15.6 & 34.5 & 35.9 & 34.3 & 0.55 & 0.41 & 126.6/92 \\
sphere & 1.4 & 0.4 & 2.38 & 17.6 & 32.8 & 34.3 & 31.6 & 0.54 & 0.49 & 124.7/92 \\
no td & 1.4 & 0.4 & 2.45 & 18.5 & 49.0 & 50.9 & 21.1 & 0.60 & 0.76 & 126.8/92 \\ \hline
\end{tabular}
\\end{table}
Table 4. Dependence of Models on Parameters. Joint fits for two energy bands for two separate days (810 and 817).
Assuming that the observed periastron precession is purely from relativistic effects, the pulsar's mass is \(M=2.74\pm 0.21M_{\odot}\) (1-\(\sigma\) error bars) (Freire et al., 2008). If the mass really is this high, it would be consistent with the stiff equations of state allowed by our analysis of XTE J1814. However, it is still possible that the large periastron precession observed for NGC 6440B could be caused by a very rapidly rotating companion (Freire et al., 2008), in which case the pulsar's mass would be smaller and compatible with more moderate equations of state. A high mass for SAX J1808.4-3658 is also inferred from observations during its quiescent state (Heinke et al., 2007). Modelling of the neutron star X7 in 47 Tuc (Heinke et al., 2006) allows for a high-mass neutron star, although for X7 a low-mass neutron star is also allowed.
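For reference, the total system mass implied by a purely relativistic periastron advance follows from the standard post-Keplerian relation. The sketch below uses orbital numbers of roughly the size reported for NGC 6440B; treat them as illustrative placeholders rather than the measured values:

```python
# Sketch of the relativistic periastron-advance mass estimate: for a purely
# relativistic advance, omega_dot fixes the TOTAL system mass.  The numbers
# below (Pb ~ 20.55 d, e ~ 0.57, omega_dot ~ 0.0039 deg/yr) are illustrative.
import numpy as np

T_SUN = 4.925490947e-6            # G*Msun/c^3 in seconds
Pb = 20.55 * 86400.0              # orbital period [s]
e = 0.57
omega_dot = np.radians(0.0039) / (365.25 * 86400.0)   # [rad/s]

# Invert omega_dot = 3 (Pb/2pi)^(-5/3) (T_sun*Mtot)^(2/3) / (1 - e^2)
Mtot = (omega_dot * (1 - e**2) / 3.0) ** 1.5 * (Pb / (2 * np.pi)) ** 2.5 / T_SUN
print(f"total mass ~ {Mtot:.2f} Msun")   # ~2.9, minus a light companion
```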
Similar hot spot models of SAX J1808.4-3658 imply a soft equation of state and a column model for Her X-1 also implies a soft EOS (see Leahy et al. (2008) and Leahy (2004) for details). The best-fit pulse-shape models found for XTE J1814 have mass and radius incompatible with the 3-\\(\\sigma\\) allowed regions of mass and radius found for SAX J1808.4-3658 or Her X-1. Some possible interpretations could be (1) time-variations in the pulse profile of SAX J1808.4-3658 led to an underestimate of the star's radius; (2) the equation of state for dense matter has a two-phase nature allowing both large and small compact stars; or (3) the simple hot-spot model doesn't describe one or more of these pulsars.
The first reason is a factor for SAX J1808.4-3658. It was demonstrated in Leahy et al. (2008) that the analysis of the pulse profile averaged over a long observation gave significantly different results than the pulse profile obtained from a much shorter observation. Supporting this conclusion is a recent analysis by Hartman et al. (2008) of data from the 1998, 2002 and 2005 outbursts of SAX J1808.4-3658 showing a great deal of variation in the pulse shape over time (see Figure 3 of Hartman et al. (2008)). The pulse-shape analysis by Leahy et al. (2008) only made use of the data from the 1998 outburst. The 1998 data is very sinusoidal in nature and has very little harmonic content. The results of Hartman et al. (2008) show that the later outbursts have a stronger harmonic content. Since a larger radius star can produce a stronger harmonic content, it is possible that the addition of data from 2002 and 2005 will alter the conclusions of Leahy et al. (2008) about SAX J1808. But the effect of pulse shape variability is not a factor for Her X-1 where the pulse shape has high stability.
The second reason above, i.e. a bimodal equation of state, is a possible, but speculative, solution to the greatly different allowed regions for M and R. In this scenario, there is still only one baryonic equation of state and one quark matter EOS, but above a certain critical density, \(\rho_{crit}\), the whole star makes a transition from baryonic matter to quark matter. Then for stars with central density \(\rho_{c}\) having \(\rho_{c}<\rho_{crit}\), the M vs. R relation follows a stiff baryonic EOS, somewhat like EOS L, whereas for \(\rho_{c}>\rho_{crit}\) the star has converted to a quark star and lies on a quark matter EOS curve in the mass-radius diagram. In the case of EOS L, the 3\(\sigma\) region with \(M<2.7M_{\odot}\) would require a value of \(\rho_{crit}\sim 10^{15}\) g cm\({}^{-3}\). Since quark matter EOS curves have a lower maximum mass than baryonic EOS curves, any baryonic star that makes the transition and has mass above the quark star maximum mass must lose mass to end up as a stable quark star. The mass loss depends on the physics of the transition process and is likely to vary from star to star. In this scenario, we interpret XTE J1814 to lie on the baryonic branch of the mass vs. radius diagram and SAX J1808.4-3658 and Her X-1 to lie on the quark matter branch.
The third listed reason for the discrepancy, that the emission region models are too simple to represent the actual emission regions on the stars, is a definite possibility. For instance, if the emission is coming from the magnetosphere, then our models are incorrect. Alternatively, the emission may arise from surface spots, but the region's shape might be more complicated than a circle. This can only be tested by constructing more complex models and applying them to the observed pulse shapes. However, a more complex model with more parameters requires better pulse-shape data to constrain those parameters. Future work is planned to explore more complex emission models, and to test whether these resolve the apparent discrepancy in mass and radius for different pulsars.
This research was supported by grants from NSERC to D. Leahy and S. Morsink. Y. Chung and Y. Chou acknowledge partial support from Taiwan National Science Council grant NSC 95-2112-M-008-026-MY2. We also thank the Theoretical Physics Institute at the University of Alberta for supporting D. Leahy's visits to the University of Alberta.
## References
* Akmal et al. (1998) Akmal, A., Pandharipande, V. R., & Ravenhall, D. G. 1998, Phys. Rev. C, 58, 1804
* Alford et al. (2005) Alford, M., Braby, M., Paris, M., & Reddy, S. 2005, ApJ 629, 969
* Baldo et al. (1997) Baldo, M., Bombaci, I., & Burgio, G. F., 1997, A&A, 328, 274
* Bhattacharyya et al. (2005) Bhattacharyya, S., Strohmayer, T. E., Miller, M. C., & Markwardt, C. B. 2005, ApJ, 619, 483
* Bogdanov et al. (2007) Bogdanov, S., Rybicki, G. B., & Grindlay, J. E. 2007, ApJ, 670, 668
* Bogdanov et al. (2008) Bogdanov, S., Grindlay, J. E., & Rybicki, G. B. 2008, ArXiv e-prints, 801, arXiv:0801.4030
* Cadeau et al. (2005) Cadeau, C., Leahy, D. A., & Morsink, S. M. 2005, ApJ, 618, 451
* Cadeau et al. (2007) Cadeau, C., Morsink, S. M., Leahy, D. A., & Campbell, S. S. 2007, ApJ, 654, 458
* Chung (2007) Chung, Y. 2007, Study of Orbital and Pulsation Properties of Accretion-Powered Millisecond Pulsar XTE J1814-338, MSc. Thesis, National Central University, Taiwan.
* Freire et al. (2008) Freire, P. C., et al. 2008, ApJ, 675, 670
* Gierlinski et al. (2002) Gierlinski, M., Done, C., & Barret, D. 2002, MNRAS, 331, 141
* Hartman et al. (2008) Hartman, J. M., et al. 2008, ApJ, 675, 1468
* Heinke et al. (2006) Heinke, C. O., Rybicki, G. B., Narayan, R., & Grindlay, J. E. 2006, ApJ, 644, 1090
* Heinke et al. (2007) Heinke, C. O., Jonker, P. G., Wijnands, R., & Taam, R. E. 2007, ApJ, 660, 1424
* Krauss et al. (2005) Krauss, M. I., et al. 2005, ApJ, 627, 910
* Kulkarni & Romanova (2005) Kulkarni, A. K., & Romanova, M. M. 2005, ApJ, 633, 349
* Lackey et al. (2006) Lackey, B. D., Nayyar, M., & Owen, B. J. 2006, Phys. Rev. D, 73, 024021
* Leahy (2004) Leahy, D. A. 2004, ApJ, 613, 517
* Leahy et al. (2008) Leahy, D. A., Morsink, S. M., & Cadeau, C. 2008, ApJ, 672, 1119
* Markwardt & Swank (2003) Markwardt, C. B., & Swank, J. H. 2003, IAU Circ., 8144, 1
* Markwardt et al. (2003) Markwardt, C. B., Strohmayer, T. E., & Swank, J. H. 2003, Astron. Telegram, 164, 1
* Morsink et al. (2007) Morsink, S. M., Leahy, D. A., Cadeau, C. & Braga, J. 2007, ApJ, 663, 1244
* Pandharipande et al. (1976) Pandharipande, V. R., Pines, D., & Smith, R. A. 1976, ApJ, 208, 550
* Papitto et al. (2007) Papitto, A., di Salvo, T., Burderi, L., Menna, M. T., Lavagetto, G., & Riggio, A. 2007, MNRAS, 375, 971
* Pechenick et al. (1983) Pechenick, K. R., Ftaclas, C., & Cohen, J. M. 1983, ApJ, 274, 846
* Poutanen & Gierlinski (2003) Poutanen, J. & Gierlinski, M. 2003, MNRAS, 343, 1301
* Strohmayer et al. (2003) Strohmayer, T. E., Markwardt, C. B., Swank, J. H., & in't Zand, J. 2003, ApJ, 596, L67
* Sunyaev & Titarchuk (1985) Sunyaev, R. A., & Titarchuk, L. G. 1985, A&A, 143, 374
* Watts et al. (2005) Watts, A. L., Strohmayer, T. E., & Markwardt, C. B. 2005, ApJ, 634, 547
* Watts & Strohmayer (2006) Watts, A. L., & Strohmayer, T. E. 2006, MNRAS, 373, 769
* Watts et al. (2008) Watts, A. L., Patruno, A., & van der Klis, M. 2008, ArXiv e-prints, 805, arXiv:0805.4610
* Wijnands & van der Klis (1998) Wijnands, R., & van der Klis, M. 1998, Nature, 394, 344
* Zavlin & Pavlov (1998) Zavlin, V. E., & Pavlov, G. G. 1998, A&A, 329, 583
Subject headings: stars: neutron -- stars: rotation -- X-rays: binaries -- relativity -- pulsars: individual: XTE J1814-338
Accepted by ApJ: September 26, 2008
# Gravitational Instabilities, Chondrule Formation, and the FU Orionis Phenomenon
Aaron C. Boley\({}^{1,2}\) & Richard H. Durisen\({}^{1}\)
Footnote 1: affiliation: Astronomy Department, Indiana University, 727 E. Third St., Bloomington, IN 47405-7105
Footnote 2: affiliation: Institute for Theoretical Physics, University of Zurich, Winterthurerstrasse 190, Zurich, CH-8057, Switzerland; [email protected]
## 1 Introduction
### GIs, MRI, FU Orionis Outbursts, and Chondrules
Gravitational instabilities can activate in a disk when the Toomre (1964) parameter \(Q=c_{s}\kappa/\pi G\Sigma\lesssim 1.7\) for a thick disk (see Durisen et al., 2007), where \(c_{s}\) is the sound speed, \(\Sigma\) is the surface density, and \(\kappa\) is the epicyclic frequency. As indicated by \(Q\), a disk is unstable against GIs when it is cold and/or massive. The resulting spiral waves driven by self-gravity efficiently transfer angular momentum outward and mass inward (e.g., Lynden-Bell & Kalnajs 1972; Durisen et al. 1986). Another mechanism that can efficiently transfer angular momentum outward is the magnetorotational instability (MRI; see Balbus & Hawley 1991; Desch 2004). In contrast to GIs, the MRI only requires a weak magnetic field coupled to the gas. These mechanisms, either separately or in combination, are likely to be the principal way T Tauri stars accrete gas from a disk (Hartmann et al. 2006).
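As a concrete illustration of this criterion, the sketch below evaluates \(Q\) for a Hayashi-type minimum-mass solar nebula; the \(\Sigma\) and \(T\) power laws are the commonly assumed MMSN profiles, not values from this paper:

```python
# Minimal sketch of the Toomre criterion for an assumed MMSN-like disk:
# Sigma ~ 1700 (r/AU)^-1.5 g cm^-2 and T ~ 280 (r/AU)^-0.5 K (illustrative).
import numpy as np

G, K_B, M_H, M_SUN, AU = 6.674e-8, 1.381e-16, 1.673e-24, 1.989e33, 1.496e13  # cgs

def toomre_q(r_au, mstar=1.0, mu=2.33, gamma=1.4):
    r = r_au * AU
    sigma = 1700.0 * r_au ** -1.5                  # surface density, g cm^-2
    temp = 280.0 * r_au ** -0.5                    # midplane temperature, K
    cs = np.sqrt(gamma * K_B * temp / (mu * M_H))  # sound speed
    kappa = np.sqrt(G * mstar * M_SUN / r ** 3)    # epicyclic = Keplerian Omega
    return cs * kappa / (np.pi * G * sigma)

for r in (1.0, 5.0, 10.0):
    print(r, toomre_q(r))   # an MMSN has Q >> 1.7 everywhere, i.e. GI-stable
```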
In order for the MRI to occur, ionized species must be present in the gas phase. Thermal ionization of alkalis occurs wherever \\(T\\gtrsim 1000\\) K, but depletion of ions by dust grains may move the temperature threshold closer to \\(T\\sim 1700\\) K (Desch 1999; Sano et al. 2000), where the dust sublimates completely. Elsewhere, the ionization must be driven by a nonthermal source, e.g., energetic particles (EPs). For this discussion, EPs refers to any particles that could ionize the gas, e.g., X-rays. Gammie (1996) proposed that disks may have active and inactive MRI layers due to attenuation of EPs by the gas. In the inner regions of a disk (at radii \\(\\sim\\) few AU) where the column densities are large, MRI may only be active in a thin layer, resulting in _layered accretion_. As one moves outward, column densities drop, and the entire disk can become MRI active. The region where the MRI is mostly absent is called the _dead zone_. EPs are attenuated by a surface density of only about 100 g cm\\({}^{-2}\\) (Stepinski 1992), and so even a minimum mass solar nebula (MMSN) will likely exhibit layered accretion (Desch 2004). Even if mass accretion is only reduced and not altogether halted as a result of Reynolds stresses (Fleming & Stone 2003; Oishi et al. 2007), mass may still pile up in the dead zone. If enough mass accumulates, then even for an otherwise low-mass disk, GIs can activate.
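A minimal sketch of this layered-accretion bookkeeping, assuming the \(\sim\)100 g cm\({}^{-2}\) stopping column quoted above and an MMSN-like surface density profile:

```python
# Layered accretion: if energetic particles are stopped by ~100 g cm^-2
# (Stepinski 1992), only that much column on each disk face is MRI-active;
# the remainder is the dead zone.
def dead_zone_fraction(sigma, sigma_active=100.0):
    """Fraction of the local column density (g cm^-2) that is MRI-dead."""
    active = min(sigma, 2.0 * sigma_active)   # one active layer per disk face
    return 1.0 - active / sigma

for r_au in (1.0, 5.0, 20.0):
    sigma = 1700.0 * r_au ** -1.5             # assumed MMSN-like profile
    print(r_au, round(sigma, 1), round(dead_zone_fraction(sigma), 2))
```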
The FU Orionis phenomenon is characterized by a rapid (1-10s yr) increase in optical brightness of a young T Tauri object, typically by 5 magnitudes, and is driven by sudden mass accretion of the order \\(10^{-4}M_{\\odot}\\) yr\\({}^{-1}\\) from the inner disk onto the star (Hartmann & Kenyon 1996). Because FU Ori objects appear to have decay timescales of about 100 yr, the entire mass of a MMSN (\\(\\sim 0.01\\)\\(M_{\\odot}\\)) can be accreted onto the star. To date, the best explanation for the optical outburst is a thermal instability (e.g., Bell & Lin 1994; Kley & Lin 1999; see discussions in Hartmann & Kenyon 1996, Green et al. 2006; and Zhu et al. 2007). Armitage et al. (2001) suggested that GIs in a bursting dead zone might be able to trigger an FU Ori outburst by rapidly increasing the accretion into the inner disk (\\(\\lesssim 0.1\\) AU) and initiating an MRI through thermal ionization. Hartmann (2007, in private communication) and Zhu et al. (2007) also suggest that the heating due to gravitational torques might drive an MRI for \\(r>0.1\\) AU, which would then feed mass inside 0.1 AU until a thermal instability sets in. The FU Ori phenomenon may be a result of a _cascade_ of instabilities, starting with a burst of GI activity in a dead zone, followed by accretion due to an MRI, followed finally by a thermal instability (cf Kley & Lin, 1999). Indeed, recent observations of FU Ori indicate that very large mass fluxes are present out to at least \\(r\\sim 0.5\\) AU (Zhu et al., 2007).
Although details are still being debated, most chondrules formed in the first 1 to 3 Myr of the Solar Nebula's evolution (Bizzarro et al., 2004; Russell et al., 2005). Chondrule precursors were flash melted from solidus to liquidus, where high temperatures \(T\sim 1700\) K were experienced by the precursors for a few minutes. The melts then cooled over hours, with the actual cooling time depending on chondrule type. Chondrule collisional histories and isotopic fractionation data, chondrule-matrix complementarity, fine-grained rim accumulation, and petrological and parent body location arguments (Krot et al., 2005) suggest that chondrules formed in the Solar Nebula (Wood, 1963) in strong, localized, repeatable heating events. The shock wave model for chondrule formation can accommodate these observational constraints and reproduce heating and cooling rates required to form chondrule textures (Iida et al., 2001; Desch & Connolly, 2002; Ciesla & Hood, 2002; Miura & Nakamoto, 2006). One plausible source of chondrule-producing shocks is a global spiral wave (Wood, 1996). Harker & Desch (2002) suggest that spiral waves could also explain thermal processing at distances as large as 10 AU, which may be necessary to explain observations of comets (Wooden et al., 2005) and recent _Stardust_ results (e.g., McKeegan et al., 2006). It has been suggested that bursts of GIs may be able to produce the required shock strengths (Boss & Durisen, 2005) and provide a source of turbulence and mixing (Boss, 2004b; Boley et al., 2005). Global spiral shocks are appealing because they fit many of the constraints above. They may be repeatable, depending on the formation mechanism for the spiral waves; they are global, but produce fairly local heating; they can form chondrules in the disk; and they can work in the inner disk as well as the outer disk.
### Fragmentation
Knowing under what conditions protoplanetary disks can fragment is crucial to understanding disk evolution inasmuch as a fragmented disk may produce gravitationally bound clumps. This has become known as the _disk instability_ hypothesis for the formation of gas giant planets (Kuiper, 1951; Cameron, 1978; Boss, 1997, 1998). The strength of GIs is regulated by the cooling rate in disks (Tomley et al., 1991, 1994; Pickett et al., 1998, 2000, 2003), and if the cooling rate is high enough in a low-\\(Q\\) disk, a disk can fragment (Gammie, 2001). Gammie quantified that a disk will fragment when \\(t_{\\rm cool}\\Omega\\lesssim 3\\) for a disk with a \\(\\Gamma=2\\), where \\(\\Gamma\\) is the two-dimensional adiabatic index, such that \\(\\int pdz=P\\sim\\Sigma^{\\Gamma}\\), where \\(p\\) is the gas pressure and \\(z\\) is the vertical direction in the disk. Here, \\(t_{\\rm cool}\\) is the local cooling time and \\(\\Omega\\) is the angular speed of the gas. This criterion was approximately confirmed in 3D disk simulations by Rice et al. (2003) and Mejia et al. (2005). Rice et al. (2005) showed through 3D disk simulations that this fragmentation criterion depends on the 3D adiabatic index \\(\\Gamma_{1}\\) and, for \\(\\Gamma_{1}=\\gamma=5/3\\) or \\(7/5\\), the fragmentation limit occurs when \\(t_{\\rm cool}\\Omega\\lesssim 6\\) or 12, respectively. These results show that a change by a factor of about 1.2 in \\(\\gamma\\) has a factor of two effect on the critical cooling time. In addition, these results indicate that the cooling time must be roughly equal to the dynamical time of the gas for the disk to be unstable against fragmentation when \\(\\gamma=5/3\\).
Do such prodigious cooling rates occur in disks when realistic opacities are used with self-consistent radiation physics? This question is heavily debated in the literature (e.g., Nelson et al. 2000; Boss 2001, 2004a, 2005; Rafikov 2005, 2007; Boley et al. 2006, 2007b; Mayer et al. 2007; Stamatellos & Whitworth 2008). The simulations to date use a wide variety of numerical methods, including very different approximations for radiation physics, and only two groups have published results of radiative transfer tests appropriate for disks (Boley et al. 2007b; Stamatellos & Whitworth 2008).
Nelson et al. (2000) used 2D SPH simulations with radiation physics to study protoplanetary disk evolution. Because their simulations were evolved in 2D, they assumed that the disk at any given moment was in vertical hydrostatic equilibrium. Using a polytropic vertical density structure and Pollack et al. (1994) opacities, they cooled each particle according to an appropriate effective temperature. In their simulations, the cooling rates are too low for fragmentation. In contrast, Boss (2001, 2005) employed radiative diffusion in his 3D grid-based code; fragmentation occurs in his simulated disks. Besides the difference in dimensionality of the simulations, Boss assumed a fixed temperature structure for Rosseland mean optical depths less than 10, as measured along the radial coordinate (cf recent flux-limited simulations by Boss 2008). Boss (2002) found that the fragmentation in his disks is insensitive to the metallicity of the gas and attributed this independence to fast cooling by convection (Boss 2004a). However, it must be noted that Nelson et al. (2000) assumed a vertically polytropic density structure. Because the specific entropy \\(s\\sim\\ln K\\), where \\(p=K\\rho^{\\gamma}\\) is the polytropic equation of state and \\(\\rho\\) is the gas density, the Nelson et al. approximation effectively assumes efficient convection.
Except for extremely massive and extended disks (Stamatellos et al. 2007), recent simulations with radiative physics by Cai et al. (2006), Boley et al. (2006, 2007b), and Stamatellos & Whitworth (2008) find long cooling times and no efficient cooling by convection. Cai et al. also show that the strength of GIs is dependent on the metallicity. Furthermore, Boley & Durisen (2006) suggest that shock bores, which can cause a rapid vertical expansion in the post-shock region of spiral shocks, could be misidentified as "convective" flows. In both contrast and support of these studies, Mayer et al. (2007) use SPH simulations with 3D flux-limited diffusion and find that fragmentation only occurs once the mean molecular weight of the gas \(\mu\gtrsim 2.7\). However, the simulations presented by Mejia (2004), Cai et al. (2006), and Boley et al. (2006) were unintentionally evolved with \(\mu\approx 2.7\) due to an error in the inclusion of He in the opacity tables, and their disks do not fragment. The issue of fragmentation in radiatively cooled disks thus remains unsettled.
### Current Study
For this study, we adopt the hypothesis, which we refer to as the _unified theory_, that bursts of GI activity in dead zones drive the FU Ori phenomenon and produce chondrule-forming shocks. This hypothesis is an amalgamation of ideas presented in Wood (1996, 2005), Gammie (1996), Armitage et al. (2001), and Boss & Durisen (2005). In order to investigate this scenario, we designed a numerical experiment to evolve a massive, highly unstable disk with an initial radial extent between 2 and 10 AU. Commensurately, we investigate the stability of these massive, gravitationally unstable disks against fragmentation to assess the feasibility of the disk instability hypothesis for gas giant formation.
## 2 Expectations
The shock speed \\(u_{1}\\) for a fluid element entering a global, logarithmic spiral wave with pitch angle \\(i\\) is approximately described by
\\[u_{1}\\approx 30\\ {\\rm km\\ s}^{-1}\\left(\\frac{M}{M_{\\odot}}\\right)^{1/2}\\left( \\frac{r}{{\\rm AU}}\\right)^{-1/2}\\left|1-\\left(\\frac{r}{r_{p}}\\right)^{3/2} \\right|\\sin i, \\tag{1}\\]
where \\(r_{p}\\) is the corotation radius of the spiral pattern and where we have assumed that the gas azimuthal motion is Keplerian and \\(v_{r}\\) is negligible. When \\(r_{p}\\gg r\\), the shock speed limits to the Keplerian speed times \\(\\sin i\\); whether spiral waves with large \\(r_{p}\\) produce chondrules is mainly dependent on the pitch angle of the spiral wave. In contrast, when \\(r\\gg r_{p}\\), the speed increases as \\(r\\), and even shallow pitch angles can produce chondrules. For example, consider the MMSN, where the midplane \\(\\rho(r)=1.4\\times 10^{-9}\\left(r/1\\ {\\rm AU}\\right)^{-11/4}{\\rm g\\ cm}^{-3}\\)(Hayashi et al., 1985). Figure 1 shows the \\(u_{1}\\)-\\(\\rho\\) plane with heavy, solid curves indicating shock speeds for \\(r_{p}=1\\), 2.5, 5, and \\(\\infty\\) AU and with \\(i=10\\) and \\(30^{\\circ}\\). The colored regions on the plot highlight where chondrule formation is expected in a MMSN (yellow) and a 10\\(\\times\\)MMSM (orange). The chondrule-forming curves are based on the results of Desch & Connolly (2002), which we summarize by \\(u_{1}\\approx-\\left(11+2\\log\\rho\\left({\\rm g\\ cm}^{-3}\\right)\\right)\\pm 0.5 {\\rm km\\ s}^{-1}\\). The boundaries set by these curves are meant to be illustrative, not definitive. Spiral shocks along shallow pitch angle spiral waves (\\(i\\approx 10^{\\circ}\\)) are mostly if not entirely out of the chondrule-forming range between \\(r=1\\) and 5 AU for shocks inside corotation. However, spirals with corotation in the inner disk can produce chondrule-forming shocks for a wide range of radii, depending on the actual \\(r_{p}\\). If the spiral waves have very open pitch angles (\\(i\\gtrsim 30^{\\circ}\\)), then spiral waves with corotation near \\(r=5\\) AU can produce chondrule-forming shocks near the asteroid belt and at cometary distances. The mass of the disk does not change these general behaviors. A major caveat for this simple-minded approach is that the fluid elements' motions may be poorly described by equation (1) if vertical and radial excursions induced by shock bores (Boley & Durisen, 2006) cannot be neglected.
The possibility of producing chondrules in the asteroid belt and at comet distances during the same burst is attractive. Moreover, we would also like to use these simulations to study other phenomena such as radiation transport, convection, and disk fragmentation in the 2-10 AU region. For these reasons, we tried to design a numerical experiment to investigate the connection between GIs, FU Orionis events, and chondrule formation that is biased toward producing strong shocks with a corotation near 5 AU and with open pitch angles.
## 3 Methodology
### Initial Conditions
If chondrule-producing shocks cannot be created in simulations biased in their favor, then it would present a serious problem for GIs as the source of chondrule processing. The reader should keep in mind that the model, at this stage, is not necessarily meant to be representative of a typical disk, although the massive disk we present may be plausible for early Class I objects (e.g., L1551; Osorio et al., 2003). To create the initial conditions (ICs), consider a disk that is vertically polytropic, is axisymmetric, has a constant \\(\\gamma\\), and is Keplerian (\\(\\kappa=\\Omega\\)). Also assume that self-gravity is negligible. With these assumptions, the vertical disk structure can be calculated by using the following equations:
\[\rho(z)=\rho_{0}\left(1-z^{2}/h^{2}\right)^{1/(\gamma-1)}, \tag{2}\]
\[h^{2}=\frac{2\gamma K}{\Omega^{2}(\gamma-1)}\rho_{0}^{\gamma-1},\ {\rm and} \tag{3}\]
\[\rho_{0}=\left(\Sigma\Omega\left(\frac{\gamma-1}{2\gamma\pi K}\right)^{1/2}\frac{\Gamma[(3\gamma-1)/(2\gamma-2)]}{\Gamma[\gamma/(\gamma-1)]}\right)^{2/(\gamma+1)}, \tag{4}\]
where \(K\) for \(p=K\rho^{\gamma}\) is the polytropic coefficient at any given \(r\), \(G\) is the gravitational constant, \(\Omega\) is the Keplerian angular speed, and \(h\) is the disk scale height. Using equations (2) through (4), one can show that
\[\Sigma(r)=\pi^{-(3\gamma+1)/4}\left(\frac{2}{\gamma-1}\left(\frac{\Gamma[\gamma/(\gamma-1)]}{\Gamma[(3\gamma-1)/(2\gamma-2)]}\right)^{2}\right)^{(1-\gamma)/4}\left(\gamma K(r)\right)^{1/2}\left(GQ(r)\right)^{-(\gamma+1)/2}\Omega(r)^{\gamma}. \tag{5}\]
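A short numerical check of this construction, assuming illustrative cgs values for \(\Sigma\), \(\Omega\), and \(K\): equations (3) and (4) give \(h\) and \(\rho_{0}\), and integrating equation (2) over \(z\) should recover \(\Sigma\):

```python
# Sketch of the initial-condition construction in equations (2)-(4):
# recover rho_0 and h from Sigma, Omega, K, and gamma, then verify that the
# vertical integral of rho(z) returns Sigma.
import numpy as np
from scipy.special import gamma as Gfun

def vertical_structure(sigma, omega, K, gam):
    ratio = Gfun((3 * gam - 1) / (2 * gam - 2)) / Gfun(gam / (gam - 1))
    rho0 = (sigma * omega * np.sqrt((gam - 1) / (2 * gam * np.pi * K)) * ratio) \
           ** (2.0 / (gam + 1))                                     # equation (4)
    h = np.sqrt(2 * gam * K * rho0 ** (gam - 1) / (omega ** 2 * (gam - 1)))  # (3)
    return rho0, h

def rho_of_z(z, rho0, h, gam):
    return rho0 * np.maximum(1 - (z / h) ** 2, 0.0) ** (1.0 / (gam - 1))    # (2)

# Consistency check with arbitrary illustrative cgs values:
sigma, omega, K, gam = 1.0e3, 2.0e-7, 1.0e15, 5.0 / 3.0
rho0, h = vertical_structure(sigma, omega, K, gam)
z = np.linspace(-h, h, 4001)
print(np.trapz(rho_of_z(z, rho0, h, gam), z) / sigma)   # ~1, as required
```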
To select a model, one needs to specify power laws for \(\Sigma\) and \(Q\), \(\Sigma\) and \(K\), or \(K\) and \(Q\). For calculating \(Q\), the midplane sound speed \(c_{s}\) is used. Note that when \(Q=\) constant, there exists a critical value of \(q\), where \(\Sigma\sim r^{-q}\), below which the radial entropy gradient remains positive. Because the specific entropy \(s\sim\ln K\), equation (5) can be solved to show that \(ds/dr\sim(3\gamma-2q)/r\) for constant \(Q\). The critical surface density power law is then \(q_{c}=5/2\) for \(\gamma=5/3\) and \(q_{c}=21/10\) for \(\gamma=7/5\). When \(q>q_{c}\), radial thermal convection may set in for a constant \(Q\) disk. However, the radial entropy gradient is not the only stability criterion for radial thermal convection in disks (Klahr & Bodenheimer 2003).
Equations (2) through (5) are used to generate the ICs for this study. The disk is massive, 0.09 \(M_{\odot}\), with \(\Sigma\sim r^{-2}\), initially between about 2 and 10 AU in radius and with a \(Q\sim 2\) everywhere. Although the surface density power law is steep, it is consistent with the Nice-MMSN (Desch 2007). Because the analytic model ignores self-gravity, assumes a constant \(\gamma\) (see Section 3.2 below), and has sharp disk truncation edges, the model must be relaxed to a new equilibrium structure in the hydro code. Once the ICs are generated, they are loaded into our code and evolved without cooling at very low azimuthal resolution, essentially two-dimensionally. The radial and vertical momenta are damped to allow the disk to relax calmly to an equilibrium configuration, which takes about ten orbits at 10 AU. To create a mass concentration in the disk, mass is then added to the disk linearly in time with a Gaussian profile. The specific internal energy is held constant. The peak of the Gaussian is centered on 5 AU. The radial FWHM of the Gaussian is chosen to be 3 AU, which is roughly the size of the most unstable radial wavelength \(\lambda_{u}\approx 2\pi c_{s}/Q\kappa\approx 2\pi h/Q\approx 0.2\pi r\) (Binney & Tremaine 1987). Mass is added until nonaxisymmetry is visible in midplane density images, which takes approximately 190 yr, i.e., 6 orp of evolution (1 orp is defined as 1 outer rotation period at 10 AU for this model, which is about 33 yr). The total mass added is about 0.08 \(M_{\odot}\), and so if one were to imagine a corresponding accretion rate, it would be about \(4\times 10^{-4}\)\(M_{\odot}\) yr\({}^{-1}\). It is probably unphysical to add so much mass so quickly to the disk without increasing the temperature. However, we remind the reader that the study is a numerical experiment. Figures 2 and 3 illustrate the density distributions of the disk before and after the mass accretion.
The mass buildup increases the total mass of the simulated disk to \\(\\sim 0.17\\) M\\({}_{\\odot}\\) and it decreases \\(Q\\) below unity in a narrow region around 5 AU just before strong GI activation. Because the ring is grown over 6 orp, the instability is not obviously overshot. The mass-weighted average \\(Q_{\\rm av}\\) over the FWHM of the Gaussian centered at 5 AU is approximately unity when the ring becomes unstable. This implies that \\(Q\\) must approach unity for a spatial range of at least the most unstable radial wavelength before GIs will activate in a disk.
### Chymera
The disk is evolved with the CHYMERA code (Boley 2007), which employs the Boley et al. (2007b) radiation physics algorithm. CHYMERA is an Eulerian code that solves the hydrodynamics equations on a fixed, cylindrical grid. The code includes self-gravity, a ray+flux-limited diffusion hybrid scheme, and a variable first adiabatic index \\(\\Gamma_{1}\\). This code has been extensively tested. In response to criticisms of Boss's (2001, 2002, 2004a, 2005) simulations by Boley et al. (2007a), Boss (2007) questions the accuracy of the Indiana University code, which is the predecessor of CHYMERA. Boss raises concerns that we address here.
_Potential Solver:_ CHYMERA uses a direct Poisson solver to calculate the potential due to mass on the grid. To provide the boundary conditions for the direct solver, a spherical harmonics expansion for \\(l=|m|\\lesssim 10\\) is used to calculate the potential on the grid boundary (e.g., Pickett et al. 2003; Boley et al. 2007b). Boss argues that the boundary solver used in CHYMERA may be too low-order to capture clump formation. Tests by Pickett et al. (2003) and Mejia et al. (2005) and the results from the Wengen Test 4 code comparison (WT4; Mayer et al. 2008, in prep.) demonstrate that this is not the case. During the initial analyses of WT4, the potential solver was rigorously tested in an attempt to explain some deviations noticed between CHYMERA and other codes. These deviations were finally attributed to differences in the initial conditions, and new simulations with consistent ICs show very good agreement between codes. To test the potential solver in CHYMERA, a high-order Bessel function expansion boundary solver (Cohl & Tohline 1999) was employed with Legendre half-integer polynomials expanded out to \\(m=30\\), which showed convergence to better than \\(10^{-6}\\) when compared with \\(m=40\\). The spiral structure and clump formation in the simulation with this high-\\(m\\) boundary solver were indistinguishable from the normal lower-order spherical harmonics expansion. The Wengen test is very demanding, with strong nonaxisymmetric structure and a highly flattened disk, and yet the lower-order spherical harmonics expansion for \\(l=|m|\\lesssim 10\\) used in CHYMERA (e.g., Boley et al. 2007b) captures clump formation.
_Resolution & Numerical Heating:_ The simulations presented in this paper are at higher resolution than the base simulations presented for Wengen Test 4. As will be demonstrated here, CHYMERA shows fragmentation when it is expected based on experimentally determined fragmentation criteria (Gammie 2001; Rice et al. 2005). The numerical heating reported by Boley et al. (2006) affected a small region of their inner disk, and is understood to be a resolution effect. The simulations presented here are at high enough resolution, as determined by the Boley et al. (2007b) tests, that this effect should be negligible if present at all.
_Radiative Transfer Boundary Conditions:_ CHYMERA has been shown to relax to analytic solutions for the flux through an atmosphere as well as the temperature profile for high, moderate, and low optical depths. The code allows convection when it should, and does not when it should not (Boley et al. 2007b).
For the treatment of H\\({}_{2}\\), we use a frozen ortho/para ratio of 3:1 in this study for reasons presented in Boley et al. (2007a). The grain size distribution is assumed to follow the ISM power law, \\(dn\\sim a^{-3.5}da\\) (D'Alessio et al. 2001), where the maximum grain size \\(a_{\\rm max}\\) is chosen to be 1 mm to account for grain growth (D'Alessio et al. 2006). To transition smoothly between the low and high optical depth regimes, a weighted combination between Rosseland and Planck mean opacities \\(\\kappa_{w}\\) is used \\(\\kappa_{w}=(\\kappa_{\\rm R}\\tau_{R}+\\kappa_{P}/\\tau_{R})/(\\tau_{R}+1/\\tau_{R})\\), where \\(\\tau_{R}\\) is the vertical midplane Rosseland optical depth. Different methods for interpolating \\(\\kappa_{R}\\) and \\(\\kappa_{P}\\)_along the ray_ are being explored, but they have not been implemented here. The particles are assumed to be well-mixed with the gas. The effect of dust settling is heuristically considered by evolving the disk in several ways: standard opacity and \\(10^{-2}\\), \\(10^{-3}\\), and \\(10^{-4}\\) of the standard opacity, referred to as the \\(10^{-2}\\)\\(\\kappa\\), \\(10^{-3}\\)\\(\\kappa\\), and \\(10^{-4}\\)\\(\\kappa\\) simulations. This choice in opacities varies the vertical midplane Rosseland mean optical depths from about \\(\\tau=10^{4}\\) to unity, and spans the limits where advection of photons becomes important (\\(\\tau v/c>1\\); Krumholz et al. 2007) and where radiative cooling becomes most efficient (\\(\\tau\\sim 1\\)). No external radiation is assumed to be shining onto the disk, except for a 3 K background temperature. Although such rapid dust settling and a naked disk are likely to be unrealistic, the assumptions are made to bias the disk toward strong shocks inasmuch as irradiation and opacity affects GI strength (Cai et al. 2006, 2008). For a base comparison, an adiabatic simulation, i.e., where shock heating is allowed but not cooling, is evolved for about 33 yr.
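Written out explicitly, the opacity blend above interpolates between the Planck mean at low optical depth and the Rosseland mean at high optical depth; the numbers below are arbitrary test values:

```python
# The Rosseland/Planck blending used in the text.  Limits behave as expected:
# kappa_w -> kappa_R for tau_R >> 1 and kappa_w -> kappa_P for tau_R << 1.
def kappa_weighted(kappa_ross, kappa_planck, tau_ross):
    w = tau_ross + 1.0 / tau_ross
    return (kappa_ross * tau_ross + kappa_planck / tau_ross) / w

for tau in (1e-3, 1.0, 1e3):
    print(tau, kappa_weighted(2.0, 5.0, tau))   # -> 5.0, 3.5, 2.0 (approx.)
```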
The simulations are evolved at two resolutions: \((r,\ \phi,\ z)=256\times 512\times 64\) and \(512\times 1024\times 128\) cells above the midplane. The lower resolution simulations (512 sims) have a grid spacing of 0.05 AU per cell in \(r\) and \(z\), and the higher resolution simulations (1024 sims) 0.025 AU. For both resolutions, \(r\Delta\phi\approx\Delta r\) at about \(r=4\) AU. These parameters are summarized in Table 1. For each disk, 1000 fluid tracer elements are randomly distributed in \(r\) between about 3 and 7 AU, in \(\phi\) over \(2\pi\), and in \(z\) roughly within the scale height of the disk.
Due to inefficient cooling in the standard simulations, the disk temperatures reached after 80 and 55 yr for the 512 and 1024 simulations, respectively, become high enough that the radiative timescales are too short to resolve, and the simulations are stopped. As described below, the reduced opacity simulations are able to cool much more efficiently, and the disk temperatures remain manageable with the time-explicit algorithms used in CHYMERA. However, the reduced opacity simulations are stopped after about 110 yr due to the development of a strong, one-arm spiral, which is treated incorrectly with our fixed-star assumption.
### Fluid Element Tracer
The hydrodynamics scheme in CHYMERA is Eulerian, and does not give direct information on the histories of tracer fluid elements. In order to derive detailed and statistical shock information and to capture the complex gas motions in unstable disk simulations, we have combined a tri-Akima spline interpolation algorithm with a Runge-Kutta integrator (e.g., Press, 1986) for tracing a large sample of fluid elements during the simulation. Velocities and thermodynamic quantities, e.g., temperature and density, are interpolated once each time step. Because the hydrodynamics code explicitly solves the equation of motion for the gas, the algorithm only needs time and velocity information to integrate the positions of the fluid elements. The Runge-Kutta integrator was intended to be fourth-order accurate; unfortunately, an implementation error when combining the integration scheme into CHYMERA resulted in a first-order accurate scheme in time. Because tracing fluid element histories is an important point of this paper, we reran a stretch of one of the simulations with a second-order fluid element tracing routine. After about one orbit at \(r\sim 5\) AU, the mean fractional differences, \((x_{1}-x_{2})/x_{2}\), between the first and second-order schemes are \(-3\times 10^{-4}\), \(-2\times 10^{-4}\), and \(-10^{-3}\) for \(r\), \(\phi\), and \(z\), respectively. The sample standard deviations are \(4\times 10^{-3}\), \(10^{-2}\), and \(10^{-1}\) for \(r\), \(\phi\), and \(z\), respectively. The mean differences are small, and show that there are no obvious systematic effects. The sample deviations indicate that the variations between the two schemes are also marginal for the radial and azimuthal directions. There is considerably larger scatter in the vertical direction, where spiral shocks can create complicated vertical flows (Boley & Durisen, 2006). Because the fluid elements are used to gather shock information, the scatter in the vertical positions between the two schemes is not a major concern for this study. The effect is similar to redistributing the fluid elements vertically, keeping the same \(r\) and \(\phi\), every few orbits.
An Akima spline is similar to a natural cubic spline, but typically yields better results for curves with sudden changes (see Akima, 1970), as are expected in shock profiles. Although the spline fits a curve to a one-dimensional set of data points, the interpolation can be extended to data in three dimensions. First, consider a cubic volume with data at the vertices. A value anywhere in the volume can be approximated by seven linear interpolations: four to calculate values between each vertex in a particular direction, two to calculate the values along the projections of the desired point onto the interpolated lines, and a final interpolation through the point of interest. Extending this from a simple tri-linear fit to a tri-Akima spline is relatively straightforward. Instead of using the eight nearest points that enclose a volume, one uses the 125 closest points, with five data points used for every Akima spline fit. The central data point is the closest data value to the point of interest. We use the GNU Scientific Library Akima spline algorithm for performing fits.
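A one-dimensional sketch of the tracer machinery described in this section, using SciPy's Akima interpolator (analogous to the GSL routine the authors use) and a classical fourth-order Runge-Kutta step; the velocity field here is a stand-in, not CHYMERA output:

```python
# 1D sketch of the fluid-element tracer: Akima-interpolate a grid-based
# velocity field, then advance a tracer with a classical RK4 step.
import numpy as np
from scipy.interpolate import Akima1DInterpolator

x_grid = np.linspace(0.0, 10.0, 41)
v_grid = 1.0 + 0.3 * np.sin(x_grid)       # stand-in for a hydro velocity slice
v_of_x = Akima1DInterpolator(x_grid, v_grid)

def rk4_step(x, dt):
    k1 = v_of_x(x)
    k2 = v_of_x(x + 0.5 * dt * k1)
    k3 = v_of_x(x + 0.5 * dt * k2)
    k4 = v_of_x(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

x, dt = 2.0, 0.05
for _ in range(100):        # in practice, velocities are re-sampled from the
    x = rk4_step(x, dt)     # grid once per hydro time step, as described above
print(x)
```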
## 4 Results
### Structure
As these disks become gravitationally unstable, low-order spiral waves develop. These waves drive a sudden increase in mass flux throughout the disk and drive strong spiral shocks. Following Mejia et al. (2005), we refer to this phase of disk evolution as a burst of GI activity. Surface density plots for the 512 simulations are shown in Figures 4 and 5 at \(t\approx 33\) and 77 yr, respectively.
\\(\\int\\Sigma\\sin(m\\phi)d\\phi\\) (cf. Boley et al. 2006 who use the volume density). In addition, define \\(A_{+}=\\sum_{m=2}^{32}A_{m}\\). As shown in Table 1, \\(\\langle A_{+}\\rangle\\) tends to increase with both resolution and decreased opacity. This is also seen in the Fourier component spectrum, shown in Figure 8, where squares indicate the 1024 simulations and crosses lines the 512 simulations. The \\(10^{-3}\\)\\(\\kappa\\) simulation is not shown for readability, but if falls between the \\(10^{-2}\\) and \\(10^{-4}\\)\\(\\kappa\\) simulations and has the least amount of divergence between the 512 and 1024 simulations at large \\(m\\). For all runs, the Fourier \\(m=1\\) through 6 dominate, and the power in the \\(A_{m}\\) spectra drops off steeply around \\(m\\sim 10\\). The 1024 simulations have a shallower profile at large \\(m\\), so resolution effects probably play a role for \\(m\\gtrsim 10\\), even at resolutions of LMAX=512. The convergence of the \\(A_{m}\\) profile for large \\(m\\) is a topic for future investigation (Steinman-Cameron et al., in preparation).
### Energy Budget
The energetics of the disk simulations are shown in Figures 9 and 10, where cumulative energy loss by radiation (blue) and cumulative energy dissipation by shocks (red) are plotted. These quantities are integrated during the evolution, so they represent the actual heating and cooling in these disks. For both resolutions of the standard case, the shock heating clearly dominates over the cooling; the disk is heating up over the entire evolution. Energy is transported inefficiently for the highly optically thick disk, and the disk evolves like an adiabatic simulation. When the opacity is reduced by a factor of 100, the cooling becomes much more important than it is in the standard simulation, with radiative energy losses becoming comparable to heating by shocks. In addition, the shock heating rate becomes somewhat larger during the burst. The 1024 standard and \\(10^{-2}\\)\\(\\kappa\\) show additional heating. This heating is quite strong in the 1024 standard simulation, and it must be stopped even earlier than the 512 standard due to high temperatures, as discussed above. The 1024 \\(10^{-2}\\)\\(\\kappa\\) is very similar to its 512 counterpart. There is additional heating and cooling during the burst, but the heating and cooling curves for both resolutions roughly track each other. The adiabatic curve, which is only shown to 33 yr (1 orp), is difficult to see because it tracks the standard cases extremely closely.
When the opacity is reduced by a factor of \\(10^{3}\\), the radiative cooling is even more efficient and surpasses shock heating. The \\(10^{4}\\) opacity reduction shows the strongest shock heating and the fastest disk cooling. The 1024 \\(10^{-4}\\)\\(\\kappa\\) simulation curves track the 512 curves, with the burst contributing the most to the offset of the curves. Unlike the higher opacity simulations, for \\(10^{-4}\\)\\(\\kappa\\), there is less shock heating and less total cooling at higher resolution. The 1024 \\(10^{-3}\\)\\(\\kappa\\) simulation roughly tracks the 512 counterpart, with the strongest deviation near 66 yr (2 orp). The adiabatic curve is shown again as a reference. As one would expect from analytic arguments, the opacity has a profound effect on disk cooling and, consequently, on spiral shocks.
The effect of opacity on energy losses is also demonstrated in Figures 11 and 12. In these figures, brightness temperature maps for the 1024 simulations are shown for similar times as in Figure 6, where \\(T_{b}=(\\pi I_{+}/\\sigma)^{1/4}\\) and \\(I_{+}\\) is the vertical outward intensity as calculated by the ray solution in CHYMERA. In addition, we define a cooling temperature \\(T_{c}=(\\int_{0}^{\\infty}\\left[-\\nabla\\cdot{\\bf F}\\right]dz/\\sigma)^{1/4}\\), which is the temperature that corresponds to a given column's effective flux if all of the energy were to leave the column vertically. If the column is being heated, \\(T_{c}\\) is set to zero and appears as black in the images.
As the opacity is lowered, the spiral structure becomes more clearly outlined, and the disk becomes brighter because the photons can leave from hotter regions. The brightness maps also demonstrate that sustained fast cooling by convection is absent. If hot gas near an optically thick midplane is quickly transported to altitudes where \\(\\tau\\sim 1\\), convection can, in principle, enhance disk cooling. However, the efficacy of convective cooling is controlled by the rate at which that energy can be radiated from the photosphere of the disk. If localized convective flows were responsible for fast cooling in the optically thick disks, one would expect to find strong, localized \\(T_{b}\\) and \\(T_{c}\\) enhancements. For example, the standard and \\(10^{-2}\\)\\(\\kappa\\) simulations would need to have regions as bright as the \\(10^{-4}\\)\\(\\kappa\\) simulation. There are thin regions along the spiral shocks in the \\(10^{-2}\\)\\(\\kappa\\) simulation that show large \\(T_{c}\\), but they do not correspond to enhancements in \\(T_{b}\\) as well. This suggests that the energy from the spiral shocks is not transported efficiently to altitudes where it can be quickly lost from the disk. This assessment is supported by the net radiative heating in regions surrounding the strong shocks, e.g., the black outlines (areas experiencing net radiative heating) around the thin, hot spiral waves. The corresponding energy flux that is needed for convection to be sustained and to cool the disk does not occur (see also Boley et al. 2006, 2007b, Nelson 2006, & Rafikov 2007).
The importance of the opacity for disk cooling is also shown in Figures 13 through 15. On these plots, three quantities are shown: the Toomre \\(Q\\), the mass-weighted \\(\\Gamma_{1}\\), and the \\(\\zeta=t_{\\rm cool}\\Omega/f(\\Gamma_{1})\\) profiles. For the \\(t_{\\rm cool}\\) curves, the cooling times are calculated by dividing the azimuthally and vertically integrated internal energy by the azimuthally and vertically integrated radiative cooling rate for each annulus in the disk. The \\(f(\\Gamma_{1})\\) is the critical value of \\(t_{\\rm cool}\\Omega\\), below which fragmentation is expected (Gammie 2001), for the corresponding \\(\\Gamma_{1}\\). Rice et al. (2005) demonstrated that \\(f(7/5)\\approx 12\\) and \\(f(5/3)\\approx 6\\). Based on these values, we assume \\(f(\\Gamma_{1})\\approx-23\\Gamma_{1}+44\\) for the stability analysis presented here. One should keep in mind that \\(f(\\Gamma_{1})\\) is not a strict threshold and that the relation is approximate, especially for simulations that permit the cooling time to evolve with the disk (Johnson & Gammie 2003; Clarke et al. 2008). Regardless, it serves as a general indicator for disk fragmentation.
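As a minimal sketch, the indicator reduces to two one-line functions, assuming only the linear fit for \\(f(\\Gamma_{1})\\) quoted above:

```c
/* Critical cooling parameter f(Gamma1) ~ -23*Gamma1 + 44, anchored to
   f(7/5) ~ 12 and f(5/3) ~ 6 (Rice et al. 2005). */
double f_gamma(double gamma1) { return -23.0 * gamma1 + 44.0; }

/* zeta = t_cool * Omega / f(Gamma1); values well below unity in a
   low-Q region would suggest fragmentation. */
double zeta(double t_cool, double omega, double gamma1)
{
    return t_cool * omega / f_gamma(gamma1);
}
```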
As Figures 13 through 15 demonstrate, the disk is approaching a state of constant \\(Q\\) for a wide range of radii, as expected in an asymptotic phase (Mejia et al. 2005). The mass-weighted average \\(Q\\)s between 3 and 6 AU are given in Table 1 and tend to decrease, as expected, when the opacity decreases. They do not seem to be greatly affected by resolution. The variation in the average \\(Q\\) with opacity is roughly consistent with the \\(A_{+}\\) measurements. One should keep in mind that the \\(Q\\) profiles represent snapshots of the disk, while the \\(A_{+}\\) measurements are time averages. The standard simulation is not shown in the stability plots, but the analysis of the 512 standard disk shows that it is highly stable against fragmentation.
Because the cooling times fluctuate rapidly, we average the integrated cooling rates and the internal energies for snapshots near, roughly, 65 and 70/80 yr. The \\(\\zeta\\) profiles for the appropriate \\(\\Gamma_{1}\\) only drop substantially below unity over extended regions where \\(Q\\) is high; these disks are stable against fragmentation. As the opacity is lowered, the cooling times decrease as well, with the \\(10^{-4}\\)\\(\\kappa\\) simulation close to the fragmentation limit. It should also be noted that the only disk that behaves like a constant \\(\\Gamma_{1}\\) disk is the standard opacity simulation (not shown), and for all other simulations, a constant \\(\\Gamma_{1}\\) is inappropriate (see Boley et al. 2007a).
Mass in these disks is redistributed efficiently during the activation of GIs. Time-averaged mass fluxes are calculated by differencing the mass inside a cylinder at two different times. This represents the average mass flux as calculated by the second-order hydrodynamics scheme, and it is independent of the fluid element tracer. The inward and outward accretion rates vary, and in all simulations can be well above \\(10^{-4}\\)\\(M_{\\odot}\\) yr\\({}^{-1}\\) (Fig. 16).
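A sketch of this differencing follows; the arrays of enclosed masses are hypothetical inputs taken from two snapshots.

```c
/* Sketch: time-averaged mass-flux profile from two snapshots at
   t1 < t2.  menc1[i] and menc2[i] hold the mass enclosed by the
   cylinder of radius r_i at each time; with the sign convention of
   Fig. 16, positive values correspond to net inflow. */
void mdot_profile(const double *menc1, const double *menc2,
                  int n, double t1, double t2, double *mdot)
{
    for (int i = 0; i < n; i++)
        mdot[i] = (menc2[i] - menc1[i]) / (t2 - t1);
}
```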
### Shocks
Due to our resolution, shocks are spread over scales of about \\(10^{12}\\) cm or larger, whereas 1D calculations of chondrule-producing shocks give structures \\(\\sim 10^{10}\\) cm (Desch & Connolly 2002). To overcome this difficulty, we measure the pressure difference between the pre- and post-shock regions to calculate the Mach number \\(\\cal M\\). Using the pressure and temperature histories for each fluid element, a possible shock is identified whenever the \\(dT/dt\\) changes from negative to positive then back to negative. The pre-shock flow is taken to be the first sign switch, and the post-shock flow is the second sign switch. Let \\(\\eta=\\left(p_{2}-p_{1}\\right)/p_{1}\\) be the fractional pressure change, where \\(p_{1}\\) and \\(p_{2}\\) are the pre- and post-shock pressures, respectively, and where \\(\\gamma\\) is the average of the pre- and post-shock first adiabatic index. If \\({\\cal M}^{2}=\\left(\\gamma+1\\right)\\eta/\\left(2\\gamma\\right)+1\\geq 2\\), the event is counted as a shock. Once \\({\\cal M}\\) is determined, the pre-shock velocity \\(u_{1}={\\cal M}c_{s1}\\) is calculated, where \\(c_{s1}\\) is the pre-shock adiabatic sound speed of the gas. The shock strengths are also derived using the ratio of the post-shock to pre-shock temperatures: \\(T_{2}/T_{1}=\\left(2\\gamma{\\cal M}^{2}-\\left(\\gamma-1\\right)\\right)\\left(\\left( \\gamma-1\\right){\\cal M}^{2}+2\\right)/\\left(\\left(\\gamma+1\\right)^{2}{\\cal M}^ {2}\\right)\\). Both approaches should give the same answer whenever the shocks are cleanly defined and adiabatic. However, if radiative cooling becomes very efficient, then assuming adiabatic shock conditions may be incorrect. Furthermore, secondary waves or shocks (Boley & Durisen 2006) may make identifying pre- and post-shock regions very difficult. Neither method is clearly favored, and both will only provide crude estimates.
We estimate a shock to be chondrule-forming when \\(u_{1}\\) lies within a 1 km s\\({}^{-1}\\)-wide band between 5 km s\\({}^{-1}\\) and 11 km s\\({}^{-1}\\) and when the pre-shock density is between \\(\\log\\rho\\) (g cm\\({}^{-3}\\)) = -8.5 and -10.5. This estimate is based on the Desch & Connolly (2002) 1D calculations, similar to the relation used to locate chondrule-producing shocks in Figure 1. Chondrule-forming shocks should be thought of as having the potential to form chondrules; they may not yield chondritic material due to incompatible dust-to-gas ratios, cooling rates that are too high or too low, and fractionation.
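The bookkeeping of the last two paragraphs reduces to a few closed-form expressions, sketched below; note that the density-dependent 1 km s\\({}^{-1}\\) band is simplified here to the full \\(u_{1}\\) and density box, so the predicate is more generous than the strip in Figure 20.

```c
#include <math.h>

/* Mach number squared from the fractional pressure jump
   eta = (p2 - p1)/p1 and the mean adiabatic index gamma; events with
   M^2 >= 2 are counted as shocks, and u1 = M * cs1. */
double mach2_from_eta(double eta, double gamma)
{
    return (gamma + 1.0) * eta / (2.0 * gamma) + 1.0;
}

/* The quoted adiabatic temperature ratio T2/T1 for a given M^2;
   comparing this with the measured ratio is the independent
   strength estimate discussed in the text. */
double temp_ratio(double mach2, double gamma)
{
    return (2.0 * gamma * mach2 - (gamma - 1.0))
         * ((gamma - 1.0) * mach2 + 2.0)
         / ((gamma + 1.0) * (gamma + 1.0) * mach2);
}

/* Chondrule-forming candidate: u1 in km/s, pre-shock density rho1 in
   g/cm^3.  A box criterion is used here for simplicity. */
int chondrule_candidate(double u1_kms, double rho1)
{
    double lg = log10(rho1);
    return u1_kms >= 5.0 && u1_kms <= 11.0 && lg >= -10.5 && lg <= -8.5;
}
```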
Table 2 indicates shock information as extracted from the pressure histories of the fluid elements. Estimating shock strengths with the temperature ratio yields comparable numbers except for the chondrule-forming shocks (see below). Column two shows the approximate total number of detected shocks (TS) with \\({\\cal M}^{2}\\geq 2\\), and column three displays the average number of shocks per fluid element. Columns four (TS Mass) and five (CS Mass) show the total dust mass (see below) that goes through a shock with \\({\\cal M}^{2}\\geq 2\\) and the total dust mass that encounters a chondrule-forming shock, respectively. To estimate how much dust is processed in shocks, we assign each fluid element a mass by calculating the total disk mass within some \\(\\Delta r\\) and distributing that material evenly among all fluid elements in that \\(\\Delta r\\). A gas-to-solids ratio of 100 is assumed everywhere for the dust processing calculations. Although this is inconsistent with the growth and settling of solids assumed in six of the simulations, we do not worry about this detail inasmuch as the midplane will process more solids per shock and the high-altitude shocks will process less. Finally, column six shows the total number (Total FE) of fluid elements that remain after the 70 yr time period, i.e., the elements that were not accreted onto the star or fluxed into background density regions.
We check whether the total number of detected shocks is reasonable by evaluating the expected number of shocks in a Keplerian disk with \\(m\\)-arm spiral waves. If the corotation radius for the pattern is \\(r_{p}\\), the number of shocks during some time period \\(\\Delta t\\) for a fluid element orbiting at \\(r\\) is
\\[N_{S}=\\frac{m(GM_{\\rm star})^{1/2}}{2\\pi r^{3/2}}\\Big{|}1-\\left(r/r_{p}\\right)^{3/2}\\Big{|}\\Delta t. \\tag{6}\\]

Evaluating equation (6) for 1000 fluid elements evenly distributed in annuli between 2 and 8 AU, with \\(r_{p}=4\\) AU, with \\(m=3\\), and with \\(\\Delta t=60\\) yr yields approximately 8000 shocks. So the number of detected shocks in these simulations is reasonable.
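Equation (6) is straightforward to check numerically. In the sketch below a 0.5 \\(M_{\\odot}\\) star is assumed, since the stellar mass is not restated in this section; the printed total is therefore illustrative.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* G*M_star in AU^3/yr^2, assuming a 0.5 Msun star
       (G*Msun = 4*pi^2 in these units). */
    const double GM = 0.5 * 4.0 * M_PI * M_PI;
    const double rp = 4.0, m = 3.0, dt = 60.0;
    const int nfe = 1000;
    double total = 0.0;

    for (int i = 0; i < nfe; i++) {
        double r = 2.0 + 6.0 * (i + 0.5) / nfe;  /* uniform in [2,8] AU */
        total += m * sqrt(GM) / (2.0 * M_PI * pow(r, 1.5))
               * fabs(1.0 - pow(r / rp, 1.5)) * dt;
    }
    /* prints ~7.9e3 under these assumptions, comparable to the
       ~8000 shocks quoted above */
    printf("expected shocks: %.0f\n", total);
    return 0;
}
```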
The total number of shocks per fluid element is roughly consistent for each simulation except for the \\(10^{-4}\\)\\(\\kappa\\) runs. The more flocculent spiral structure in these simulations not only produces a larger number of shocks in the disk, but also creates candidate chondrule-producing shocks for four fluid elements; the 1024 \\(10^{-3}\\)\\(\\kappa\\) simulation has one fluid element that experiences a possible chondrule-forming shock. Several thousand \\(M_{\\oplus}\\) of dust are pushed through shocks in each simulation, implying that almost all dust experiences a shock with \\({\\cal M}^{2}\\geq 2\\). In addition, a few \\(M_{\\oplus}\\) of dust experience chondrule-forming shocks in the \\(10^{-4}\\)\\(\\kappa\\) runs. We remind the reader that these are only crude, order of magnitude estimates. When the temperature ratio is used, shocks with chondrule-forming strengths are _undetected_, but the total dust mass pushed through shocks remains the same to about one percent.
## 5 Discussion and Conclusions
In this section, we review the implications of the results of this study for disk fragmentation, the effects of opacity on disk cooling, the FU Ori phenomenon, and chondrule formation. We remind the reader that the simulations presented here are meant to be a numerical experiment that explores the possible connection between chondrules, FU Ori outbursts, and bursts of GI-activity. Moreover, this experiment provides a systematic study of the effects of opacity on disk cooling, which complements the Cai et al. (2006) metallicity study and provides a test bed for disk fragmentation criteria.
### Fragmentation
After the onset of the burst, none of the disks fragments, and only the 512 \\(10^{-4}\\)\\(\\kappa\\) simulation shows dense knot formation during the peak of the burst (10-30 yr). These knots do not break from the spiral wave even in the 1024 simulation, and so clump formation does not seem to be missed due to poor resolution. One should also keep in mind that for these simulations, the disk is first relaxed to equilibrium with standard opacity and then the opacity is suddenly dropped by a factor of \\(10^{4}\\) at \\(t=0\\). In effect, the dust settling is treated as instantaneous. Knot formation may not occur in a disk with more realistic settling timescales. As discussed below, this is also a caveat for the chondrule formation results. On the other hand, it does suggest that disk fragmentation might occur inside 10 AU under even more extreme, but perhaps unphysical, conditions than we have modeled.
The stability of these disks against fragmentation is supported by Figures 13 through 15. The cooling rates for the standard and \\(10^{-2}\\)\\(\\kappa\\) simulations are too low to cause disk fragmentation. The \\(10^{-3}\\) and \\(10^{-4}\\)\\(\\kappa\\) simulations do have areas where \\(\\zeta=t_{\\rm cool}\\Omega/f(\\Gamma_{1})\\lesssim 1\\), but the disks approach stability against GIs in those regions (\\(Q\\gtrsim 1.7\\); Durisen et al. 2007). As noted in §4.2, \\(\\zeta\\lesssim 1\\) is not a strict instability criterion, but it does serve as an estimate for disk stability against fragmentation. For no simulation is \\(\\zeta\\) well below unity where \\(Q\\) is also \\(\\lesssim 1.7\\).
Lowering the opacity increases the cooling rates. The \\(10^{-4}\\)\\(\\kappa\\) simulations exhibit the most rapid cooling because the midplane optical depths are near unity, which results in the most efficient radiative cooling possible. _Changes in dust opacity have a profound effect on disk cooling_. Although not modeled, if the opacity were to continue to drop such that the midplane optical depth becomes well below unity, cooling would once again become inefficient. In addition, supercooling of the high optical depth disks (standard and \\(10^{-2}\\)\\(\\kappa\\)) by convection does not occur. Based on our results, we believe that hydraulic jumps as a result of shock bores (Boley & Durisen 2006) rather than convection are a better explanation for the upwellings around spiral arms reported by Boss (2004a) and Mayer et al. (2007). These findings are consistent with analytic arguments by Rafikov (2005, 2007), with numerical studies of disk fragmentation criteria by Gammie (2001), Johnson & Gammie (2003), and Rice et al. (2005), and with global disk simulations where tests of the radiation algorithm and/or careful monitoring of radiative losses were performed (Nelson et al. 2000; Boley et al. 2006, 2007b; Stamatellos & Whitworth 2008).
The radiative algorithm used in CHYMERA has passed a series of radiative transfer tests that are relevant for disk studies, including permitting convection when expected. Our conclusions regarding fragmentation are based on multiple analyses: surface density renderings (Figures 4-7), an energy budget analysis (Figures 9 and 10), cooling temperature and brightness maps (Figures 11 and 12), and a cooling time stability analysis (Figures 13-15). It is also important to point out that these results do not contradict Stamatellos et al. (2007) or Krumholz et al. (2007), who find fragmentation in massive, extended disks (\\(>100\\) AU).
It is also pertinent to demonstrate that the CHYMERA code can detect fragmentation when cooling rates are high and \\(Q\\) is low. Figure 17 shows a snapshot for a simulation similar to the 512 \\(10^{-4}\\)\\(\\kappa\\) simulation, but with the divergence of the fluxes artificially increased by a factor of two. In the normal simulation, kinks in the spiral waves form during the onset of the burst. As discussed above, the disk is very close to the fragmentation limit, but the knots do not break from the spiral wave. Figure 15 suggests, although for a later time, that increasing the cooling rates by a factor of two would drop \\(\\zeta\\) well below unity in low-\\(Q\\) regions.
Figure 17 confirms that the disk fragments with such enhanced cooling. However, we remind the reader that the \\(10^{-4}\\)\\(\\kappa\\) simulation has the optimal optical depth for cooling (\\(\\tau\\sim 1\\)). We cannot imagine any physical process that could cause a factor of two enhancement in cooling.
Three clumps form, one for each spiral wave, between 4 and 5 AU. The location of clump formation is consistent with the prediction by Durisen et al. (2008) that a spiral wave is most susceptible to fragmentation near corotation. One of the clumps survives for several orbits and eventually passes through the inner disk boundary. Resolution is always a concern for simulations. Because these results are consistent with analytic fragmentation limits and numerical fragmentation experiments, we conclude that CHYMERA can detect fragmentation at the resolutions employed for this study.
As discussed above, when the opacity is abruptly decreased by a factor of \\(10^{4}\\), the disk does approach fragmentation-like behavior. A similar effect is reported by Mayer et al. (2007), who find that their disk only fragments when the mean molecular weight is suddenly switched from \\(\\mu=2.4\\) to \\(\\mu=2.7\\). However, the simulation presented by Boley et al. (2006) was accidentally run with \\(\\mu=2.7\\), and Cai et al. (2006, 2008) purposefully ran their simulations at the high \\(\\mu\\) for comparison with the Boley et al. results. Neither of those studies reported disk fragmentation. This may indicate that when a disk approaches fragmentation shortly after a sudden switch in a numerical parameter, e.g., opacity here and \\(\\mu\\) in Mayer et al., the fragmentation may be numerically rather than physically driven, especially because sudden changes in cooling rates make disks more susceptible to fragmentation (Clarke et al. 2008). Regardless, the results do indicate that disk fragmentation by GIs may be possible under very extreme conditions. Whether such conditions are physically possible or realistic remains to be shown. Based on our results here and in earlier papers, disk fragmentation for \\(r\\lesssim 10\\)s of AU appears to be a yet-unproven exception rather than the norm.
### FU Ori Outbursts
In these simulations, the strong bursts of GI activity provide high mass fluxes (\\(\\dot{M}\\gtrsim 10^{-4}\\)\\(M_{\\odot}\\) yr\\({}^{-1}\\), see Fig. 16) throughout each disk. Even though corotation is at \\(r\\sim 4\\) AU for the major spiral arms, the 2 AU region of the disk is strongly heated. Fluid elements approach peak temperatures of \\(T\\sim 1000\\) K in all simulations (Fig. 18). It is not difficult to speculate that if a larger extent of the disk were modeled, the temperatures due to shocks would be large enough to ionize alkalis thermally and possibly sublimate dust (1400-1700 K). If such a condition is met, then an MRI could activate and rapidly carry mass into the innermost regions of the disk. From these simulations it appears to be plausible, but by no means proven, that a burst of GI activity as far out as 4 AU can drive mass into the inner regions of the disk and create strong temperature fluctuations, which may then be responsible for a thermal instability. Although we are simulating very massive disks, we note that such systems may exist during the Class I YSO phase. Additional studies need to be conducted, preferably with a self-consistent buildup of a dead zone, to address the efficiency of this mechanism in low mass disks.
There are at least three observable signatures for this mechanism. First, if a GI-bursting mass concentration at a few AU ultimately results in an FU Ori phenomenon, then one would expect to see an infrared precursor from the GI burst, with a rise time of approximately tens of years. Second, one would also expect a large abundance of molecular species, which would normally be frozen on dust grains, to be present in the gas phase during the infrared burst due to shock heating (Hartquist 2007, private communication). Third, approximately the first ten AU of the disk should have large mass flows if the burst takes place near \\(r\\sim 4\\) to 5 AU. We speculate based on these results that if the burst were to take place at 1 AU, then high outward mass fluxes should be observable out to a few AU.
### Dust Processing
Each simulation shows that a large fraction of material goes through shocks with \\({\\cal M}^{2}\\geq 2\\). Although these shocks are weak, their abundant numbers may result in the processing of dust to some degree everywhere in the 2-10 AU region. In fact, such processing may be necessary for prepping chondritic precursors for strong-shock survival (Ciesla 2007, private communication).
Based on the arguments presented in §2, the intent was to produce spirals with large pitch angles by constructing a disk biased toward a strong, sudden GI-activation near 5 AU. Even with this bias, the pitch angles of the spirals remain small, with \\(i\\approx 10^{\\circ}\\). Why are the pitch angles so small? According to the WKB approximation, \\(\\cot i=\\mid k_{r}r/m\\mid\\) (Binney & Tremaine 1987), where \\(k_{r}\\) is the radial wavenumber for some \\(m\\)-arm spiral. The most unstable wavelength for axisymmetric waves (Binney & Tremaine 1987) is roughly \\(\\lambda_{u}\\approx 2\\pi c_{s}/Q\\kappa\\approx 2\\pi h/Q\\) for disk scale height \\(h\\). For a disk unstable to nonaxisymmetric modes, \\(\\lambda_{u}\\) corresponds to some \\(m\\)-arm spiral (e.g., Durisen et al. 2008). By relating \\(k_{r}=2\\pi\\beta m/\\lambda_{u}\\),
\\[\\cot i\\approx\\beta Qr/h \\tag{7}\\]
in the linear WKB limit (\\(\\mid k_{r}r/m\\mid\\gg 1\\)), where \\(\\beta\\) is a factor of order unity that relates \\(\\lambda_{u}\\) to the \\(m\\)-arm spiral. Because \\(\\beta Qr/h\\sim 10\\) in gravitationally unstable disks, the linear WKB analysis may be marginally applicable. Equation (7) predicts that linear spiral waves in these disks should have pitch angles \\(i\\approx 6\\) to \\(11^{\\circ}\\) for \\(\\beta\\) between 1 and 1/2, respectively. It appears that this estimate for \\(i\\) extends accurately to the nonlinear regime, a result we did not expect.
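Evaluating equation (7) for the representative value \\(Qr/h\\sim 10\\) reproduces the quoted range directly; a minimal sketch:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double Qr_over_h = 10.0;  /* representative of these disks */
    for (int k = 0; k < 2; k++) {
        double beta = (k == 0) ? 1.0 : 0.5;
        double i_deg = atan(1.0 / (beta * Qr_over_h)) * 180.0 / M_PI;
        printf("beta = %.1f -> i = %.1f deg\n", beta, i_deg);
    }
    /* prints ~5.7 deg and ~11.3 deg, bracketing the quoted
       i ~ 6-11 deg for beta between 1 and 1/2 */
    return 0;
}
```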
The GI spirals are efficient at heating the disk and transporting angular momentum, but _not_ at producing chondrules. Only for the \\(10^{-4}\\)\\(\\kappa\\) simulation are multiple candidate chondrule-producing shocks detected, and there is only one chondrule-producing shock in the 1024 \\(10^{-3}\\)\\(\\kappa\\) simulation. These low opacity disks have relatively flocculent spiral morphologies, including kinks in spiral waves. The \\(10^{-4}\\)\\(\\kappa\\) simulations are on the verge of fragmentation. When interpreting these results, it is important to remember that the opacities were lowered abruptly and in a quite unphysical way. In addition, these detections are based on the fractional pressure change \\(\\eta\\). If the temperature change is used, no chondrule-producing shocks are detected. This discrepancy may be due to efficient radiative cooling, which may make the assumption of adiabatic shock conditions incorrect, and/or to confusion with additional waves induced by shock bores. For the rest of this section, we assume that the pressure difference is the reliable shock identifier in order to discuss the implications of detecting chondrule-forming shocks in these disks. A thermal and spatial history for a fluid element that experiences a possible chondrule-forming shock is shown in Figure 19.
To estimate the occurrence of a chondrule-forming shock, we employ a generous \\(u_{1}\\)-\\(\\rho\\) criterion (Fig. 20). We do not take into account the optical depth criterion of Miura & Nakamoto (2006) on grounds that the large scale over which these shocks take place can allow for chondrules to equilibrate with their surroundings (Cuzzi & Alexander, 2006). However, one should be aware that the optical depth criterion of Miura & Nakamoto will likely exclude all chondrule-producing shocks inside 4 AU in the \\(u_{1}\\)-\\(\\rho\\) plane for these simulations. All candidate chondrule-forming shocks occur between \\(r\\sim 3\\) and 5 AU and at altitudes that are roughly less than a third of the gas scale height. Because a large degree of settling is assumed, these shocks are located in regions that may be consistent with the dusty environments in which chondrules formed (Wood, 1963; Krot et al., 2005).
We estimate that \\(\\sim 1\\)\\(M_{\\oplus}\\) of dust is processed through chondrule-forming shocks. Because these shocks are limited to the onset of the GI burst, a few \\(\\times 10\\)\\(M_{\\oplus}\\) of dust would be processed in the \\(10^{-4}\\)\\(\\kappa\\) disks if they went through about ten outbursts. If more outbursts take place than those that lead to an FU Orionis event, then more chondritic material could be produced. In order to produce these shocks, the disk was pushed toward fragmentation by suddenly dropping the opacity by a factor of \\(10^{4}\\). We conclude that for bursts of GIs near 4 or 5 AU to produce chondrules, the disk must be close to fragmentation.
### The Unified Theory
The hypothesis behind the work summarized here is that bursts of GI activity, dead zones, the FU Ori phenomenon, and chondrule formation are linked. The general picture is that mass builds up in a dead zone due to layered accretion and that GIs erupt once \\(Q\\) becomes low. This activation of the instability causes a sudden rise in the mass accretion rate that heats up the disk inside 1 AU to temperatures that can sustain thermal ionization and an MRI. The MRI shortly thereafter activates the thermal instability (Bell & Lin., 1994; Armitage et al., 2001; Zhu et al., 2007). In addition, the strong shocks process dust and form chondritic material in the asteroid belt and at comet distances.
Given the results of our simulations, we think the Boss & Durisen conjecture that dead zones bursting at \\(r\\gtrsim\\) 4-5 AU can produce chondrules is _unlikely_, unless the disks are on the verge of fragmentation. Because the conditions for fragmentation inside \\(r\\sim 10\\)s of AU are very difficult to achieve, we conclude that this is not viable as a general chondrule-formation mechanism. As a result, we return to Figure 1. Based on our simple, analytic argument in equation (1), chondrule formation can take place in the asteroid belt and at cometary distances for GI-bursting dead zones between about 1 and 3 AU, even with \\(i\\approx 10^{\\circ}\\). As suggested by Wood (2005), chondritic parent bodies may be representative of temporal as well as spatial formation differences. Bursts that occur roughly between 1 and 3 AU seem to be able to accommodate this scenario, and could drive the FU Ori phenomenon (Armitage et al., 2001). As the disk evolves, the location of the dead zone can vary, with multiple bursts occurring at a few AU. Accretion rates between \\(10^{-6}\\) and \\(10^{-7}\\)\\(M_{\\odot}\\) yr\\({}^{-1}\\) could build up a 0.01 \\(M_{\\odot}\\) dead zone every \\(10^{4}\\) to \\(10^{5}\\) years, respectively. Some of these bursts may drive the FU Ori phenomenon and produce chondrules, but others may only be responsible for chondrule-formation events. In this scenario, the disk need not fragment, and there should be between about 10 and 100 separate chondrule-formation events. Future work is required to ascertain the plausibility of this modified version of the unified theory.
We would like to thank F. Ciesla, S. Desch, and L. Hartmann for fruitful discussions. We would also like to thank the anonymous referee for comments and suggestions that helped improve this manuscript. A.C.B.'s contribution was supported by a NASA GSRP fellowship, and R.H.D.'s contribution was supported by NASA grant NNG05GN11G. This work was supported in part by the IU Astronomy Department IT facilities, by systems made available by the NASA Advanced Supercomputing Division at NASA Ames, by systems obtained by Indiana University through Shared University Research grants through IBM, Inc., to Indiana University, and by dedicated workstations provided to the Astronomy Department by IU's University Information Technology Services.
## References
* Akima, H. 1970, JACM, 17, 4
* Armitage, P. J., Livio, M., & Pringle, J. E. 2001, MNRAS, 324, 705
* Balbus, S. A., & Hawley, J. F. 1991, ApJ, 376, 214
* Bell, K. R., & Lin, D. N. C. 1994, ApJ, 427, 987
* Binney, J., & Tremaine, S. 1987, Galactic Dynamics (Princeton: Princeton Univ. Press)
* Bizzarro, M., Baker, J. A., & Haack, H. 2004, Nature, 431, 275
* Boley, A. C. 2007, Ph.D. Thesis, Indiana University
* Boley, A. C., & Durisen, R. H. 2006, ApJ, 641, 534
* Boley, A. C., Durisen, R. H., & Pickett, M. K. 2005, in ASP Conf. Ser. 341, Chondrites and the Protoplanetary Disk, 839
* Boley, A. C., Durisen, R. H., Nordlund, Å., & Lord, J. 2007b, ApJ, 665, 1254
* Boley, A. C., Hartquist, T. W., Durisen, R. H., & Michael, S. 2007a, ApJ, 656, L89
* Boss, A. P. 1997, Science, 276, 1836
* -. 1998, ApJ, 503, 923
* -. 2001, ApJ, 562, 367
* -. 2002, ApJ, 567, L149
* -. 2004a, ApJ, 610, 456
* -. 2004b, ApJ, 616, 1265
* -. 2005, ApJ, 629, 535
* -. 2007, ApJ, 661, L73
* -. 2008, arXiv:0801.4371
* Boss, A. P., & Durisen, R. H. 2005, ApJ, 621, L137
* Cai, K., Durisen, R. H., Michael, S., Boley, A. C., Mejia, A. C., Pickett, M. K., & D'Alessio, P. 2006, ApJ, 636, L149
* Cai, K., Durisen, R. H., Boley, A. C., Pickett, M. K., & Mejia, A. C. 2008, ApJ, 673, 1138
* Cameron, A. G. W. 1978, M&P, 18, 5
* Ciesla, F. J., & Hood, L. L. 2002, Icarus, 158, 281
* Clarke, C. J., Harper-Clark, E., & Lodato, G. 2007, MNRAS, 381, 1543
* Cohl, H. S., & Tohline, J. E. 1999, ApJ, 527, 86
* Cuzzi, J. N., & Alexander, C. M. O'D. 2006, Nature, 441, 483
* D'Alessio, P., Calvet, N., & Hartmann, L. 2001, ApJ, 553, 321
* D'Alessio, P., Calvet, N., Hartmann, L., Franco-Hernandez, R., & Servin, H. 2006, ApJ, 638, 314
* Desch, S. J. 1998, Ph.D. Thesis, University of Illinois at Urbana-Champaign
* -. 2004, ApJ, 608, 509
* -. 2007, ApJ, 671, 878
* Desch, S. J., & Connolly, H. C., Jr. 2002, M&PSA, 37, 183
* Durisen, R. H., Boss, A., Mayer, L., Nelson, A., Quinn, T., & Rice, K. 2007, in Protostars and Planets V, ed. B. Reipurth, D. Jewitt, & K. Keil (Tucson: Univ. Arizona Press), 607
* Durisen, R. H., Hartquist, T. W., & Pickett, M. K. 2008, ApJ, in press
* Durisen, R. H., Gingold, R. A., Tohline, J. E., & Boss, A. P. 1986, ApJ, 305, 281
* Fleming, T. P., & Stone, J. M. 2003, ApJ, 585, 908
* Gammie, C. F. 1996, ApJ, 457, 355
* -. 2001, ApJ, 553, 174
* Green, J. D., Hartmann, L., Calvet, N., Watson, D. M., Ibrahimov, M., Furlan, E., Sargent, B., & Forrest, W. J. 2006, ApJ, 648, 1099
* Harker, D. E., & Desch, S. J. 2002, ApJ, 565, 109
* Hartmann, L., D'Alessio, P., Calvet, N., & Muzerolle, J. 2006, ApJ, 648, 484
* Hartmann, L., & Kenyon, S. J. 1996, ARA&A, 34, 207
* Hayashi, C., Nakazawa, K., & Nakagawa, Y. 1985, in Protostars and Planets II, ed. D. C. Black & M. S. Matthews (Tucson: Univ. Arizona Press), 1100
* Iida, A., Nakamoto, T., Susa, H., & Nakagawa, Y. 2001, Icarus, 153, 430
* Johnson, B. M., & Gammie, C. F. 2003, ApJ, 597, 131
* Klahr, H. H., & Bodenheimer, P. 2003, ApJ, 582, 869
* Kley, W., & Lin, D. N. C. 1999, ApJ, 518, 833
* Krot, A. N., Scott, E. R. D., & Reipurth, B., eds. 2005, ASP Conf. Ser. 341, Chondrites and the Protoplanetary Disk
* Krumholz, M. R., Klein, R. I., McKee, C. F., & Bolstad, J. 2007, ApJ, 667, 626
* Krumholz, M. R., Klein, R. I., & McKee, C. F. 2007, ApJ, 656, 959
* Kuiper, G. P. 1951, PNAS, 37, 1
* Levison, H. F., Morbidelli, A., Van Laerhoven, C., Gomes, R., & Tsiganis, K. 2007, arXiv:0712.0553
* Lynden-Bell, D., & Kalnajs, A. J. 1972, MNRAS, 157, 1
* Mayer, L., Lufkin, G., Quinn, T., & Wadsley, J. 2007, ApJ, 661, 77
* McKeegan, K. D., et al. 2006, Science, 314, 1724
* Mejia, A. C. 2004, Ph.D. Thesis, Indiana University
* Mejia, A. C., Durisen, R. H., Pickett, M. K., & Cai, K. 2005, ApJ, 619, 1098
* Miura, H., & Nakamoto, T. 2006, ApJ, 651, 1272
* Nelson, A. F. 2006, MNRAS, 373, 1039
* Nelson, A. F., Benz, W., & Ruzmaikina, T. V. 2000, ApJ, 529, 357
* Oishi, J. S., Mac Low, M.-M., & Menou, K. 2007, arXiv:astro-ph/0702549
* Osorio, M., D'Alessio, P., Muzerolle, J., Calvet, N., & Hartmann, L. 2003, ApJ, 586, 1148
* Pickett, B. K., Cassen, P. M., Durisen, R. H., & Link, R. P. 1998, ApJ, 504, 468
* -. 2000a, ApJ, 529, 1034
* Pickett, B. K., Mejia, A. C., Durisen, R. H., Cassen, P. M., Berry, D. K., & Link, R. P. 2003, ApJ, 590, 1060
* Pollack, J. B., Hollenbach, D., Beckwith, S., Simonelli, D. P., Roush, T., & Fong, W. 1994, ApJ, 421, 615
* Press, W. H., Flannery, B. P., & Teukolsky, S. A. 1986, Numerical Recipes: The Art of Scientific Computing (Cambridge: Cambridge Univ. Press)
* Rafikov, R. R. 2005, ApJ, 621, L69
* -. 2007, ApJ, 662, 642
* Rice, W. K. M., Armitage, P. J., Bate, M. R., & Bonnell, I. A. 2003, MNRAS, 339, 1025
* Rice, W. K. M., Lodato, G., & Armitage, P. J. 2005, MNRAS, 364, L56
* Roberts, W. W., Jr., Huntley, J. M., & van Albada, G. D. 1979, ApJ, 233, 67
* Russell, S. S., Krot, A. N., Huss, G. R., Keil, K., Itoh, S., Yurimoto, H., & MacPherson, G. J. 2005, in ASP Conf. Ser. 341, Chondrites and the Protoplanetary Disk, 317
* Sano, T., Miyama, S. M., Umebayashi, T., & Nakano, T. 2000, ApJ, 543, 486
* Stamatellos, D., Whitworth, A. P., & Ward-Thompson, D. 2007, MNRAS, 379, 1390
* Stamatellos, D., & Whitworth, A. P. 2008, A&A, in press
* Stepinski, T. F. 1992, Icarus, 97, 130
* Tomley, L., Cassen, P., & Steiman-Cameron, T. 1991, ApJ, 382, 530
* Tomley, L., Steiman-Cameron, T. Y., & Cassen, P. 1994, ApJ, 422, 850
* Toomre, A. 1964, ApJ, 139, 1217
* Wood, J. A. 1963, Icarus, 2, 152
* -. 1996, Meteoritics Planet. Sci., 31, 641
* -. 2005, in ASP Conf. Ser. 341, Chondrites and the Protoplanetary Disk, 953
* Wooden, D., Harker, D. E., & Brearley, A. J. 2005, in ASP Conf. Ser. 341, Chondrites and the Protoplanetary Disk, 774
* Zhu, Z., Hartmann, L., Calvet, N., Hernandez, J., Muzerolle, J., & Tannirkulam, A. 2007, ApJ, 669, 483
\\begin{table}
\\begin{tabular}{c c c c c} \\hline Sim. Name & Resolution \\(r,\\ \\phi,\\ z\\) & Duration & \\(Q_{\\rm av}\\) & \\(\\langle A_{+}\\rangle\\) \\\\ \\hline \\hline Adiabatic & 256, 512, 64 & 33 yr & & \\\\
512 Standard & 256, 512, 64 & 80 yr & 1.7 & 1.0 \\\\
1024 Standard & 512, 1024, 128 & 55 yr & – & \\\\
512 \\(10^{-2}\\)\\(\\kappa\\) & 256, 512, 64 & 110 yr & 1.7 & 1.3 \\\\
1024 \\(10^{-2}\\)\\(\\kappa\\) & 512, 1024, 128 & 110 yr & 1.8 & 1.6 \\\\
512 \\(10^{-3}\\)\\(\\kappa\\) & 256, 512, 64 & 110 yr & 1.5 & 2.1 \\\\
1024 \\(10^{-3}\\)\\(\\kappa\\) & 512, 1024, 128 & 110 yr & 1.5 & 2.2 \\\\
512 \\(10^{-4}\\)\\(\\kappa\\) & 256, 512, 64 & 110 yr & 1.4 & 2.1 \\\\
1024 \\(10^{-4}\\)\\(\\kappa\\) & 512, 1024, 128 & 110 yr & 1.4 & 2.4 \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Simulation information. The name of the simulation indicates the fraction of the standard opacity that is used during the evolution of the disk. \\(Q_{\\rm av}\\) is the average \\(Q\\) between 3 and 6 AU for a snapshot near 70/80 yr, depending on the simulation (see text). The \\(A_{+}\\) measurement is the sum of the time-averaged Fourier components between roughly 55 and 80 yr.
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline Sim. & TS & \\(\\frac{\\rm TS}{\\rm FE}\\) & TS Mass (M\\({}_{\\oplus}\\)) & CS Mass (M\\({}_{\\oplus}\\)) & Total FE \\\\ \\hline \\hline
512 standard & 8700 & 9 & 4(3) & 0 & 992 \\\\
512 \\(10^{-2}\\)\\(\\kappa\\) & 8600 & 9 & 4(3) & 0 & 990 \\\\
1024 \\(10^{-2}\\)\\(\\kappa\\) & 9500 & 10 & 4(3) & 0 & 962 \\\\
512 \\(10^{-3}\\)\\(\\kappa\\) & 8300 & 9 & 4(3) & 0 & 978 \\\\
1024 \\(10^{-3}\\)\\(\\kappa\\) & 9500 & 10 & 4(3) & \\(<1\\) & 964 \\\\
512 \\(10^{-4}\\)\\(\\kappa\\) & 9300 & 10 & 4(3) & 3 & 939 \\\\
1024 \\(10^{-4}\\)\\(\\kappa\\) & 11000 & 12 & 5(3) & 2 & 913 \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: Shock information for the time period between 10 and 70 yr. All estimates are based on the fractional pressure change \\(\\eta\\) (see text). TS indicates the total shocks encountered by all fluid elements. TS/FE gives the average number of shocks for a fluid element. TS Mass is a rough estimate of the total dust mass pushed through shocks in each disk, and CS Mass is the total dust mass that encounters a chondrule-forming shock. Finally, Total FE indicates the number of fluid elements that remain in the simulated disk at the end of the 60 yr time period. The 1024 standard simulation is omitted because it was stopped after 55 yr.
Figure 1: Expected shock speeds \\(u_{1}\\) based on equation (1). In the legend, \\(u_{1}(1,10)=u_{1}(r_{p}=1\\;{\\rm AU}\\), \\(i=10^{\\circ})\\) for corotation radius of the spiral pattern \\(r_{p}\\) and pitch angle \\(i\\). The colored regions highlight where chondrule formation is expected based on the Desch & Connolly (2002) shock calculations (see text), with the yellow region appropriate for an MMSN density distribution and the orange for the same density distribution but with \\(10\\times\\) the mass. Shocks that occur inside corotation for \\(i=10^{\\circ}\\) will not produce chondrules between 1 and 5 AU. However, chondrules can be produced by these low pitch angle spirals in shocks outside corotation. If the pitch angle is fairly open, such as \\(i\\approx 30^{\\circ}\\), then a spiral wave with a corotation near 5 AU can produce chondrules in the asteroid belt and at comet distances.
Figure 2: Surface density profiles for the ICs with (solid) and without (dashed) the density enhancement. The peak enhancement occurs near log(4.5 AU) \\(\\approx 0.65\\).
Figure 3: Volume density contours for the initial model with (right) and without (left) the density enhancement. The normalization \\(r_{0}=10\\) AU.
Figure 4: Surface density plots for the 512 simulations around 33 yr.
Figure 5: Surface density plots for the 512 simulations around 77 yr.
Figure 6: Surface density plots for the 1024 simulations around 33 yr.
Figure 7: Surface density plots for the 1024 simulations around 78 yr. The 1024 standard simulation is only run to about 55 yr (see text).
Figure 8: \\(A_{m}\\) profiles for the 512 simulations (crosses) and the 1024 simulations (squares). The standard simulation does not have a 1024 counterpart because it was stopped at about 55 yr (see text). These profiles demonstrate that the low-order structure dominates in the disk, with the Fourier components dropping off quickly near \\(m\\sim 10\\). The lower opacity simulations have stronger \\(A_{m}\\)s, but there do seem to be some resolution effects present at large \\(m\\). The \\(10^{-3}\\)\\(\\kappa\\) simulation is not shown for readability; it falls between the \\(10^{-2}\\) and \\(10^{-4}\\)\\(\\kappa\\) simulations and the 512 and 1024 resolutions follow each other closely.
Figure 9: Cumulative energy losses by radiation (blue) and heating by shocks (red) for the standard and \\(10^{-2}\\)\\(\\kappa\\) simulations. The standard simulations are stopped early because the temperatures in the disk become too high to evolve due to inefficient cooling. The shock heating in the adiabatic (no cooling) simulation is indicated by the dark, dot-dash line. It is difficult to see because it follows the standard curves closely. For clarification, a \\(\\Lambda\\) is shown adjacent to the cooling curves and a \\(\\Gamma\\) is shown next to the heating curves.
Figure 10: Similar to Figure 9, but for the \\(10^{-3}\\) and \\(10^{-4}\\)\\(\\kappa\\) simulations.
Figure 11: \\(T_{b}\\) and \\(T_{c}\\) maps for the 1024 standard and \\(10^{-2}\\)\\(\\kappa\\) simulations. As the opacity is lowered, the disks become much more efficient at cooling. See also Figure 12.
Figure 12: Same as Figure 11, but for the 1024 \\(10^{-3}\\) and \\(10^{-4}\\)\\(\\kappa\\) simulations.
Figure 14: Same as Figure 13, but for the 1024 \\(10^{-3}\\)\\(\\kappa\\) simulation.
Figure 15: Same as Figure 13, but for the 1024 \\(10^{-4}\\)\\(\\kappa\\) simulation.
Figure 16: Time-averaged mass fluxes during the 0-55 yr period for the 512 Standard and the 1024 reduced opacity simulations. For each simulation, the mass fluxes are at FU Ori outburst levels. Positive values here correspond to net inflow, and negative values to net outflow.
Figure 17: Similar to the 512 \\(10^{-4}\\)\\(\\kappa\\) simulation, but with the cooling artificially enhanced by a factor of two. The fragmentation is consistent with analytic predictions.
Figure 18: Initial \\(r\\) and minimum, average, and maximum temperature plots for fluid elements in the 512 Standard and the 1024 reduced opacity simulations. The temperature variations are quite large, approaching maximum temperatures of 1000 K. The values on the abscissa are due to the fluid elements that were lost from the simulated disk.
Figure 19: A thermal and spatial history for a fluid element in the 1024 \\(10^{-4}\\)\\(\\kappa\\) simulation. The fluid element shows strong variations in radius and height as it passes through shocks, with a net inflow due to gravitational torques. Strong variations in the pressure and temperature are also present. The strong shock near 30 yr is a potential chondrule-forming shock based on the pressure profile. However, the temperature peak is not as high as one would expect for the same pressure jump in a 1D adiabatic shock. This discrepancy may be due to efficient radiative cooling or to additional wave effects caused by shock bores.
Figure 20: Shocks on the \\(u_{1}\\)-\\(\\rho\\) plane for the 1024 \\(10^{-4}\\)\\(\\kappa\\) simulation, as determined from the fractional pressure change. The green area shows the chondrule-forming region, roughly based on the results of Desch & Connolly (2002). The green strip should not be thought of as definitive, and slight changes in the strip’s location could identify more chondrule-forming shocks, particularly at low \\(u_{1}\\). If the pre- and post-shock temperature ratio is used to determine \\(u_{1}\\), no chondrule-forming shocks are detected and most shocks have a \\(u_{1}<4\\) km s\\({}^{-1}\\). | Using analytic arguments and numerical simulations, we examine whether chondrule formation and the FU Orionis phenomenon can be caused by the burst-like onset of gravitational instabilities (GIs) in dead zones. At least two scenarios for bursting dead zones can work, in principle. If the disk is on the verge of fragmention, GI activation near \\(r\\sim 4\\) to 5 AU can produce chondrule-forming shocks, at least under extreme conditions. Mass fluxes are also high enough during the onset of GIs to suggest that the outburst is related to an FU Orionis phenomenon. This situation is demonstrated by numerical simulations. In contrast, as supported by analytic arguments, if the burst takes place close to \\(r\\sim 1\\) AU, then even low pitch angle spiral waves can create chondrule-producing shocks and outbursts. We also study the stability of the massive disks in our simulations against fragmentation and find that although disk evolution is sensitive to changes in opacity, the disks we study do not fragment, even at high resolution and even for extreme assumptions.
accretion, accretion disks - hydrodynamics - instabilities - planetary systems: protoplanetary disks - solar system: formation | Give a concise overview of the text below. |
Constraining the density dependence of nucleon symmetry energy with heavy-ion reactions and its astrophysical impact
24th Winter Workshop on Nuclear Dynamics
South Padre, Texas, USA
April 5-12, 2008
Bao-An Li\\({}^{1}\\), Lie-Wen Chen\\({}^{2}\\), Che Ming Ko\\({}^{3}\\), Plamen G. Krastev\\({}^{1}\\) and Aaron Worley\\({}^{1}\\)
\\({}^{1}\\) Department of Physics, Texas A&M University-Commerce,
Commerce, Texas 75429-3011, USA
\\({}^{2}\\) Institute of Theoretical Physics, Shanghai Jiao Tong University,
Shanghai 200240, P.R. China
\\({}^{3}\\) Cyclotron Institute and Department of Physics, Texas A&M University,
College Station, Texas 77843-3366, USA
Symmetry Energy, Equation of State, Neutron-Rich Nuclear Matter, Neutron Stars, Gravitational Waves
21.65.Cd, 21.65.Ef, 25.70.-z, 21.30.Fe, 21.10.Gv, 21.60.-c
## 1 EOS of neutron-rich nuclear matter partially constrained by heavy-ion reactions
The EOS of isospin asymmetric nuclear matter can be written within the well-known parabolic approximation as
\\[E(\\rho,\\delta)=E(\\rho,\\delta=0)+E_{\\rm sym}(\\rho)\\delta^{2}+{\\cal O}(\\delta^{4}), \\tag{1}\\]

where \\(\\delta\\equiv(\\rho_{n}-\\rho_{p})/(\\rho_{p}+\\rho_{n})\\) is the isospin asymmetry with \\(\\rho_{n}\\) and \\(\\rho_{p}\\) denoting, respectively, the neutron and proton densities, \\(E(\\rho,\\delta=0)\\) is the EOS of symmetric nuclear matter, and \\(E_{\\rm sym}(\\rho)\\) is the density-dependent nuclear symmetry energy. The latter is very important for many interesting astrophysical problems [1, 2], the structure of rare isotopes [3] and heavy-ion reactions [4, 5, 6, 7, 8]. However, the density dependence of the nuclear symmetry energy has been the most uncertain part of the EOS of neutron-rich matter. Fortunately, comprehensive analyses of several isospin effects including the isospin diffusion [9, 10] and isoscaling [11] in heavy-ion reactions and the size of the neutron skin in heavy nuclei [12] have allowed us to constrain the density dependence of the symmetry energy at sub-saturation densities within approximately \\(31.6(\\rho/\\rho_{0})^{0.69}\\) and \\(31.6(\\rho/\\rho_{0})^{1.05}\\) as labelled by \\(x=0\\) and \\(x=-1\\), respectively, in the lower panel of Fig. 1 [13, 14]. While these constraints are only valid for sub-saturation densities and still suffer from some uncertainties, compared to the early situation they represent significant progress in the field. Further progress is expected from both the parity violating electron scattering experiments [15] at the Jefferson lab that will help pin down the low density part of the symmetry energy and heavy-ion reactions with high energy radioactive beams at several facilities that will help constrain the high density behavior of the symmetry energy [8].
For many astrophysical studies, the EOS is usually expressed in terms of the pressure as a function of density and isospin asymmetry. Shown in Fig. 1 are the pressures for two extreme cases: symmetric (upper panel) and pure neutron matter (lower panel). The green area in the density range of \\(2-4.6\\rho_{0}\\) is the experimental constraint on the pressure \\(P_{0}\\) of symmetric nuclear matter extracted by Danielewicz, Lacey and Lynch from analyzing the collective flow data from relativistic heavy-ion collisions [6]. It is seen that results from mean-field calculations using the phenomenological momentum-dependent (MDI) interaction [16], the Dirac-Brueckner-Hartree-Fock approach with the Bonn B potential (DBHF) [17], and the variational calculations by Akmal, Pandharipande, and Ravenhall (APR) [18] are all consistent with this constraint. For pure neutron matter, its pressure is \\(P_{\\rm PNM}=P_{0}+\\rho^{2}dE_{\\rm sym}/d\\rho\\) and depends on the density dependence of the nuclear symmetry energy. Since the constraints on the symmetry energy from terrestrial laboratory experiments are only available for densities less than about \\(1.2\\rho_{0}\\) as indicated by the green and red squares in the lower panel, which is in contrast to the constraint on the EOS of symmetric nuclear matter that is only available at much higher densities, the most reliable estimate of the EOS of neutron-rich matter can thus be obtained by extrapolating the underlying model EOS for symmetric matter and the symmetry energy in their respective density ranges to all densities. Shown by the shaded black area in the lower panel is the resulting best estimate of the pressure of high density pure neutron matter based on the predictions from the MDI interaction with x=0 and x=-1 as the lower and upper bounds on the symmetry energy and the flow-constrained EOS of symmetric nuclear matter. As one expects and consistent with the estimate in Ref. [6], the estimated error bars of the high density pure neutron matter EOS are much wider than the uncertainty range of the EOS of symmetric nuclear matter. For the four interactions indicated in the figure, their predicted EOS's cannot be distinguished by the estimated constraint on the high density pure neutron matter. In the following, the astrophysical consequences of this partially constrained EOS of neutron-rich matter on the mass-radius correlation, moment of inertia, and the elliptical deformation and gravitational radiation of (rapidly) rotating neutron stars are briefly discussed. More details of our studies on these topics can be found in Refs. [19, 20, 21, 22, 23].

Figure 1: (Color online) Pressure as a function of density for symmetric (upper panel) and pure neutron (lower panel) matter. The green area in the upper panel is the experimental constraint on symmetric matter. The corresponding constraint on the pressure of pure neutron matter obtained by combining the flow data and an extrapolation of the symmetry energy functionals constrained below \\(1.2\\rho_{0}\\) by the isospin diffusion data is the shaded black area in the lower panel. Results taken from Refs. [6, 19].
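As a minimal sketch of this bookkeeping, assuming a saturation density \\(\\rho_{0}=0.16\\) fm\\({}^{-3}\\) and the laboratory-constrained symmetry energy \\(E_{\\rm sym}=31.6(\\rho/\\rho_{0})^{\\gamma}\\) MeV with \\(\\gamma=0.69\\) (\\(x=0\\)) or \\(1.05\\) (\\(x=-1\\)), the symmetry-energy pieces of \\(E(\\rho,\\delta)\\) and \\(P_{\\rm PNM}\\) can be written as follows; the symmetric-matter term must be supplied by a model or by the flow constraint.

```c
#include <math.h>

#define RHO0 0.16  /* assumed saturation density, fm^-3 */

/* Esym(rho) = 31.6 * (rho/rho0)^g in MeV, g = 0.69 or 1.05 */
double esym(double rho, double g) { return 31.6 * pow(rho / RHO0, g); }

/* E(rho, delta) per nucleon in MeV, eq. (1), with the symmetric-matter
   term e0 supplied by the caller */
double e_asym(double e0, double rho, double delta, double g)
{
    return e0 + esym(rho, g) * delta * delta;
}

/* Symmetry contribution to the pure-neutron-matter pressure,
   rho^2 dEsym/drho, in MeV fm^-3 */
double psym_pnm(double rho, double g)
{
    return 31.6 * g * RHO0 * pow(rho / RHO0, g + 1.0);
}
```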
## 2 Nuclear constraints on the mass-radius correlation, moment of inertia, elliptical deformation and gravitational radiation of rapidly rotating neutron stars
The partially constrained EOS of neutron-rich nuclear matter has important ramifications on properties of neutron stars. As a first example, in Fig. 2 we show the mass-radius correlations for the two fastest rotating neutron stars known as of today. These pulsars spin at 716 [24] and 1122 Hz [25], respectively. However, based only on the observational data available so far, their properties have not yet been fully understood. The analysis of their properties based on the EOS and symmetry energy constrained by the terrestrial laboratory data is thus especially interesting. Setting the observed frequency of the pulsar as the Kepler frequency, corresponding to the highest possible frequency for a star before its starts to shed mass at the equator, one can obtain an estimate of its maximum radius as a function of mass \\(M\\),
\\[R_{\\rm max}(M)=\\chi\\left(\\frac{M}{1.4M_{\\odot}}\\right)^{1/3}\\ {\\rm km}, \\tag{2}\\]
with \\(\\chi=20.94\\) for rotational frequency \\(\\nu=716\\) Hz and \\(\\chi=15.52\\) for \\(\\nu=1122\\) Hz. The maximum radii are shown with the dotted lines in Fig. 2. It is seen that the range of allowed masses supported by a given EOS for rapidly rotating neutron stars becomes narrower than the one of static configurations. This effect becomes stronger with increasing frequency and depends upon the EOS. Since predictions from the \\(x=0\\) and \\(x=-1\\) EOSs represent the limits of the neutron star models consistent with the experimental data from terrestrial nuclear laboratories, one can predict that the mass of the neutron star rotating at 1122 Hz is between 1.7 and 2.1 solar masses [22].

Figure 2: (Color online) Gravitational mass versus equatorial radius for neutron stars rotating at \\(\\nu=716\\) Hz and \\(\\nu=1122\\) Hz. Taken from Ref. [22].
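The quoted \\(\\chi\\) values can be reproduced directly: they essentially coincide with the Newtonian Kepler condition \\(R^{3}=GM/(2\\pi\\nu)^{2}\\) evaluated at \\(M=1.4M_{\\odot}\\), which is assumed in the sketch below; relativistic corrections are absorbed into \\(\\chi\\) in more careful treatments.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double G = 6.674e-8, MSUN = 1.989e33;  /* cgs */
    const double nu[2] = {716.0, 1122.0};        /* Hz */
    for (int k = 0; k < 2; k++) {
        double w = 2.0 * M_PI * nu[k];
        double R = cbrt(G * 1.4 * MSUN / (w * w)) / 1.0e5;  /* km */
        printf("nu = %6.0f Hz : chi ~ %.2f km\n", nu[k], R);
    }
    /* prints ~20.94 and ~15.52 km, matching the chi values above */
    return 0;
}
```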
Another interesting example is the gravitational radiation expected from elliptically deformed pulsars. Gravitational waves (GWs) are tiny disturbances in space-time and are a fundamental, although not yet directly confirmed, prediction of General Relativity. Gravitational wave astrophysics would open an entirely new non-electromagnetic window to the Cosmos, making it possible to probe physics that is hidden or dark to current electromagnetic observations [26]. Elliptically deformed pulsars are among the primary possible sources of the GWs. Very recently the LIGO and GEO collaborations have set upper limits on the GWs expected from 78 radio pulsars [27]. Gravitational waves are characterized by a strain amplitude \\(h_{0}\\) which can be written as
\\[h_{0}=\\chi\\frac{\\Phi_{22}\\nu^{2}}{r}, \\tag{3}\\]
with \\(\\chi=\\sqrt{2048\\pi^{5}/15}\\,G/c^{4}\\). In the above equation, \\(r\\) is the distance between the pulsar and the detector, and \\(\\Phi_{22}\\) is the quadrupole moment of the mass distribution. For slowly rotating neutron stars, one has [28]

\\[\\Phi_{22,max}=2.4\\times 10^{38}g\\ cm^{2}\\left(\\frac{\\sigma}{10^{-2}}\\right)\\left(\\frac{R}{10km}\\right)^{6.26}\\left(\\frac{1.4M_{\\odot}}{M}\\right)^{1.2}. \\tag{4}\\]

In the above expression, \\(\\sigma\\) is the breaking strain of the neutron star crust, which is rather uncertain at the present time and lies in the wide range \\(\\sigma=[10^{-5}-10^{-2}]\\) [29]. In our estimate, we use the maximum breaking strength, i.e. \\(\\sigma=10^{-2}\\). In Fig. 3 we display the GW strain amplitude, \\(h_{0}\\), as a function of stellar mass for three selected millisecond pulsars which are relatively close to Earth (\\(r<0.4kpc\\)) and have rotational frequencies below 300 Hz. It is interesting to note that the predicted \\(h_{0}\\) is above the design sensitivity of the LIGO detector. The error bars in Fig. 3 between the \\(x=0\\) and \\(x=-1\\) EOSs provide a constraint on the _maximal_ strain amplitude of the gravitational waves emitted by the millisecond pulsars considered here. The specific case shown in the figure is for neutron star models of \\(1.4M_{\\odot}\\). Depending on the exact rotational frequency, distance to detector, and details of the EOS, the _maximal_ \\(h_{0}\\) is in the range \\(\\sim[0.4-1.5]\\times 10^{-24}\\). These estimates do not take into account the uncertainties in the distance measurements. They also should be regarded as upper limits since the quadrupole moment (Eq. (4)) has been calculated with \\(\\sigma=10^{-2}\\) (where \\(\\sigma\\) can go as low as \\(10^{-5}\\)).

Figure 3: Gravitational-wave strain amplitude as a function of the neutron star mass. The error bars between the \\(x=0\\) and \\(x=-1\\) EOSs provide a limit on the strain amplitude of the gravitational waves to be expected from these neutron stars, and show a specific case for stellar models of \\(1.4M_{\\odot}\\). Taken from Ref. [19].
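Equations (3) and (4) combine into a one-line numerical estimate; the pulsar parameters in the sketch below are hypothetical fiducial values, not those of the pulsars in Fig. 3.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double G = 6.674e-8, C = 2.998e10, KPC = 3.086e21;  /* cgs */
    double chi = sqrt(2048.0 * pow(M_PI, 5) / 15.0) * G / pow(C, 4.0);

    /* maximum quadrupole moment, eq. (4), with sigma = 1e-2 */
    double M = 1.4, R = 10.0;  /* Msun, km (fiducial) */
    double phi22 = 2.4e38 * pow(R / 10.0, 6.26) * pow(1.4 / M, 1.2);

    double nu = 250.0, r = 0.25 * KPC;  /* hypothetical spin and distance */
    /* prints ~3e-25, the same order as the estimates in Fig. 3 */
    printf("h0 ~ %.1e\n", chi * phi22 * nu * nu / r);
    return 0;
}
```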
To emit GWs a pulsar must have a quadrupole deformation. The latter is normally characterized by the ellipticity which is related to the neutron star maximum quadrupole moment \\(\\Phi_{22}\\) and the moment of inertia via [28]
\\[\\epsilon=\\sqrt{\\frac{8\\pi}{15}}\\frac{\\Phi_{22}}{I_{zz}}. \\tag{5}\\]
For slowly rotating neutron stars, one can use the following empirical relation [30]
\\[I_{zz}\\approx(0.237\\pm 0.008)MR^{2}\\left[1+4.2\\frac{Mkm}{M_{\\odot}R}+90\\left( \\frac{Mkm}{M_{\\odot}R}\\right)^{4}\\right] \\tag{6}\\]
This expression is shown to hold for a wide class of equations of state which do not exhibit considerable softening and for neutron star models with masses above \\(1M_{\\odot}\\)[30].
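Equations (5) and (6) can likewise be evaluated directly; in the sketch below, the \\(1.4M_{\\odot}\\), 10 km case lands near the fiducial \\(10^{45}\\) g cm\\({}^{2}\\) mentioned in the next paragraph.

```c
#include <math.h>
#include <stdio.h>

/* Empirical moment of inertia, eq. (6): M in Msun, R in km,
   result in g cm^2. */
double izz(double M, double R)
{
    double x = M / R;  /* (M/Msun)/(R/km) */
    return 0.237 * (M * 1.989e33) * pow(R * 1.0e5, 2.0)
         * (1.0 + 4.2 * x + 90.0 * pow(x, 4.0));
}

int main(void)
{
    double M = 1.4, R = 10.0;
    double phi22 = 2.4e38 * pow(R / 10.0, 6.26) * pow(1.4 / M, 1.2);
    double I = izz(M, R);
    double eps = sqrt(8.0 * M_PI / 15.0) * phi22 / I;  /* eq. (5) */
    /* prints Izz ~ 1.1e45 g cm^2 and epsilon ~ 2.9e-7 */
    printf("Izz = %.2e g cm^2, epsilon = %.1e\n", I, eps);
    return 0;
}
```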
Fig. 4 displays the neutron star moment of inertia (left panel) and ellipticity (right panel). It is interesting to mention that a fiducial value of \\(I_{zz}=10^{45}\\)g cm\\({}^{2}\\) is normally assumed in the literature. Our calculations indicate that the \\(I_{zz}\\) is strongly mass dependent. This observation is consistent with previous calculations. Moreover, the ellipticity decreases with increasing mass. The magnitude is above the lowest upper limit of \\(4\\times 10^{-7}\\) estimated for the PSR J2124-3358 [27]. Interestingly, essentially all observables depend strongly on the EOS of neutron-rich matter. In particular, the MDI EOSs, adopting the same symmetric matter EOS but different density dependence of the symmetry energy, set useful nuclear boundaries for these gravitational wave observables.
In summary, the heavy-ion physics community has made significant progress in constraining the EOS of neutron-rich nuclear matter in recent years. In particular, comprehensive analyses of several isospin effects including the isospin diffusion and isoscaling in heavy-ion reactions and the size of neutron skins in heavy nuclei have allowed us to constrain the symmetry energy at sub-saturation densities to between approximately \\(31.6(\\rho/\\rho_{0})^{0.69}\\) MeV and \\(31.6(\\rho/\\rho_{0})^{1.05}\\) MeV. While the currently existing data have only allowed us to constrain the symmetry energy and thus the EOS of neutron-rich matter in a narrow range, they can already help to put some useful constraints on several interesting observables in astrophysics, such as the mass-radius correlation, moment of inertia, and the elliptical deformation and gravitational radiation of (rapidly) rotating neutron stars. With the parity-violating electron scattering experiments and heavy-ion reactions with high energy radioactive beams, it will be possible in the future to map out accurately the entire density dependence of the symmetry energy.
## Acknowledgments
This work was supported in part by the US National Science Foundation under Grant No. PHY-0652548, PHY-0757839 and PHY-0457265, the Research Corporation under Award No. 7123, the Advanced Research Program of the Texas Coordinating Board of Higher Education under grant no. 003565-0004-2007, the Welch Foundation under Grant No. A-1358, the National Natural Science Foundation of China under Grant Nos. 10575071 and 10675082, MOE of China under project NCET-05-0392, Shanghai Rising-Star Program under Grant No.06QA14024, the SRF for ROCS, SEM of China, and the National Basic Research Program of China (973 Program) under Contract No. 2007CB815004.
Figure 4: Neutron star moment of inertia (left panel) and ellipticity (right panel). Taken from ref. [19].
## References
* [1] J.M. Lattimer and M. Prakash, Phys. Rep. **333**, 121 (2000); Astrophys. J. **550**, 426 (2001); Science **304**, 536 (2004).
* [2] A. W. Steiner, M. Prakash, J.M. Lattimer and P.J. Ellis, nucl-th/0410066, Phys. Rep. **411**, 325 (2005).
* [3] B.A. Brown, Phys. Rev. Let. **85**, 5296 (2000).
* [4] B.A. Li, C.M. Ko and W. Bauer, topical review, Int. J. Mod. Phys. E**7**, 147 (1998).
* [5]_Isospin Physics in Heavy-Ion Collisions at Intermediate Energies_, Eds. B. A. Li and W. Udo Schröder (Nova Science Publishers, Inc., New York, 2001).
* [6] P. Danielewicz, R. Lacey and W.G. Lynch, Science **298**, 1592 (2002).
* [7] V. Baran, M. Colonna, V. Greco and M. Di Toro, Phys. Rep. **410**, 335 (2005).
* [8] B.A. Li, L.W. Chen and C.M. Ko, Phys. Rep. (2008) in press, arXiv:0804.3580 [nucl-th].
* [9] M.B. Tsang _et al._, Phys. Rev. Lett. **92**, 062701 (2004).
* [10] T.X. Liu _et al._, Phys. Rev. C **76**, 034603 (2007).
* [11] D. Shetty, S.J. Yennello, G.A. Souliotis, Phys. Rev. C **75**, 034602 (2007).
* [12] A.W. Steiner, B.A. Li, Phys. Rev. C **72**, 041601 (R) (2005).
* [13] L.W. Chen, C.M. Ko, B.A. Li, Phys. Rev. Lett. **94**, 032701 (2005).
* [14] B.A. Li, L.W. Chen, Phys. Rev. C **72**, 064611 (2005).
* [15] C. J. Horowitz _et al._, Phys. Rev. C **63**, 025501 (2001).
* [16] C.B. Das, S. Das Gupta, C. Gale, and B.A. Li, Phys. Rev. C **67**, 034611 (2003).
* [17] P.G. Krastev and F. Sammarruca, Phys. Rev. C **74**, 025808 (2006).
* [18] A. Akmal, V.R. Pandharipande, and D.G. Ravenhall, Phys. Rev. C **58**, 1804 (1998).
* [19] P.G. Krastev, B.A. Li and A. Worley, arXiv:0805.1973 [astro-ph].
* [20] B.A. Li, A.W. Steiner, Phys. Lett. B **642**, 436 (2006).
* [21] P. Krastev, B.A. Li, Phys. Rev. C **76**, 055804 (2007).
* [22] P. Krastev, B.A. Li, A. Worley, Astrophys. J. **676**, 1170 (2008).
* [23] A. Worley, P. Krastev, B.A. Li, arXiv:0801.1653 [astro-ph]; Astrophys. J. (2008) in press.
* [24] J.W.T. Hessels, S.M. Ransom, I.H. Stairs, P.C.C. Freire, V.M. Kaspi, F. Camilo, Science **311**, 1901 (2006).
* [25] P. Kaaret, J. Prieskorn _et al._, Astrophys. J. **657**, L97 (2007).
* [26] E. E. Flanagan and S. A. Hughes, New J. Phys. **7**, 204 (2005).
* [27] B. Abbott _et al._ [LIGO Scientific Collaboration], Phys. Rev. Lett. **94**, 181103 (2005); Phys. Rev. D **76**, 042001 (2007).
* [28] B. J. Owen, Phys. Rev. Lett. **95**, 211101 (2005).
* [29] B. Haskell, N. Andersson, D. I. Jones, and L. Samuelsson, Phys. Rev. Lett. **99**, 231101 (2007).
* [30] J. M. Lattimer and B. F. Schutz, Astrophys. J. **629**, 979 (2005). | Recent analyses of several isospin effects in heavy-ion reactions have allowed us to constrain the density dependence of nuclear symmetry energy at sub-saturation densities within a narrow range. Combined with constraints on the Equation of State (EOS) of symmetric nuclear matter obtained previously from analyzing the elliptic flow in relativistic heavy-ion collisions, the EOS of neutron-rich nuclear matter is thus partially constrained. Here we report effects of the partially constrained EOS of neutron-rich nuclear matter on the mass-radius correlation, moment of inertia, elliptical deformation and gravitational radiation of (rapidly) rotating neutron stars. | Condense the content of the following passage. |
Kostas Alexandridis and Bryan C. Pijanowski
Manuscript received June 2008. This work was supported in part by a Purdue Research Foundation (PRF) Fellowship, the US National Science Foundation (Grant WCR-0233648), the EPA STAR Biological Classification Program, the Great Lakes Fisheries Trust, and the Purdue University Department of Forestry and Natural Resources.
_K. Alexandridis_ is a Research Scientist (Regional Futures Analyst) with the Commonwealth Scientific and Industrial Research Organisation (CSIRO) Sustainable Ecosystems, CSIRO Davies Laboratory, University Drive, Douglas, QLD 4814, Australia (phone: +61 7 4753 8630; fax: +61 7473 8650; e-mail: [email protected]).
_B. C. Pijanowski_ is an Associate Professor with Purdue University, Department of Forestry and Natural Resources, West Lafayette, IN, USA (e-mail: [email protected]).
_Index Terms_—Neural network applications, uncertainty, complex systems, Bayesian information.
## I The need for spatially complex stochastic modeling assessment
The aim of this approach is to explore and develop advanced spatial assessment methods and techniques for intelligent land use modeling. Traditional statistical accuracy assessment techniques, although essential for validating observed and historical land use changes, often fail to capture the stochastic character of the modeling dynamics. The research presented here provides a comprehensive guide for assessing the additional informational entropy value of model predictions in the spatially explicit domain of knowledge. It proposes a few alternative metrics and indicators that encapsulate the ability of the modeler to extract higher-order information dynamics from simulation experiments. The term _information entropy_ originates from the information-theoretic concept of entropy, conceived by Claude Shannon in his famous pair of 1948 articles in the Bell System Technical Journal [1] and expanded later in his book "The Mathematical Theory of Communication" [2]. Since the mid-20th century, the field of information theory has experienced an unprecedented development, especially following the expansion of computer science into almost every scientific field and discipline. The concept of entropy in information systems theory allows us to assign quantitative measurements of the uncertainty contained within a random event (or a variable describing it) or a signal representing a process [3, 4].
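As a brief, hedged illustration of the quantity involved, the following sketch computes the Shannon entropy of a discrete probability distribution; the binary example mirrors the presence/absence transitions used throughout this paper.

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum_i p_i * log2(p_i), in bits; zero-probability
    outcomes contribute nothing and are skipped."""
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

# A fair binary transition (maximum uncertainty) vs. a biased one
print(shannon_entropy([0.5, 0.5]))  # -> 1.0 bit
print(shannon_entropy([0.9, 0.1]))  # -> ~0.469 bits
```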
The literature on assessing spatially explicit models of land use change has advanced substantially during the last few years. Many past metrics and assessment techniques have treated land use predictions as complex signals, and the models themselves are often treated as measurement instruments, not different from signal-measurement device assessment in physical experiments [5]. Spatially explicit methods and assessment techniques are used in many remote sensing applications [6]; wildlife habitat models [7]; predicting the presence, abundance and spatial distribution of populations in nature [8]; and analyzing the availability and management of natural resources [9, 10]. At a more theoretical level of analysis, spatially explicit methods of model assessment have been used for testing hypotheses in landscape ecological models [11, 12]; addressing statistical issues of uncertainty in modeling [10]; or analyzing landscape-specific characteristics and spatial distributions [13].
Methodologies and techniques such as the ones referenced above often maintain and preserve traditional statistical approaches to modeling assessment. In doing so, they test the limitations and assumptions of statistical techniques originally designed for analyzing data and variables that do not exhibit spatially explicit variation. The majority of studies where spatially explicit methodologies are used tend to involve relatively simple or linear statistical analyses [14, 15]. While the assessment of modeling complexity has received attention in recent years [16, 17], it has yet to include spatial complexity and the assessment of stochasticity as essential elements of evaluation and analysis. Spatial complexity by itself is often not enough to fully describe and represent the complex system dynamics of coupled human and natural systems. The introduction of spatial complexity in advanced dynamic modeling environments requires the involvement of stochasticity as an essential element of the modeling approach. It rests between traditional spatial assessment and game-theoretic approaches to modeling. The level of uncertainty and incomplete information embedded in the components of a coupled human-biophysical system often necessitates the introduction of stochasticity as a measurable dimension of complexity [18, 19]. Stochastic modeling is widely introduced in modeling complex natural and ecological phenomena [20], population dynamics [21], spatial landscape dynamics [22], intelligent learning and knowledge-based systems [23], economic and utility modeling [24, 25], decision-making, Bayesian and Markov modeling [26, 27] and many other associated fields in science and engineering applications.
A natural extension of the related techniques and methodologies is the development and introduction of spatially explicit, stochastic methods of accuracy assessment for intelligent modeling. In recent years, methods, techniques, and measures of informational entropy exceeded the single dimensionality of traditional statistical techniques (i.e., measuring uncertainty on single random events or variables) and began analyzing multi-dimensional signals. The concept of spatial entropy [28, 29] presents analysis of informational entropy patterns in two-dimensional spatial systems. Along these lines, the remainder of the paper introduces some alternative metrics that aim to assist and enhance the power of our inferential mechanisms in modeling such systems.
## II A case-study: ANN Simulations in South-Eastern Wisconsin region
The study is based on modeling historical urban spatial dynamics using Artificial Neural Network (ANN) simulations for a large spatial region of South-Eastern Wisconsin (SEWI) in the Midwestern region of the U.S. The details of the simulation can be found in a recent paper by Pijanowski et al. [30], where the modeling dynamics and a comprehensive description of the LTM modeling mechanism and experimental design are presented. Descriptions of the LTM model are also provided in Pijanowski et al. [31, 32].
### _Sampling Methodology_
The project area involves a seven-county region in the South-Eastern Wisconsin (SEWI) region, and includes the city of Milwaukee and its wider suburban area [33]. The land use change that occurred in the SEWI region during the period 1963-1990 is considerable. Most of the urban growth has taken place in the suburban metropolitan Milwaukee region and in the areas around medium and large cities in the region (Fig. 1). The county of Waukesha, on the west side of the city of Milwaukee, has absorbed the majority of suburban changes, but important urban and suburban changes have occurred in the remaining counties both to the north (Washington and Ozaukee counties) and to the south (Walworth, Racine and Kenosha counties) of the city of Milwaukee.
The large size of the area under study, and the ability to perform extensive training and learning simulations using the LTM model, make it computationally impossible to simulate the entire region as a whole. Instead, a comprehensive sampling methodology has been implemented. The regional extent of the SEWI area has been divided into equal-area square boxes of 2.5 square kilometers (or 6889 cells of 30 m\\({}^{2}\\) resolution). The square sampling boxes vary in both the number of cells that experienced urban change during the 1963-1990 period and the amount of exclusionary land zones (urban zones in 1963, paved roads, water bodies, protected areas, etc.). Both parameters affect the modeling performance and the ability to assess comparatively the accuracy of the modeling predictions. Thus, a random sampling scheme has been implemented for this modeling exercise, ensuring comparative assessment of the quantities and spatial patterns of land use change in the region. First, the regional sampling boxes have been ranked and classified using a combined index of both the proportion of urban change and the proportion of exclusionary zone within the sampling box. The resulting combined ranking index takes account of both changes within the sampled boxes and represents the ratio between the percentage of urban change and the percentage of variation in exclusionary zone areas across the sampled boxes1:
Footnote 1: In the LTM model, an exclusionary zone is defined as the map area where model pattern training and simulation are not implemented, i.e., areas with no suitability for transitional change. Examples of these zones include map areas such as existing urban land, paved roads, water bodies, and protected areas.
\\[I_{s}=\\frac{\\%\\Delta(urban)}{\\%\\Delta(exclusionary)} \\tag{1}\\]
where \\(s\\) indexes the area sampling boxes in the landscape.
Fig. 1: Land use changes in the SEWI region, 1963-1990.
From the continuous sampling index values derived in the previous step, two threshold values of the sampling index have been used to define three classification index regions for random sampling. The sampling boxes have been assigned to three sequential classification pool groups (groups A, B and C), according to the following rules (thresholds):
\\[\\text{Sampling Pool Group}:\\begin{cases}\\text{A}&\\text{if}\\quad I_{i}\\geq 0\\\\ \\text{B}&\\text{if}\\quad I_{i}\\geq\\frac{1}{2}\\\\ \\text{C}&\\text{if}\\quad I_{i}\\geq 1\\end{cases} \\tag{2}\\]
The sampling pool classification in equation (2) follows a nested hierarchical scheme, that is, the prospective sampling pool of each consequent group is contained in the previous one (i.e., the sampling pool for group C is fully contained in group B's sampling pool, and the sampling pool for group B is fully contained within group A's sampling pool). Such a classification scheme allows testing the effects of increased exclusionary zone area on the model performance in the simulations. Sampling pool group A contains all boxes in the sampling region. Sampling pool group B contains only sampling boxes that have no more than double the percentage of exclusionary area relative to the percentage of urban change area. Finally, sampling pool group C contains only the sampling boxes that have an equal or greater percentage of urban change area than exclusionary zone area within them. The members (sampling boxes) of each sampling pool group (A, B, and C) have been ranked and assigned to 30-tiles (thirty equal-frequency rank groups) according to their ascending proportion of urban change within the sample box. From each 30-tile, one sampling box has been randomly selected using a random number generator algorithm. The seed of the random number generator has been renewed before each sampling operation. The final outcome of the random sampling procedure was three sampling groups (varying on the ascending ratio of urban to exclusionary zone area), each containing thirty 2.5 square kilometer sampling boxes (varying on the percent of urban change). A sketch of this two-stage procedure is given below.
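The following is a minimal sketch of the ranking and nested random sampling just described, assuming the reconstructed thresholds of Eq. (2); the landscape arrays are synthetic placeholders, not the SEWI data.

```python
import numpy as np

rng = np.random.default_rng(2008)  # re-seeded before each sampling operation

def sampling_index(pct_urban, pct_exclusionary):
    """Combined ranking index of Eq. (1) for every candidate box."""
    return pct_urban / pct_exclusionary

def pool_members(index, threshold):
    """Nested pools of Eq. (2): boxes whose index meets the threshold."""
    return np.flatnonzero(index >= threshold)

def draw_group(pct_urban, members, n_tiles=30):
    """Rank pool members by ascending urban change, split them into
    30-tiles, and draw one box at random from each tile."""
    ranked = members[np.argsort(pct_urban[members])]
    return np.array([rng.choice(tile)
                     for tile in np.array_split(ranked, n_tiles)
                     if tile.size > 0])

# Synthetic landscape of 1,000 candidate 2.5 km^2 boxes
pct_urban = rng.uniform(0.1, 40.0, 1000)
pct_excl = rng.uniform(0.1, 60.0, 1000)
idx = sampling_index(pct_urban, pct_excl)

for group, threshold in [("A", 0.0), ("B", 0.5), ("C", 1.0)]:
    boxes = draw_group(pct_urban, pool_members(idx, threshold))
    print(f"group {group}: {boxes.size} boxes sampled")
```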
The sampled boxes for area groups A, B and C are shown in Fig. 2.
### _Simulation Modeling Parameterization_
The LTM model requires three levels of parameterization: (a) the simulation drivers of land use change; (b) the training and testing neural network pattern creation; and (c) the network simulation parameter definition. Details on the theoretical neural network simulation parameterization of the LTM model are reported in the literature by Pijanowski et al. [31, 32]. Explicit descriptions of the modeling enterprise in the SEWI region are also reported in the Pijanowski et al. [30] paper. In short, eight distance-based simulation drivers of land use change have been used to parameterize the LTM simulations (urban land in 1963, historical urban centers in 1900, rivers, lakes and water bodies, highways, state and local roads, and Lake Michigan). The simulation model uses every other cell (50% of the cells) as the neural network training pattern, and the entire region for network model testing. Finally, the network is trained for 500,000 training cycles, resetting and iterating the network nodes' weight configuration every 100 training cycles, and outputting the network node structure and the mean square error of the network convergence to a file every 100 cycles. Thus, for each of the 90 sampled boxes, the simulation outputs a total of 5,000 network files and MSE values, for a grand total of 450,000 simulation result files. For simplicity of presentation, in each of the sampled boxes, forty-four of these network output files are selected for visualization of the results. Due to the nature of the neural network learning dynamics, learning patterns follow a negative exponential increase through training iterations. Thus, a negative exponential visualization scale has been chosen to visualize the results (more frequent samples in lower network training cycles, less frequent samples in higher training cycles).
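One possible way to generate such a visualization schedule is sketched below; it assumes a geometric spacing over the 5,000 saved checkpoints, which is only one of several spacings compatible with the description above.

```python
import numpy as np

def checkpoint_schedule(n_samples=44, step=100, max_cycles=500_000):
    """Pick about n_samples checkpoints on a geometric (log) scale,
    denser at low training cycles and sparser at high ones.  Network
    files exist only every `step` cycles, so values are rounded to
    that grid (early duplicates collapse after rounding)."""
    raw = np.geomspace(step, max_cycles, n_samples)
    return np.unique((np.round(raw / step) * step).astype(int))

schedule = checkpoint_schedule()
print(schedule[:8], "...", schedule[-3:])
```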
The simulation results for urban change predictions in 1990 are assessed against historical land use changes in 1990 from existing data provided by the Southeastern Wisconsin Regional Planning Commission [33].
## III Methodology and Results of the Simulation Accuracy Assessment
The paper by Pijanowski et al. [30] reports three relatively conventional statistical metrics for the quantitative accuracy assessment of the model performance: namely, the _percent correct_ metric (PCM), the _Kappa_ metric (K), and the _area under the receiver operator characteristic curve_ (AROC) metric. The PCM metric is a simple proportional measure of comparison, while the K and AROC metrics take into account the confusion matrix and the omission and commission errors of the simulation. In addition to these three conventional metrics, two more alternative metrics are presented here: namely, the _Bayesian predictive value of a positive and negative classification_ (PPV and NPV) metrics, and the _Bayesian convergence factor_ (C\\({}_{\\text{b}}\\)) metric. These alternative metrics measure a stochastic level of information entropy in the simulated land use change system. They represent
Fig. 2: SEWI random sampling groups A, B and C.
different aspects or dimensions of the predictive value of information that is embedded in the simulation model results, and thus can enhance our understanding of both the simulation dynamics and the dynamics of the land use change system.
### _Basic Definitions_
The notion of spatial accuracy assessment rests on three major assumptions. The first assumption has to do with the underlying process at hand. In any given landscape, two theoretical observers (e.g., a simulation model and an observed historical map, or a simulation model and another simulation model, or an observed historical map and an alternative historical map) are assumed to observe properties of the same underlying process (the "real" land use change). The second assumption has to do with the observers themselves. They are assumed to face a theoretical level of uncertainty (regardless of its degree, small or large). A simulation model faces uncertainty on its predicted landscape as part of the problem formulation (and thus a trivial assumption), but observed historical landscapes are also subject to an implicit degree of uncertainty (i.e., measurement errors, remote sensing classification errors, etc.). These degrees of uncertainty are not necessarily equal between the two observers. The third assumption involves the assessment process itself. It assumes that the two observers acquire their observations (classifications) independently from each other. In other words, the historically observed land use map and the simulation results are independent (or, the modeling predictions are not a function of the real change in the maps). The independence assumption is easy to make in the case of assessing a simulated and an observed landscape, but it becomes nontrivial when non-parametric analysis is used to compare two modeled landscapes, in cases where the same model with different configurations is used.
A parametric approximation of spatial accuracy assessment is based on the notion of a confusion matrix [34, 35], shown in Table 1. For binary land use changes (i.e., presence-absence of transition), the confusion matrix is a 2\\(\\times\\)2 square matrix with exhaustive and mutually exclusive elements.
A nonparametric approximation of spatial accuracy assessment employs the use of the confusion matrix in somewhat more complex forms. It assesses the sensitivity coefficient as the observed fraction of agreement between the two assessed landscapes, or, in other words, the probability of correctly predicting a transition when this transition actually occurred in the observed historical data. Symbolically (S=simulated, R=real),
\\[Sensitivity=\\frac{TP}{TP+FN}=p(S=1\\mid R=1) \\tag{3}\\]
Similarly, the specificity coefficient in equation (4) represents the observed fraction of agreement between two assessed maps, or, in other words, the probability of correctly predicting an absence of transition, when this transition is actually absent from the historically observed data. Symbolically,
\\[Specificity=\\frac{TN}{TN+FP}=p(S=0\\mid R=0) \\tag{4}\\]
A theoretical _perfect_ agreement between the two observers would require that,
\\[p(S=1\\mid R=1)=p(S=0\\mid R=0) \\tag{5}\\]
\\(or\\), \\(Sensitivity=Specificity\\)
The degree of deviation from the rule defined in equation (5) represents the degree of deviation from a perfect agreement between the two classifications, or the degree of disagreement between a modeled (simulated) and an observed (historical) landscape transition. The binary character of the classification schemes requires the two transition classifications to be exhaustive and mutually exclusive.
\\[Sensitivity=Specificity=\\frac{1}{2} \\tag{6}\\]
In other words, for each classification threshold (e.g., amount of urban change) in our assessment, a given cell has an equal (prior) chance (50%) to undergo a land use change transition, not unlike the tossing of a coin.
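The quantities of Eqs. (3)-(4) are computed directly from cell-by-cell counts. The sketch below does this for a pair of synthetic binary transition maps; the 83\\(\\times\\)83 grid matches the 6889-cell sampling boxes described earlier, but the maps themselves are random placeholders.

```python
import numpy as np

def confusion(simulated, observed):
    """2x2 confusion matrix counts for binary transition maps."""
    s = np.asarray(simulated, dtype=bool).ravel()
    r = np.asarray(observed, dtype=bool).ravel()
    tp = np.sum(s & r)    # transition predicted and observed
    fp = np.sum(s & ~r)   # predicted but not observed
    fn = np.sum(~s & r)   # observed but not predicted
    tn = np.sum(~s & ~r)  # absent in both
    return tp, fp, fn, tn

def sens_spec(tp, fp, fn, tn):
    """Eqs. (3)-(4): p(S=1|R=1) and p(S=0|R=0)."""
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic 2.5 km^2 box: 83 x 83 cells of 30 m resolution
rng = np.random.default_rng(0)
observed = rng.random((83, 83)) < 0.15                 # ~15% urban transition
simulated = observed ^ (rng.random((83, 83)) < 0.10)   # prediction with noise
sens, spec = sens_spec(*confusion(simulated, observed))
print(f"sensitivity = {sens:.3f}, specificity = {spec:.3f}")
```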
### _Bayesian Predictive Value of Positive and Negative Classification metric (PPV / NPV)_
#### III-B.1 Diagnostic Odds Ratio (DOR)
From the definitions of sensitivity and specificity in the previous section, we can compute the _likelihood ratio_ metric [36, 37]. In a binary (Boolean) classification scheme, there are two forms of likelihood ratios: the _likelihood ratio of a positive classification_ (LR+), and the _likelihood ratio of a negative classification_ (LR-). The likelihood ratios are connected with the levels of sensitivity and specificity directly [38]:
\\[LR+=\\frac{sensitivity}{1-specificity}\\quad and\\quad LR-=\\frac{1-sensitivity}{specificity} \\tag{7}\\]
The likelihood ratios obtained for a binary classification can be used to compute the value of an index for diagnostic
inference, namely, the _diagnostic odds ratio_ (DOR) index. The DOR represents simply the ratio of the positive to the negative likelihoods:
\\[DOR=\\frac{LR+}{LR-}=\\frac{sensitivity\\cdot specificity}{(1-specificity)\\cdot(1- sensitivity)} \\tag{8}\\]
The DOR can be interpreted as an unrestricted measure of the classification accuracy [38], but suffers from serious limitations, since both LR+ and LR- are sensitive to the threshold value (cut-off point) of the classification [39]. Thus, DOR can be used as a measure of the classification accuracy in cases where (a) the threshold value of the binary classification is somewhat balanced (around 0.5), or (b) the classification schemes being compared have the same threshold value (e.g., in the case of simulation runs that are unbalanced but face similar threshold values). In the case of the SEWI region simulation runs, the DOR can be used to compare classification performance across training cycles (same areas and same classification thresholds), but not across area groups or different simulation boxes. The results shown in Fig. 3a signify the importance of the pattern learning (training) process in improving the classification accuracy in the SEWI region experimental simulations.
#### III-B.2 Bayesian Predictive Values
In place of the simple and practically limited DOR index for assessing robust spatial model accuracy, a Bayesian framework of assessment can be used. It uses the likelihood ratios (LR+ and LR-) to estimate a posterior probability classification based on the information embedded in the dataset. Strictly speaking, the model accuracy obtained by the confusion matrix (and consequently the sensitivity and specificity values) represents a _prior_ probabilistic assessment of the model's accuracy. This assessment is subject to the threshold value of the classification scheme. Obtaining a classification scheme that is robust enough to allow us to estimate model accuracy for a range of thresholds requires the computation of conditional estimates [40]. This represents a _posterior_ probabilistic assessment of the model's accuracy, and can be achieved using Bayes' Theorem. Computing the posterior Bayes probabilities for a positive and negative classification can be achieved using a general equation form:
\\[PPV=p(c\\mid x_{+})=\\frac{p(x_{+}\\mid c)\\cdot p(c)}{p(x_{+}\\mid c)\\cdot p(c)+p(x_{+}\\mid 1-c)\\cdot p(1-c)} \\tag{9}\\]
and,
\\[NPV=p(c\\mid x_{-})=\\frac{p(x_{-}\\mid c)\\cdot p(c)}{p(x_{-}\\mid c)\\cdot p(c)+p(x_{-}\\mid 1-c)\\cdot p(1-c)} \\tag{10}\\]
where,
PPV: the Bayes predictive value of a positive classification metric (the probability of change given a positive classification); NPV: the Bayes predictive value of a negative classification metric (the probability of change given a negative classification); \\(x_{+},x_{-}\\): the positive and negative outcomes of the classification, and \\(c\\): the _prevalence_ threshold, at or above which a value is classified as positive (computed using a ML nonparametric estimation).
The PPV and NPV values can be computed from the _sensitivity_ and _specificity_ values (and thus from the confusion matrix) as follows:
\\[PPV=\\frac{sens\\cdot prev}{sens\\cdot prev+(1-spec)\\cdot(1-prev)} \\tag{11}\\]
and,
\\[NPV=\\frac{(1-sens)\\cdot prev}{(1-sens)\\cdot prev+spec\\cdot(1-prev)} \\tag{12}\\]
The results for the PPV and NPV metrics obtained for the SEWI region and the three simulation area groups are shown in Fig. 3b,c. Simulation area group C has consistently the highest PPV and the lowest NPV throughout the training exercise, a fact that signifies a higher model performance level than the ones achieved by simulation area groups A and B.
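For reference, the sketch below evaluates Eqs. (7), (8), (11) and (12) as written above (with NPV read, per the convention of Fig. 3, as the probability of change given a negative classification); the sensitivity/specificity pair and the prevalence sweep are illustrative.

```python
def likelihood_ratios(sens, spec):
    """Eq. (7): positive and negative likelihood ratios."""
    return sens / (1.0 - spec), (1.0 - sens) / spec

def dor(sens, spec):
    """Eq. (8): diagnostic odds ratio, LR+ / LR-."""
    lr_pos, lr_neg = likelihood_ratios(sens, spec)
    return lr_pos / lr_neg

def ppv(sens, spec, prev):
    """Eq. (11): posterior p(change | positive classification)."""
    return sens * prev / (sens * prev + (1.0 - spec) * (1.0 - prev))

def npv(sens, spec, prev):
    """Eq. (12): posterior p(change | negative classification)."""
    return ((1.0 - sens) * prev
            / ((1.0 - sens) * prev + spec * (1.0 - prev)))

print(f"DOR = {dor(0.8, 0.7):.2f}")
for prev in (0.10, 0.25, 0.50):  # sweep of prevalence thresholds
    print(f"prev={prev:.2f}: PPV={ppv(0.8, 0.7, prev):.3f}, "
          f"NPV={npv(0.8, 0.7, prev):.3f}")
```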
Measuring and treating PPV and NPV as separate metrics of model performance is a rather trivial operation, and not a very useful or informative tool in assessing spatial model accuracy. However, by combining the PPV and NPV metrics into a single graph, we can illustrate the dominance relationships and dynamics over an expected prevalence threshold value (i.e., _prevalence_\\(=\\)0.5, denoting an uninformative prior for the Bayesian classification). Fig. 4 shows the dominance relationships between PPV and NPV for increasing LTM training cycles. In simulation area group A, the model accuracy is based mainly on the dominant negative classification (although this dominance fades over the training process). The accuracy in simulation area group B is based on an unstable equilibrium between positive and negative classification (especially between 20,000 and 250,000 training cycles), although the overall accuracy is still supported by a dominant negative classification scheme. The model accuracy in simulation area C depends on a more desirable classification scheme, since after the first 10,000 cycles model accuracy depends consistently on a positive classification.
Fig. 3: (a) Diagnostic Odds Ratio (DOR) index across LTM training cycles in the SEWI region; (b) Bayes probability of change given a positive classification (PPV); (c) Bayes probability of change given a negative classification (NPV) metric results by simulation group in the SEWI region.
The analysis of the latter results is based on an expected, i.e., balanced prevalence threshold. In reality, the relationship between Bayesian predictive values and prevalence is nonlinear and it is defined by the posterior Bayesian estimator properties, namely the posterior density estimation [36, 41].
To understand the role of the posterior Bayesian estimation, a theoretical problem formulation is provided in Fig. 5. Part (a) of the figure provides a hypothetical prior density estimation of a binary classification scheme across a continuous range of classification thresholds (prevalence). For a given transitional change (e.g., presence of land use change), the prevalence threshold ranges from zero (purely negative) to one (purely positive). The left density curve represents the absence of a transition (negative classification), while the right density curve represents the presence of a transition (positive classification). As explained in the first section, when we lack any additional information about the classification threshold, the best uncertain choice (maximum entropy classification) is to assume an equal probability between the two classes (present, absent). In most of the cases involving spatial accuracy assessment, an uncertain prior is the best choice. Unlike the ROC curve method, where accuracy is assessed using a nonparametric estimation (without the use of a distribution function), the Bayesian estimation is based on a parametric assessment of the classification accuracy (or, at least a semiparametric assessment). In such an uncertain classification, we can vary only the spread of the distribution (i.e., the width of the density distribution) for each of the classes, but not the location of the threshold. As a consequence, the amounts and proportions of the false negative (FN) and false positive (FP) allocations are affected only by the difference between the mean value of each of the transitions and the threshold. The more positive this difference is, the more likely it is for the transition to be present, while the more negative it is, the more likely it is for the transition to be absent.
Bayesian estimation allows us to estimate the probability densities of the classifications by adjusting the "true" height and "true" width of the density distributions. In Fig. 5b, the changes in the density distributions for the threshold classes shift the threshold prevalence value disproportionately to the size and spread of each of the distributions. The posterior Bayesian density estimates allow us to evaluate the mean and variance of a new, "informative" prevalence threshold (shown with a dotted line and shaded areas in Fig. 5b).
In the SEWI region, the relationship between prevalence and the level of the PPV/NPV is shown in Fig. 6. The y-axis of the graph represents the prevalence level (classification threshold), while the x-axis represents the level of the predictive value (PPV or NPV). The points that belong to the PPV and NPV are color-coded. The data points correspond to all sampled simulation runs (44 sampled training cycles for each of the 90 boxes in groups A, B and C, a total of 3,960 simulation run results).
Fig. 4: Dominance relations between Bayes PPV and NPV metrics: (a) area group A; (b) area group B; (c) area group C.
Fig. 5: Properties of the Bayesian estimation in binary classification scheme: (a) prior density estimation; (b) posterior density estimation.
We can perform a nonparametric estimation of the probability density function in the data, by using a kernel density estimator. The solid lines in Fig. 6 represent the results of the _Epanechnikov_ stochastic kernel estimation [42]. The general equation of the kernel density function is [43, 44]:
\\[\\hat{f}_{{}_{K}}(x)=\\frac{1}{Nh}\\sum_{i=1}^{N}K\\left(\\frac{x-X_{i}}{h}\\right) \\tag{13}\\]
where,
\\(\\hat{f}_{{}_{K}}\\) : an unknown continuous probability density function; \\(h\\): a smoothing parameter; \\(K(z)\\) : a symmetric kernel function, and \\(N\\): the total number of independent observations of a random sample \\(X_{N}\\).
The equation for the Epanechnikov kernel density function is [43]:
\\[K(z)=\\begin{cases}\\frac{3}{4\\sqrt{5}}(1-\\frac{1}{5}z^{2})&\\text{if}\\quad- \\sqrt{5}\\leq z\\leq\\sqrt{5}\\\\ 0&\\text{otherwise}\\end{cases} \\tag{14}\\]
The choice of the Epanechnikov kernel density estimator is based on its high efficiency in minimizing the _asymptotic mean integrated square error_, AMISE [45, 46], and it is often used in neural network computational learning [47].
In the SEWI region data, the underlying question that the analysis attempts to address is: for which prevalence threshold value does the "true" predictive value (and accuracy) of the modeled transitional classification become equal to that of the "true" absence of such a transition? Graphically, the solution can be found by varying the height of the y-axis reference line (horizontal dotted lines in Fig. 6) over a fixed level of predictive value, where \\(PPV=NPV=0.5\\) (vertical dotted line). The y-axis coordinate for which the two kernel density estimated lines meet represents the prevalence threshold that maximizes the posterior probability of our model accuracy predictions.
Mathematically, the optimal prevalence threshold of the posterior probability distribution exists where:
\\[\\hat{f}_{{}_{K}}(x_{+})=\\hat{f}_{{}_{K}}(x_{-}) \\tag{15}\\]
The difference between the prior and posterior estimation is shown in the vertical distance between the y-axis reference line at the 0.5 prevalence threshold and the one at the meeting point of the two kernel density functions (\\(\\sim\\)0.172 in the entire SEWI region's simulation data). The posterior estimation allows us to threshold at a lower classification level, thus enhancing the accuracy of our predictions.
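The following sketch mimics that procedure: it builds Epanechnikov kernel density estimates (Eqs. (13)-(14)) for the prevalence values attached to the positive and negative predictive points, and locates their crossing (Eq. (15)). The two Beta-distributed samples stand in for the SEWI data and are purely illustrative, as is the bandwidth.

```python
import numpy as np

SQRT5 = np.sqrt(5.0)

def epanechnikov_kde(samples, grid, h):
    """Eqs. (13)-(14): Epanechnikov kernel density estimate on a grid."""
    z = (grid[:, None] - samples[None, :]) / h
    k = np.where(np.abs(z) <= SQRT5,
                 3.0 / (4.0 * SQRT5) * (1.0 - z ** 2 / 5.0), 0.0)
    return k.sum(axis=1) / (samples.size * h)

def crossing_prevalence(prev_pos, prev_neg, h=0.05):
    """Eq. (15): prevalence at which the two density estimates meet."""
    grid = np.linspace(0.0, 1.0, 501)
    diff = (epanechnikov_kde(prev_pos, grid, h)
            - epanechnikov_kde(prev_neg, grid, h))
    flips = np.flatnonzero(np.diff(np.sign(diff)) != 0)
    return grid[flips[0]] if flips.size else None

# Stand-in prevalence samples for PPV-linked and NPV-linked points
rng = np.random.default_rng(1)
prev_pos = rng.beta(2.0, 6.0, 2000)  # positive points peak at low prevalence
prev_neg = rng.beta(6.0, 2.0, 2000)  # negative points peak at high prevalence
threshold = crossing_prevalence(prev_pos, prev_neg)
print(f"posterior prevalence threshold ~ {threshold:.3f}")
```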
#### III-B.3 Bayesian Convergence Factor metric (\\(C_{b}\\))
It is possible to derive an alternative accuracy metric that combines the two Bayesian predictive values, PPV and NPV, into a single, unified coefficient. The advantage of using such a coefficient to measure classification and model accuracy is that it allows us to estimate not only a unique prevalence threshold, but also an optimal prevalence region for which our estimated accuracy is high for both positive and negative classifications. The analysis provided in the previous paragraph for the PPV and NPV metrics depends mainly on the choice of the kernel density estimation function and the continuous interval _bandwidth_ used [43, 48], or any other probability density function used for estimation. A unified Bayesian coefficient that measures the level of convergence between positive and negative predictive values permits us to derive a more robust prevalence region that tends to smooth the effect of density estimation selection. In other words, it provides us with a more global measure of model and classification assessment.
We can call this coefficient _Bayes convergence factor_, \\(C_{b}\\). A simple form of the factor can be defined as:
\\[C_{{}_{b}}=\\begin{cases}1-\\left(PPV-NPV\\right)&\\text{if}\\quad PPV\\geq NPV\\\\ 1-\\left(NPV-PPV\\right)&\\text{if}\\quad PPV<NPV\\end{cases} \\tag{16}\\]
A higher level of the Bayes convergence factor thus denotes a higher probability of convergence between the positive and negative predictive values or probabilities of change. Because PPV and NPV are probabilities, their absolute difference cannot exceed 1.0, so the range of the \\(C_{b}\\) coefficient will be: \\(0\\leq C_{b}\\leq 1.0\\). This simple form of the Bayes convergence factor is shown in the theoretical curve \\(C_{b}\\) (\\(A\\)) of Fig. 7a. We can see that the allocation of the positive and negative classification probabilities in the \\(C_{b}\\) function represents a form of a _triangular_ density function with a minimum value of zero, a maximum value of 1.0, and a mean value of 0.5. A triangular density function provides a minimal amount of information about the relationship, configuration and pattern between the positive and negative predictive values in a model. As shown in Fig. 7a, these predictive values by themselves may be better represented by non-linear relationships (e.g., kernel density
Fig. 6: Relationship between prevalence (y-axis) and the Bayes predictive values, PPV and NPV (x-axis). The solid lines represent the Epanechnikov kernel density estimation for PPV and NPV. The dotted reference lines identify the difference between expected and predicted prevalence thresholds.
estimators). Thus, a better convergence factor can be found that reflects a degree of nonlinearity in the modeling classification assessment.
An alternative form of the Bayes convergence factor can be symbolically calculated using a _Normal density distribution function_, adjusted to a continuous scale between 0 and 1.0. The equation of the Normal density function is,
\\[\\hat{f}_{{}_{N}}(x,\\mu,\\sigma)=\\frac{1}{\\sqrt{2\\pi}\\,\\sigma}\\cdot e^{-\\frac{\\left(x-\\mu\\right)^{2}}{2\\sigma^{2}}} \\tag{17}\\]
For a Normal distribution with \\(x=0\\) and \\(\\sigma=0.5\\), we can model the behavior of the mean value by setting,
\\[\\mu=PPV-NPV \\tag{18}\\]
and thus,
\\[\\hat{f}_{{}_{N}}(0,PPV-NPV,0.5)=\\frac{1}{\\sqrt{2\\pi}(0.5)}\\cdot e^{-\\frac{\\left(0-(PPV-NPV)\\right)^{2}}{2(0.5)^{2}}}=0.797885\\cdot e^{-2\\left(PPV-NPV\\right)^{2}} \\tag{19}\\]
We can adjust for the coefficient scale (0 to 1.0) by dividing the previous equation by the normalization factor,
\\[\\frac{1}{\\sqrt{2\\pi}\\sigma}=0.797885 \\tag{20}\\]
The _adjusted Normal_ form of the Bayes convergence factor can then be expressed as:
\\[C_{{}_{b}}=e^{-2\\left(PPV-NPV\\right)^{2}} \\tag{21}\\]
The adjusted normal density distribution function of the \\(C_{{}_{b}}\\) coefficient can be seen in the curve \\(C_{{}_{b}}\\) (_B_) of Fig. 7a, and in our data can be estimated by a Normal or Epanechnikov kernel density function.
The previous two forms of the \\(C_{b}\\) metric assume implicitly that the combined effect of the positive and negative classification process in our model is symmetric toward achieving a better model (and classification) accuracy. This is appropriate for modeling changes where the presence of a transition implies the absence of a negative transition. In many spatial modeling processes simulating binary change, that implicit assumption cannot be made easily. For example, a model (such as LTM) that simulates land use change is parameterized and learns to recognize patterns on drivers of change related to a positive land use transition effect only. Model training and testing based on drivers of transitional presence do not necessarily convey information on the probability of absence of such a transition, as it is likely that other or additional drivers of the absence of the transition may be in effect over an ensemble of landscapes. Consequently, we can derive a better form of the Bayes convergence function by assuming a biased or asymmetric joint distribution between the predictive values of positive and negative classification. Such an asymmetry would favor more positive than negative classifications, assuming that the model learns more about the transitional patterns from a combination of a high positive and low negative predictive value, rather than from a high negative and low positive predictive value (since the sum of the predictive values equals 1). The latter is especially important in estimating empirical distributions derived from unbiased real-world data, such as in the SEWI case study. The amount of area that undertakes urban land use transition in the data is considerably less than the amount of area that observes an absence of such transition, and implementing an asymmetric Bayesian prior distribution would assign more weight to the positive (presence of transition) than to the negative (absence of transition) land areas.
We can formulate such a convergence function by modifying the mean central tendency of the previous form, \\(C_{b}\\) (_B_); in other words, by simulating a different mean for the adjusted Normal distribution function. We can call this form the _adjusted asymmetric Normal density distribution_, and for the same numerical parameters, \\(x=0\\) and \\(\\sigma=0.5\\), we can simulate the behavior of the mean value,
\\[\\mu^{\\prime}=\\alpha-\\left(PPV-NPV\\right) \\tag{22}\\]
where, \\(\\alpha\\) is the degree of asymmetry of our distribution (\\(0\\leq\\alpha\\leq 1.0\\)). In other words, the parameter \\(\\alpha\\) denotes the degree of bias in terms of a theoretical _least-cost function_, or the relative informational balance in our model from a positive to negative predictive value.
The new asymmetric normal distribution will be,
\\[\\hat{f}_{{}_{N}}(0,\\alpha-\\left(PPV-NPV\\right),0.5)=\\frac{1}{\\sqrt{2\\pi}\\left(0.5\\right)}\\cdot e^{-\\frac{\\left[0-\\left(\\alpha-\\left(PPV-NPV\\right)\\right)\\right]^{2}}{2\\left(0.5\\right)^{2}}}=0.797885\\cdot e^{-2\\left(PPV-NPV-\\alpha\\right)^{2}} \\tag{23}\\]
and, after adjusting for scale normalization, the final Bayes convergence factor will be,
\\[C_{{}_{b}}=e^{-2\\left(PPV-NPV-\\alpha\\right)^{2}} \\tag{24}\\]
For varying levels of the parameter \\(\\alpha\\), the shape of the latter convergence factor is shown in Fig. 7b. For \\(\\alpha=0\\), the equation yields the _symmetric normal_ form of the convergence factor (i.e., shape \\(C_{{}_{b}}\\) (_B_) in Fig. 7a), while, for \\(\\alpha=1.0\\), the equation yields a _full asymmetric normal_ form of the convergence factor (i.e., shape \\(C_{{}_{b}}\\) (_C_) in Fig. 7a). In an experimental dataset, any asymmetric normal form of \\(C_{{}_{b}}\\) (i.e., for any parameter \\(\\alpha\\)) can be estimated by a Normal or Epanechnikov kernel distribution function.
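A compact sketch of these forms follows, a minimal sketch assuming the reconstructed exponent \\(-2(PPV-NPV-\\alpha)^{2}\\) derived above (Eqs. (16), (21) and (24)); the PPV\\(-\\)NPV grid is illustrative.

```python
import numpy as np

def cb_triangular(ppv, npv):
    """Eq. (16): simple (triangular) Bayes convergence factor."""
    return 1.0 - np.abs(ppv - npv)

def cb_normal(ppv, npv, alpha=0.0):
    """Eqs. (21)/(24): adjusted (alpha=0) and asymmetric (alpha>0)
    normal forms of the Bayes convergence factor."""
    return np.exp(-2.0 * (ppv - npv - alpha) ** 2)

diff = np.linspace(-1.0, 1.0, 9)  # grid of PPV - NPV values
print("triangular:", np.round(cb_triangular(diff, 0.0), 3))
for alpha in (0.0, 0.25, 0.5, 1.0):
    print(f"alpha={alpha:.2f}:", np.round(cb_normal(diff, 0.0, alpha), 3))
```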
Fig. 7: (a) Theoretical distribution density functions for the Bayes convergence metric (Cb): (i) triangular; (ii) adjusted normal; (iii) asymmetric normal. (b) Variations of the asymmetry parameter, \\(a\\), in the expected normal form of the Bayes convergence factor.
The results obtained from the empirical data of the SEWI region simulation runs for the varying degree of asymmetry in estimating the Bayes convergence factor, \\(C_{b}\\), are shown in Fig. 8a. We can see that a somewhat moderate level of asymmetry (\\(\\alpha=0.25\\)) performs consistently better throughout the entire model learning process (training cycles), despite the fact that at the 500,000-cycle training level, the Bayes convergence factor with \\(\\alpha=0.5\\) performs slightly better. Thus, there is evidence in the SEWI simulation runs that a level of asymmetry in the composition of positive and negative predictive value of our model exists, and thus should be incorporated into our spatial accuracy assessment.
Beyond any visual inspection and inference of our results, it is possible to derive quantitative estimates of the dominance of a level of asymmetry present in our simulation runs. As can be seen in Fig. 8b, we can estimate the expected probabilities of transitions, subject to the observed empirical values of transitions present in our simulation data. When all the simulation run results for the entire SEWI region are examined with respect to their respective observed predictive values, we can estimate such an empirical probability distribution as a function of an estimated "true" mean (location parameter) and standard deviation (scale parameter) of each of the forms of the Bayes convergence factor, \\(f_{{}_{N}}\\left(\\hat{\\mu}_{C_{b}},\\hat{\\sigma}_{C_{b}}\\right)\\), using a maximum likelihood estimation (_ML_) method. The results of such estimation for the varying degree of asymmetry in the Bayes convergence factor in the SEWI data are shown in Table 2. Two groups of parameter estimates are included in the analysis: (a) parameter estimates across all SEWI simulation training cycles, indicating a robust model performance; (b) parameter estimates only after 500,000 training cycles in the SEWI simulation runs, indicating a model performance with emphasis on maximizing the information flows in modeling transitional effects in our landscape.
Fig. 9 plots the empirically obtained estimated parameters for location (x-axis) against scale (y-axis). Such a plot can help us select the best asymmetric form of the Bayes convergence factor using a dominance criterion, such as the _mean-variance-robustness_ criterion. A desired probability distribution would have an estimated mean value closer to the 0.5 probability threshold (prevalence); thus, estimated location parameters closer to 0.5 are dominant. On the other hand, we want our predicted probability distributions to minimize the level of uncertainty in our predictions; thus, estimated scale parameters with smaller values are dominant. Finally, a desired probability distribution would have relatively consistent estimated values of the location and scale parameters in both robust and informational assessments. We can see from Fig. 9 that the only asymmetric form of the Bayes convergence factor that meets all three dominance criteria is the one with \\(\\alpha=0.25\\).
\\begin{table}
\\begin{tabular}{l l c c} \\hline & & Estimated location of & Estimated scale of \\\\ Robustness group & Level of asymmetry & Normal distribution (mean) & Normal distribution (SD) \\\\ \\hline All training cycles & \\(\\alpha=0\\) & .5985 & .3027 \\\\ & \\(\\alpha=0.25\\) & .5571 & .3411 \\\\ & \\(\\alpha=0.5\\) & .4764 & .3740 \\\\ & \\(\\alpha=0.75\\) & .3651 & .3685 \\\\ & \\(\\alpha=1\\) & .2443 & .3092 \\\\ \\hline
500,000 training cycles & \\(\\alpha=0\\) & .5067 & .3114 \\\\ & \\(\\alpha=0.25\\) & .5537 & .3416 \\\\ & \\(\\alpha=0.5\\) & .5608 & .3861 \\\\ & \\(\\alpha=0.75\\) & .5019 & .4128 \\\\ & \\(\\alpha=1\\) & .3834 & .3772 \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: Estimated values for the location (\\(\\mu\\)) and scale (\\(\\sigma\\)) parameters of the empirical asymmetric Bayes convergence factor in the SEWI area.
Figure 8: (a) Estimated mean Bayes convergence factor for varying levels of the asymmetry coefficient in the SEWI region; (b) Estimated probabilities of transition from the empirical values of the Bayes convergence factor (a=0.25). Data points represent the estimated PPV and NPV values in the SEWI region (for all simulation boxes’ sampled training cycles).
We can further enhance the quantitative assessment of the dominant asymmetric form of the \\(C_{b}\\) metric by computing the dominance criteria explicitly. The three dominance criteria can be combined as,
\\[f_{{}_{N}}(\\hat{\\mu}_{i},\\hat{\\sigma}_{i})\\succ f_{{}_{N}}(\\hat{\\mu}_{j},\\hat{\\sigma}_{j})\\ \\text{if and only if},\\ \\forall m\\neq k:\\]
\\[\\left(\\frac{0.5-\\hat{\\mu}_{i}}{\\hat{\\sigma}_{i}}\\right)_{m}-\\left(\\frac{0.5-\\hat{\\mu}_{i}}{\\hat{\\sigma}_{i}}\\right)_{k}\\geq\\left(\\frac{0.5-\\hat{\\mu}_{j}}{\\hat{\\sigma}_{j}}\\right)_{m}-\\left(\\frac{0.5-\\hat{\\mu}_{j}}{\\hat{\\sigma}_{j}}\\right)_{k} \\tag{25}\\]
where the symbol "\\(\\succ\\)" denotes a dominance relationship, and,
\\(i,j:\\) unique combinations of location and scale (i.e., asymmetric forms of the \\(C_{b}\\) metric);
\\(m,k:\\) unique groups for testing robustness (i.e., training cycle groupings);
\\(0.5-\\hat{\\mu}_{i}\\succ 0.5-\\hat{\\mu}_{j}:\\) mean (location) criterion;
\\(\\hat{\\sigma}_{i}\\succ\\hat{\\sigma}_{j}:\\) variance (scale) criterion;
\\(\\left(\\delta_{m}-\\delta_{k}\\right)_{i}\\succ\\left(\\delta_{m}-\\delta_{k}\\right)_{j}:\\) robustness criterion, where \\(\\delta\\) stands for any value or classification rule.
The results of the dominance criteria for the SEWI region, visualized in Fig. 9, are summarized in Table 3. The values of the table cells represent the values of the differences in equation (25). The shaded cells signify the dominant asymmetric form of the Bayes convergence factor to be chosen.
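The sketch below recomputes the standardized distances that drive this comparison from the Table 2 estimates; it reproduces the spirit of Eq. (25) (location, scale, and robustness across the two training cycle groups) rather than the exact values of Table 3, whose full derivation is not spelled out here.

```python
# (location, scale) ML estimates from Table 2:
# first pair = all training cycles, second pair = 500,000 cycles
estimates = {
    0.00: ((0.5985, 0.3027), (0.5067, 0.3114)),
    0.25: ((0.5571, 0.3411), (0.5537, 0.3416)),
    0.50: ((0.4764, 0.3740), (0.5608, 0.3861)),
    0.75: ((0.3651, 0.3685), (0.5019, 0.4128)),
    1.00: ((0.2443, 0.3092), (0.3834, 0.3772)),
}

def standardized_distance(mu, sigma):
    """Distance of the estimated location from the 0.5 prevalence
    threshold, in units of the estimated scale (cf. Eq. (25))."""
    return (0.5 - mu) / sigma

for alpha, (robust, informative) in estimates.items():
    d_all, d_500k = (standardized_distance(*robust),
                     standardized_distance(*informative))
    print(f"alpha={alpha:.2f}: all cycles {d_all:+.3f}, "
          f"500k cycles {d_500k:+.3f}, gap {abs(d_all - d_500k):.3f}")
# alpha=0.25 shows the smallest gap, i.e. the most robust behavior
```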
Selecting the appropriate asymmetric form of the Bayes convergence factor allows us to infer additional information about the overall performance of our model. We can measure the deviation from a symmetric normal distribution (expected prior probabilities) that the estimated asymmetric form of the Bayesian convergence factor (observed posterior probabilities) yields. The P-P plots of this assessment are shown in Fig. 10. The thick curve represents the estimated cumulative probability distribution of the asymmetric \\(C_{b}\\) predictive values observed in the SEWI region, estimated from the simulation data. The estimated parameters (location, scale) are shown on the right side of each graph. The diagonal line represents the expected cumulative probability distribution of a symmetric distribution of predictive values (i.e., the expected predictive values at a prevalence threshold of 0.5). The parts of the predicted cumulative distribution curve that are above the expected one (diagonal) signify an increase in model accuracy that can be obtained from an asymmetric classification, while the parts of the predictive cumulative distribution curve below the expected diagonal line signify a decrease in model accuracy. The point where the two lines meet (shown as the point of intersection of the reference lines) provides us with an estimated empirical prevalence level (threshold value for classification) that maximizes the modeling accuracy in our data. The net gain (or loss) in predictive value of our model due to the uncertainty in classification is the difference in the area that rests between the expected diagonal line and the estimated observed curve.
\\begin{table}
\\begin{tabular}{l c c} \\hline & \\multicolumn{2}{c}{_Robustness Groups_} \\\\ Level of asymmetry & _All training cycles_ & _500,000 training cycles_ \\\\ \\hline \\(\\alpha=0\\) & 1.071 & 0.071 \\\\ \\(\\alpha=0.25\\) & 16.744 & 15.744 \\\\ \\(\\alpha=0.5\\) & 0.286 & -0.714 \\\\ \\(\\alpha=0.75\\) & 0.987 & -0.013 \\\\ \\(\\alpha=1\\) & 1.596 & 0.596 \\\\ \\hline \\end{tabular}
\\end{table}
Table 3: Dominance values for assessing the selection of the \\(C_{b}\\) asymmetric normal form in the SEWI region.
Fig. 9: Estimated location (\\(\\mu\\)) and scale (\\(\\sigma\\)) parameters of selected asymmetric normal forms of the Bayes convergence factor from empirical data in the SEWI region: (a) robust estimates (across all training cycles); (b) maximum information estimates (after 500,000 cycles).
From an initial observation of the model's accuracy within all simulation runs in the SEWI region, shown in sub-graph (a) of Fig. 10, the estimated prevalence threshold (=0.4) does not seem to deviate substantially from the expected one (0.5). Shifting the prevalence threshold would provide a 5.7% increase in the predictive value (informational gain) of the model. But if we repeat our analysis for the SEWI region's simulation groups (A, B and C), thus accounting for structural differences in the proportion of urban cells and exclusionary areas, we can see that spatial configuration considerably affects our actual model performance. For simulation group A, shown in sub-graph (b), the model performance is heavily dependent on the negative predictive values (estimated prevalence of 0.53 \\(>\\) 0.5), and produces poor overall model predictive values (\\(C_{b}\\)=0.436, or a 6.4% decrease in the mean predictive value of the model). As the proportion of urban to exclusionary area increases in the spatial composition of our simulation maps, the predictive value of the model increases substantially, and the estimated prevalence level decreases. Especially for group C, shown in sub-graph (d), a gain of 19.2% in model performance can be obtained from a shift in model prevalence (from 0.5 to 0.23).
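A crude numerical stand-in for this net gain/loss reading of the P-P plots is sketched below, assuming Normal cumulative curves with location/scale parameters of the kind reported above; the specific values are illustrative, and the signed area is only a rough proxy for the percentages quoted in the text.

```python
import math
import numpy as np

def norm_cdf(x, mu, sigma):
    """Cumulative Normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def net_predictive_gain(mu, sigma, n=2001):
    """Signed area between the estimated (asymmetric, mean mu) and the
    expected (symmetric, mean 0.5) cumulative curves of a P-P plot,
    integrated by the trapezoidal rule."""
    x = np.linspace(0.0, 1.0, n)
    expected = np.array([norm_cdf(v, 0.5, sigma) for v in x])
    estimated = np.array([norm_cdf(v, mu, sigma) for v in x])
    gap = estimated - expected
    return float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(expected)))

# Prevalence shifts below 0.5 gain predictive value; above 0.5 lose it
for mu in (0.40, 0.53, 0.23):
    print(f"prevalence {mu:.2f}: net gain ~ {net_predictive_gain(mu, 0.34):+.3f}")
```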
## IV Discussion and Conclusions
The analysis described above reveals the magnitude and multi-dimensionality of the spatial complexity involved in modeling land use change transitions in landscapes that are mixed and asymmetric in terms of the amount and distribution of change. Performing spatial accuracy assessment requires the development and utilization of additional, advanced methods of assessment, related both to the model's predictive value in terms of quantity of change and to the performance of classifying the presence or absence of such a transition. It has been shown above that classification accuracy is closely related to the achieved modeling performance, and additional Bayesian metrics have been proposed, described and analyzed using the SEWI region case study. These advanced methods take advantage of the stochastic character of intelligent simulation models such as the LTM model, and can be used for performing model assessment in agent-based models of land use change, or other spatially explicit artificial intelligence modeling. The metrics described in this paper address different aspects of the spatial modeling performance, such as assessing the predictive value of the model simulations (_PPV, NPV, DOR_), and estimating empirical convergence curves for enhancing classification accuracy (\\(C_{b}\\)). The proposed metrics and their assessment methodology allow the researcher and analyst to acquire a more holistic assessment of a model's spatial accuracy over space and time, especially in the presence of uncertainty about the transitional model thresholds.
The case study of the SEWI region used to illustrate the usage of the metrics allows us to assess the LTM model accuracy for simulating urban changes in the region. All metrics seem to confirm a general emergent model accuracy that appears to converge towards a 70% upper level. We can also see how the amount of urban change and exclusionary zones present in our landscapes dramatically affects the performance of the model. The latter result raises the significance of adjusting the classification prevalence threshold at spatially homogeneous scales in our simulation groups (e.g., implementing different thresholds for groups with different classes of urban change).
The results obtained also allow us to infer that in landscapes where the rate and amount of land use change vary substantially, symmetric spatial transition classification schemes are difficult to obtain. Instead, we can enhance model predictions by assuming asymmetric spatial configurations, and by estimating the degree of asymmetry via a spatial _stochastic dominance_ methodology. The practical significance of the proposed additional spatial model assessment metrics is that they can provide an "informational summary" of the simulated region or landscape ensembles. The use of the analysis and the performance of the metrics can help us in a multitude of ways. First, to understand and learn how well the model fits different combinations of presence and absence of transitions in our landscapes, not simply how well the model fits our given data. Second, given that most spatial databases suffer from incomplete information and pre-simulation measurement errors, we can also derive (estimate) a theoretical accuracy that we would expect our model to achieve under the presence of such incomplete information, and thus partially separate model errors from measurement errors in spatial simulations. Third, to understand the role and pattern of uncertainty in our simulations and model estimations. We can compare results across simulation runs (and thus quantitative patterns of change) that tend to provide less or more uncertain model performance, and understand the role of spatially explicit patterns and cell configurations in model training and simulation. Fourth, to compare the
Fig. 10: Normal P-P plots for assessing Bayesian convergence simulation performance of the SEWI region: (a) across all simulation groups; (b) simulation group A; (c) simulation group B; (d) simulation group C.
significance or estimation contribution of transitional presence and absence (change versus no change) to our model performance, and the contribution of the spatial drivers and variables to the explanatory value of our model. Estimating model performance using different combinations of drivers (e.g., instead of groups A, B, C in the SEWI region, use of the same sampled boxes with different drivers, or using training sets with sequentially dropping a driver at a time), could allow us to estimate the differences in informational uncertainty for each driver combination or for single drivers within our simulations. Fifth, to compare measurements of informational uncertainty at different scales of spatial resolution. Pijanowski et al. (2003; 2005) showed the significance of using a scalable window for sensitivity analysis. Assessing model uncertainty of predictions for each of spatial resolutions can also enhance our knowledge about modeling at different spatial scales and selecting scales that produce lower uncertainty estimates.
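A first-order check of the stochastic dominance idea invoked above can be run on empirical accuracy distributions from different simulation groups; comparing empirical CDFs as below is one simple reading of that methodology, not a verbatim reproduction of it.

```python
import numpy as np

def dominates(a, b):
    """True if sample a first-order stochastically dominates sample b,
    i.e. the empirical CDF of a lies at or below that of b everywhere."""
    a, b = np.sort(np.asarray(a)), np.sort(np.asarray(b))
    grid = np.union1d(a, b)
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return bool(np.all(cdf_a <= cdf_b) and np.any(cdf_a < cdf_b))

# e.g. per-run accuracies of two simulation groups:
# print(dominates(acc_group_a, acc_group_b))
```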
Finally, the methodology and metrics developed in this paper allow for the development of a dynamic and adaptive modeling methodology. Beyond the aggregate level at which the assessment was performed for the purposes of this paper, it is both methodologically and computationally feasible to assess and adjust model accuracy on a simulation-by-simulation basis, in order to obtain dynamically enhanced simulation results. Especially in the case of agent-based modeling, such an assessment methodology can be inverted and iterated to obtain spatially robust and diverse future landscape configurations that optimize both the amount and degree of information contained in the simulation and the emergence of stochastically dominant agent strategies.
## References
* [1] C. E. Shannon, \"A Mathematical Theory of Communication,\" _Bell System Technical Journal_, vol. 27, no. July and October, pp. 379-423 and 623-656, 1948.
* [2] C. E. Shannon, and W. Weaver, _Mathematical Theory of Communication_, Chicago, IL: University of Illinois Press, 1963.
* [3] A. Jessop, _Informed assessments : an introduction to information, entropy, and statistics_, New York: Ellis Horwood, 1995.
* [4] I. Vajda, _Theory of statistical inference and information_, Dordrecht ; Boston: Kluwer Academic Publishers, 1989.
* [5] P. Stoica, Y. Selen, and J. Li, \"Multi-model approach to model selection,\" _Digital Signal Processing_, vol. 14, no. 5, pp. 399-412, 2004.
* [6] T. Nelson, B. Boots, and M. A. Wulder, \"Techniques for accuracy assessment of tree locations extracted from remotely sensed imagery,\" _Journal of Environmental Management_, vol. 74, no. 3, pp. 265-271, 2005.
* [7] G. J. Roloff, J. B. Haufler, J. M. Scott _et al._, "Modeling Habitat-based Viability from Organism to Population," _Predicting Species Occurrences: Issues of Accuracy and Scale_, pp. 673-686, Washington DC: Island Press, 2002.
* [8] F. C. James, C. E. McCulloch, J. M. Scott _et al._, "Predicting Species Presence and Abundance," _Predicting Species Occurrences: Issues of Accuracy and Scale_, pp. 461-466, Washington DC: Island Press, 2002.
* [9] C. Gonzalez-Rebeles, B. C. Thompson, F. C. Bryant _et al._, "Influence of Selected Environmental Variables on GIS-Habitat Models Used for Gap Analysis," _Predicting Species Occurrences: Issues of Accuracy and Scale_, pp. 639-652, Washington DC: Island Press, 2002.
* [10] K. Lowell, and A. Jaton, _Spatial accuracy assessment : land information uncertainty in natural resources_, Chelsea, Mich.: Ann Arbor Press, 1999.
* [11] M. Cablek, D. White, A. R. Kiester _et al._, "Assessment of Spatial Autocorrelation in Empirical Models in Ecology," _Predicting Species Occurrences: Issues of Accuracy and Scale_, pp. 429-440, Washington DC: Island Press, 2002.
* [12] J. Wu, and R. Hobbs, \"Key issues and research priorities in landscape ecology: An idiosyncratic synthesis,\" _Landscape Ecology_, vol. 17, no. 4, pp. 355-365, 2002.
* [13] D. W. McKenney, L. A. Venier, A. Heerdegen _et al._, "A Monte Carlo Experiment for Species Mapping Problems," _Predicting Species Occurrences: Issues of Accuracy and Scale_, pp. 377-382, Washington DC: Island Press, 2002.
* [14] A. E. Gelfand, A. M. Schmidt, S. Wu _et al._, \"Modelling species diversity through species level hierarchical modelling,\" _Journal of the Royal Statistical Society: Series C (Applied Statistics)_, vol. 54, no. 1, pp. 1-20, 2005.
* [15] R. G. Pontius, Jr., and J. Spencer, "Uncertainty in Extrapolations of Predictive Land-Change Models," _Environment and Planning B: Planning and Design_, vol. 32, no. 2, pp. 211-230, 2005.
* [16] B. J. L. Berry, L. D. Kiel, and E. Elliot, \"Adaptive agents, intelligence, and emergent human organization: Capturing complexity through agent-based modeling,\" _Proceedings of the National Academy of Sciences_, vol. 99, no. Supplement 3, pp. 7178-7188, 2002.
* [17] M. North, C. Macal, and P. Campbell, "Oh behave! Agent-based behavioral representations in problem solving environments," _Future Generation Computer Systems_, in press, 2005.
* [18] K. T. Alexandridis, and B. C. Pijanowski, \"Assessing Multiagent Parcelization Performance in the MABEL Simulation Model Using Monte Carlo Replication Experiments,\" _Environment and Planning B: Planning and Design_, vol. 34, no. 2, pp. 223-244, 2007.
* [19] D. G. Brown, S. E. Page, R. Riolo _et al._, "Path Dependence and the Validation of Agent-based Spatial Models of Land Use," _International Journal of Geographical Information Science_, vol. 19, no. 2, pp. 153-174, 2005.
* [20] M. J. Fortin, B. Boots, F. Csillag _et al._, \"On the Role of Spatial Stochastic Models in Understanding Landscape Indices in Ecology,\" _Oikos_, vol. 102, no. 1, pp. 203-212, 2003.
* [21] L. Demetrius, V. M. Gundlach, and G. Ochs, "Complexity and demographic stability in population models," _Theoretical Population Biology_, vol. 65, no. 3, pp. 211-225, 2004.
* [22] J. C. Luijten, \"A systematic method for generating land use patterns using stochastic rules and basic landscape characteristics: results for a Colombian hillside watershed,\" _Agriculture, Ecosystems & Environment_, vol. 95, no. 2-3, pp. 427-441, 2003.
* [23] X. Boyen, and D. Koller, "Approximate Learning of Dynamic Models," in _Advances in Neural Information Processing Systems_, 1998, pp. 396-402.
* [24] J. Dubra, and E. A. Ok, \"A Model of Procedural Decision Making in the Presence of Risk,\" _International Economic Review_, vol. 43, no. 4, pp. 1053-1080, 2002.
* [25] A. Lazrak, and M. C. Quenez, \"A Generalized Stochastic Differential Utility,\" _Mathematics of Operations Research_, vol. 28, no. 1, pp. 154-180, 2003.
* [26] E. Fokoue, and D. M. Titterington, "Mixtures of Factor Analysers, Bayesian Estimation and Inference by Stochastic Simulation," _Machine Learning_, vol. 50, no. 1-2, pp. 73-94, 2003.
* [27] J. P. C. Kleijnen, _An Overview of the Design and Analysis of Simulation Experiments for Sensitivity Analysis_, Discussion Paper 2004-16, Tilburg University, Center for Economic Research, 2004.
* [28] A. I. Aptekarev, J. S. Dehesa, and R. J. Yanez, \"Spatial Entropy of Central Potentials and Strong Asymptotics of Orthogonal Polynomials,\" _Journal of Mathematical Physics_, vol. 35, no. 9, pp. 4423-4428, 1994.
* [29] A. D. Brink, \"Minimum Spatial Entropy Threshold Selection,\" _Vision, Image and Signal Processing, IEE Proceedings_, vol. 142, no. 3, pp. 128-132, 1995.
* [30] -108, 2006.
* [31] B. C. Pijanowski, S. Pithadia, B. A. Shellito _et al._, \"Calibrating a Neural Network-Based Urban Change Model for Two Metropolitan Areas of the Upper Midwest of the United States,\" _International Journal of Geographical Information Sciences,_ vol. 19, no. 2, pp. 197-215, 2005.
* [32] B. C. Pijanowski, B. Shellito, S. Pithadia _et al._, \"Forecasting and assessing the impact of urban sprawl in coastal watersheds along eastern Lake Michigan,\" _Lakes and Reservoirs: Research and Management,_ vol. 7, no. 3, pp. 271-285, 2002.
* [33] SEWRPC, _Aerial Photography and Orthophotography Inventory (1963-2000)_, SouthEastern Wisconsin Regional Planning Commission (SEWRPC), http://www.sewrpc.org/, Waukesha, WI, 2000.
* [34] G. M. Foody, \"Status of land cover classification accuracy assessment,\" _Remote Sensing of Environment,_ vol. 80, no. 1, pp. 185-201, 2002.
* [35] S. Sousa, S. Caeiro, and M. Painho, \"Assessment of Map Similarity of Categorical Maps Using Kappa Statistics: The Case of Sado Estuary.\"
* [36] H. Brenner, and O. Gefeller, \"Variation of sensitivity, specificity, likelihood ratios and predictive values with disease prevalence,\" _Statistics in Medicine,_ vol. 16, no. 9, pp. 981-991, 1997.
* [37] Y. T. Stub, \"Signal Detection in Noises of Unknown Powers Using Two-Input Receivers,\" 1983.
* [38] A. S. Glas, J. G. Lijmer, M. H. Prins _et al._, "The diagnostic odds ratio: a single indicator of test performance," _Journal of Clinical Epidemiology_, vol. 56, no. 11, pp. 1129-1135, 2003.
* [39] M. S. Pepe, H. Janes, G. Longton _et al._, "Limitations of the Odds Ratio in Gauging the Performance of a Diagnostic, Prognostic, or Screening Marker," _Am. J. Epidemiol._, vol. 159, no. 9, pp. 882-890, 2004.
* [40] J. Schafer, and K. Strimmer, "An empirical Bayes approach to inferring large-scale gene association networks," _Bioinformatics_, vol. 21, no. 6, pp. 754-764, 2005.
* [41] J. E. Smith, R. L. Winkler, and D. G. Fryback, "The First Positive: Computing Positive Predictive Value at the Extremes," _Annals of Internal Medicine_, vol. 132, no. 10, pp. 804-809, 2000.
* [42] J. D. Hart, and T. E. Wehrly, \"Kernel Regression Estimation Using Repeated Measurements Data,\" _Journal of the American Statistical Association,_ vol. 81, no. 396, pp. 1080-1088, 1986.
* [43] B. W. Silverman, _Density Estimation for Statistics and Data Analysis_: CRC Press, 1986.
* [44] J. S. Simonoff, _Smoothing methods in statistics_, New York: Springer, 1996.
* [45] R. A. Tapia, and J. R. Thompson, _Nonparametric probability density estimation_, Baltimore: Johns Hopkins University Press, 1978.
* [46] M. P. Wand, and M. C. Jones, _Kernel Smoothing_: Chapman & Hall, 1995.
* [47] P. Smyth, \"Probability Density Estimation and Local basis Function Neural Networks,\" _Computational learning theory and natural learning systems_, S. J. Hanson, T. Petsche, R. L. Rivest _et al._, eds., pp. 233-248, Cambridge, Mass.: MIT Press, 1994.
* [48] B. E. Hansen, \"Sample Splitting and Threshold Estimation,\" _Econometrica,_ vol. 68, no. 3, pp. 575-603, 2000. | The aim of this paper is to explore and develop advanced spatial Bayesian assessment methods and techniques for land use modeling. The paper provides a comprehensive guide for assessing additional informational entropy value of model predictions at the spatially explicit domain of knowledge, and proposes a few alternative metrics and indicators for extracting higher-order information dynamics from simulation tournaments. A seven-county study area in South-Eastern Wisconsin (SEWI) has been used to simulate and assess the accuracy of historical land use changes (1963-1990) using artificial neural network simulations of the Land Transformation Model (LTM). The use of the analysis and the performance of the metrics helps: (a) understand and learn how well the model runs fits to different combinations of presence and absence of transitions in a landscape, not simply how well the model fits our given data; (b) derive (estimate) a theoretical accuracy that we would expect a model to assess under the presence of incomplete information and measurement; (c) understand the spatially explicit role and patterns of uncertainty in simulations and model estimations, by comparing results across simulation runs; (d) compare the significance or estimation contribution of transitional presence and absence (change versus no change) to model performance, and the contribution of the spatial drivers and variables to the explanatory value of our model; and (e) compare measurements of informational uncertainty at different scales of spatial resolution. | Condense the content of the following passage. |
# Comparison of the atmosphere above the South Pole, Dome C and Dome A: first attempt
S. Hagelin,\\({}^{1,2}\\) E. Masciadri\\({}^{1}\\), F. Lascaux\\({}^{1}\\) and J. Stoesz\\({}^{1}\\)
\({}^{1}\)INAF Osservatorio Astrofisico di Arcetri, Largo Enrico Fermi 5, I-50125 Florence, Italy
\\({}^{2}\\)Uppsala Universitet, Department of Earth Sciences, Villavagen 16, S-752 36 Uppsala, Sweden
E-mail: [email protected]; [email protected]
Accepted 2008 April 17. Received 2008 April 3; in original form 2007 November 14
## 1 Introduction
The summits of the Antarctic Plateau are of particular interest for ground-based astronomy because the optical turbulence appears to be confined to a narrow layer close to the ground (Marks et al., 1999; Lawrence et al., 2004; Agabi et al., 2006). Above this layer the atmosphere is exceptionally clear and the turbulence weak (0.27\({}^{\prime\prime}\), Lawrence et al. 2004; 0.36\({}^{\prime\prime}\), Agabi et al. 2006, at Dome C). Measurements have shown that the height of the turbulent layer above the summits is much lower than above other sites on the Plateau (Lawrence et al., 2004; Marks et al., 1996; Marks, 2002; Marks et al., 1999). More precisely, the height of this layer is much larger above the South Pole (220 m, Marks 2002, or 270 m, Travouillon et al. 2003), which lies on a slope, than above Dome C (36 \(\pm\) 10 m, Agabi et al. 2006). Above Dome A the turbulence is expected to be even weaker.
The surface winds of Antarctica are among the principal sources of turbulence near the ground. The dominant drivers of the surface winds are the sloping terrain and the radiative cooling of the surface (Schwerdtfeger, 1984). The radiative cooling produces a temperature inversion that, together with the sloping terrain, causes a horizontal temperature gradient. This triggers a surface wind along the slope of the terrain. Strong wind shears can therefore occur at the boundary between the surface winds and the winds in the free atmosphere, which in general are geostrophic. This is the main source of instability under conditions of extreme stability, as is the case for the Antarctic atmosphere in winter. Above the summits of the Internal Antarctic Plateau the surface winds should be much weaker than elsewhere on the Plateau due to the lack of the principal cause triggering them: a sloping terrain.
These elements justify the enthusiastic interest of astronomers in these sites (Storey et al., 2003; Fossat, 2005). Studies of the atmospheric properties of these regions are fundamental for applications to ground-based astronomy. However, we need a better quantification of these characteristics, using instruments and measurements as well as models and simulations, in order to fill the gaps of uncertainty or doubt that still remain (Masciadri et al., 2006), extending our attention to a comparative analysis of different sites of the Internal Antarctic Plateau.

It is important to produce statistical estimates of meteorological parameters at all heights from the ground to verify if atmospheric conditions are _always_ advantageous for astronomical applications. Indeed, it has recently been shown (Geissler & Masciadri 2006), using European Centre for Medium-Range Weather Forecasts (ECMWF) analyses, that in winter the wind speed grows monotonically above \(\sim\)10 km a.s.l. (above sea level), achieving median values of the order of \(\sim\)30 m s\({}^{-1}\) at 25 km a.s.l. At this height a variation of \(\sim\)20 m s\({}^{-1}\) has been estimated between summer and winter. Such a strong wind might trigger an important decrease of the wavefront coherence time, even in the presence of weak turbulence (see discussion in Geissler & Masciadri (2006)) and, as a consequence, the potential of these sites might vanish. It would therefore be interesting to better quantify the median wind speed profile at other sites (of astronomical interest) on the Internal Antarctic Plateau, and to retrieve some general indication of the wind speed in the free atmosphere above the Internal Antarctic Plateau.
Besides that, the employment of ECMWF analyses for the characterization of the surface layer requires a deeper analysis. Geissler & Masciadri (2006) concentrated their study on heights greater than 30 m, thus excluding the surface contribution, assuming that the ECMWF analyses are not optimized for the atmospheric flow near the surface. More recently, studies (Sadibekova et al. 2006) have appeared claiming that the ECMWF analyses can be used to quantify and characterize the atmospheric properties with a good level of accuracy down to the surface level. In spite of our conviction that this conclusion is the result of a partial analysis (only summer data), we admit that in Geissler & Masciadri (2006) the authors _assumed_ (and did not prove) the limits of the ECMWF analyses in the surface layer. We therefore think that it is time to provide a dedicated analysis of this subject, to establish the limits within which we can achieve reliable estimates with General Circulation Models (GCM), i.e. with ECMWF analyses, and to identify the domains in which one is forced to employ mesoscale models. The latter are in principle more suitable to resolve phenomena happening at smaller spatial and temporal scales.
The usefulness of mesoscale models depends on the limitations of the products of the General Circulation Models (that is, the ECMWF analyses). Obviously, the employment of mesoscale models would not be justified for the wind speed if ECMWF products could already provide answers with sufficiently good accuracy.
Our group is involved in a long-term study made with mesoscale models for the simulation of the optical turbulence over the whole troposphere and low stratosphere, using the technique described in Masciadri et al. (2004), above the Internal Antarctic Plateau1. This study is therefore propaedeutic to research done with this type of model.
Footnote 1: Some attempts have been made in the past (Swain & Gallee 2006), although with different scientific goals.
It is therefore fundamental to provide a clear picture of the limitations of the ECMWF products and, at the same time, to try to retrieve the maximum amount of useful information that can be obtained from this kind of product.
In this paper we make a first attempt to quantify, above three sites of astronomical interest (the South Pole, Dome C and Dome A), the differences in some critical meteorological parameters that are directly or indirectly related to the characteristics of atmospheric turbulence. We expressly select two sites (the South Pole and Dome C) for which measurements are available and one site (Dome A) for which no measurements are available. The reasons for this choice are explained later on (paper's scientific goals).
We use data from the MARS catalogue of the ECMWF and radiosoundings from the South Pole2 and Dome C3. Analysis data are extracted from the three grid points that are nearest to the sites of interest, i.e. Dome A, Dome C and the South Pole. The coordinates of the sites are given in Table 1. We extracted analysis data from MARS at 00:00 UTC for the whole year of 2005 to ensure a complete statistical sample covering all seasons. A more detailed description of the analysis data set is given by Geissler & Masciadri (2006).
Footnote 2: ftp://amrc.ssec.wisc.edu/pub/southpole/radiosonde
Footnote 3: http://www.climantartide.it
The scientific goals of this paper are:
(1) To carry out a detailed comparison of radiosoundings and analyses of the wind speed and the potential temperature (the main critical parameters defining the stability of the atmosphere) near the surface (the first 150 m), for winter as well as summer, above Dome C and the South Pole. This will permit us to quantify the uncertainty between measurements and ECMWF analyses in this vertical slab. The idea is therefore to define the conditions in which the ECMWF analyses can be used to characterize, with a good level of reliability and accuracy, some atmospheric parameters, and to use this tool to characterize a site for which no measurements are available; this is, of course, the interest of a model. Depending on the results of this analysis we will perform comparisons of meteorological parameters above the South Pole, Dome C and Dome A using ECMWF analyses (Section 2).
(2) Using radiosoundings, we will estimate the statistical median values of the wind speed in the first tens of meters above the South Pole and Dome C over the period from April to November. We will therefore be able to quantify which site shows the better characteristics for night-time astronomical applications (Section 3).
(3) We extend the study developed by Geissler & Masciadri (2006) above Dome C for the ECMWF wind speed to the South Pole and Dome A, both located on the Internal Antarctic Plateau but at different latitudes and longitudes. In this way, we intend to quantify which site is the best for astronomical applications.
\begin{table}
\begin{tabular}{l l l}
\hline
Site & Lat. & Long. \\
\hline
Dome A\({}^{*}\) & 80\({}^{\circ}\)22\({}^{\prime}\)00\({}^{\prime\prime}\)S & 77\({}^{\circ}\)21\({}^{\prime}\)11\({}^{\prime\prime}\)E \\
 & 80\({}^{\circ}\)30\({}^{\prime}\)00\({}^{\prime\prime}\)S & 77\({}^{\circ}\) \\
\hline
\end{tabular}
\end{table}
Table 1: The coordinates of the sites.
Results of this analysis are fundamental to confirm, or put into the right perspective, the potential of Dome C.
(4) We extend the analysis of the Richardson number done by Geissler & Masciadri (2006) for Dome C at heights above 30 m to the three sites (South Pole, Dome C and Dome A), in order to quantify the regions and the periods that are less favourable to the triggering of optical turbulence and to identify the site with the best characteristics (Section 5). This result should represent the first estimate of the potential of Dome A for astronomical applications; in other words, we aim to provide some reliable results and conclusions even before any measurements are made at that site. This study has therefore a double interest: firstly, the intrinsic result itself; secondly, this analysis should open the path to a different approach for a fast and reliable classification of potential astronomical sites.
## 2 ECMWF Analyses versus radiosoundings
In this section the ECMWF analyses are compared to radiosoundings from Dome C and the South Pole in order to investigate the reliability of the ECMWF data over an Antarctic site. The radiosoundings and the ECMWF analyses used for this comparison refer to the year 2006. This year was chosen because of its richer sample of available radiosoundings. The reason why the comparisons of analyses from different sites, discussed later in this paper, use 2005 data is that we wish to investigate a homogeneous data set: in February 2006 the number of vertical levels in the ECMWF model changed from 60 to 91.
When studying the difference between radiosoundings and analyses, particular interest is paid to the first tens of meters, since this is where we expect the largest turbulent activity, and this range was not studied by Geissler & Masciadri (2006). The medians of the difference in wind speed and temperature between ECMWF analyses and radiosoundings, for summer (December, January and February) and winter (June, July and August), are shown in Fig. 1. A total number of 73 nights has been used in summer and 75 in winter. In the free atmosphere we observe that the radiosoundings do not reach as high altitudes in winter as they do in summer. A similar effect was observed above the South Pole in our previous study (Geissler & Masciadri 2006). This effect makes it difficult to study the reliability of the ECMWF data at high altitudes in winter. In this season the balloons frequently explode. This happens, very probably, because, due to the low pressure in the high part of the atmosphere, the balloons increase their size until they finally explode; the elastic material of the balloons is also much more fragile in winter, when the temperature is much lower than in summer in the high part of the atmosphere.
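As a concrete illustration, the median-difference profiles of Fig. 1 amount to interpolating each night's analysis and sounding onto a common height grid and taking the per-level median; the sketch below assumes this reduction, with all variable names (and the grid itself) being our own illustrative choices rather than the authors' actual scripts.

```python
import numpy as np

def median_difference(ecmwf_nights, sonde_nights, z_grid):
    """Median of (ECMWF - radiosounding) at each height over all nights.

    Each input is a list of (z, value) array pairs, one per night, with z
    increasing; profiles are linearly interpolated onto the common z_grid.
    """
    diffs = []
    for (ze, ve), (zs, vs) in zip(ecmwf_nights, sonde_nights):
        diffs.append(np.interp(z_grid, ze, ve) - np.interp(z_grid, zs, vs))
    return np.nanmedian(np.asarray(diffs), axis=0)

# e.g. 73 summer nights on a 0-20 km grid:
# profile = median_difference(ecmwf_summer, sondes_summer, np.arange(0, 20e3, 100))
```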
In local summer the median difference in the wind speed never exceeds 1 m s\({}^{-1}\), and closest to the ground it is even smaller. During local winter the median difference in wind speed never exceeds 0.5 m s\({}^{-1}\) in the upper atmosphere, though the radiosoundings only reach \(\sim\)10 km above the ground. The median difference is larger closest to the ground (\(\sim\) 3 m s\({}^{-1}\)) in winter.
The median absolute temperature difference in summer is below 2 K throughout the whole 20 km. In the proximity of the surface this difference is of the order of 1 K, closest to the surface even less. During the winter the median difference is similar to that of the summer in the high part of the atmosphere. In the first 100 m the median difference is significantly larger, more than 6 K nearest the surface.

Figure 2 shows the same outputs as Fig. 1 but calculated for the South Pole. We report just the first 150 m because we had already calculated (Geissler & Masciadri 2006) the difference between ECMWF analyses and radiosoundings above 150 m at the South Pole. For the wind speed the conclusion is similar to that observed at Dome C. The wind speed difference remains within 1 m s\({}^{-1}\) in the first 150 m in summer. However, the analyses show a tendency to overestimate (\(\sim\) 2 m s\({}^{-1}\)) the wind speed at the very low levels in winter, even if this effect is slightly weaker than above Dome C. The median absolute temperature difference in summer is similar to that observed above Dome C and contained within \(\sim\) 1 K. Near the ground the ECMWF analyses are almost 2 K warmer than the radiosoundings. The same trend is observed in winter. However, in this season, the analyses are visibly much warmer (\(\sim\) 6 K) than the radiosoundings near the surface. The statistical uncertainty \(\sigma/\sqrt{N}\) for the wind speed is of the order of 0.2 m s\({}^{-1}\) along the whole 20 km, with a maximum of 0.4 m s\({}^{-1}\) at 5 km in summer. The statistical uncertainty for the absolute temperature is of the order of 0.3 K and slightly larger (\(\sim\) 0.6 K) in the first 30 m in winter. The precision of the temperature provided by radiosoundings is \(\sim\) 0.2 K in the boundary layer and \(\sim\) 0.4 K in the free atmosphere, while the precision of the wind speed4 is of the order of 0.15 m s\({}^{-1}\).

Footnote 4: More precisely, the velocity uncertainty is defined in the technical specification as the standard deviation of the differences in twin soundings.

Figure 1: The median of the difference of the absolute temperature and the wind speed between ECMWF analyses and radiosoundings (ECMWF - radiosoundings) for summer (December, January, February) and winter (June, July, August) at Dome C in 2006.

Figure 2: The median of the difference of the absolute temperature and the wind speed between ECMWF analyses and radiosoundings (ECMWF - radiosoundings) for summer (December, January, February) and winter (June, July, August) at South Pole in 2006.

Summarizing, the wind speed is rather well reconstructed by the ECMWF analyses, with the exception of the surface in winter, where the ECMWF analyses show a tendency to overestimate it by 2-3 m s\({}^{-1}\). For a typical wind speed of 3 m s\({}^{-1}\) this corresponds to a large discrepancy. The absolute temperature is in general warmer in the ECMWF analyses than in the radiosoundings near the surface in winter, reaching a difference of the order of \(\sim\) 6 K. The wind speed and temperature show similar trends above the two sites, with the exception of the absolute temperature in winter: at the South Pole the temperature from ECMWF analyses appears warmer over the whole 150 m while, at Dome C, only in the first 20 m.

To check whether any biases are present in the measurements provided by radiosoundings at the first vertical grid point (a critical region for radiosoundings) we calculated (Table 2) the median values of wind speed and absolute temperature in the three central winter months obtained from the Automatic Weather Station (AWS)5 (an automatic weather station is an automated version of the traditional weather station that enables measurements from remote areas) and compared these outputs with those obtained from radiosoundings at the first grid point. Data of 2004 are used for the South Pole because no AWS data from 2006 were available. The median wind speed from radiosoundings at the first vertical grid point is 2.8 m s\({}^{-1}\) and 5.7 m s\({}^{-1}\) respectively at Dome C and the South Pole. These values match excellently the AWS measurements: 3 m s\({}^{-1}\) and 5.7 m s\({}^{-1}\) respectively at Dome C and the South Pole in the same periods. We conclude therefore that the measurements at the first grid point from the radiosoundings are reliable, because they are in perfect agreement with the AWS measurements. We note that a similar median wind speed of 2.6 m s\({}^{-1}\) has been calculated at Dome C on a climatological scale (1984-2003) by Aristidi et al. (2005) in winter. Hudson & Brandt (2005) calculated at the South Pole a median wind speed of \(\sim\) 5 m s\({}^{-1}\) in winter on a climatological scale (1994-2003). The median values calculated in our paper on a time scale of 1 year therefore show almost no climatological trends. If we look at the absolute temperature in Table 2 we conclude that the radiosoundings are \(\sim\) 1 K colder than the AWS at Dome C and \(\sim\) 2 K at the South Pole.
Going back to Fig. 1 and Fig. 2, this conclusion supports and confirms the thesis that we are facing an overestimate produced by the ECMWF analyses and not an artifact in the measurements (radiosoundings). However, the overestimate is slightly smaller than predicted for the temperature: an overestimate of 4-5 K instead of 6 K.
Fig. 1 and Fig. 2 therefore indicate that the ECMWF analyses accurately describe the state of the free atmosphere above Dome C and the South Pole. In the boundary layer, the ECMWF data show a tendency to overestimate the wind speed and the absolute temperature, particularly in winter. The results we obtained indicate that in this layer the ECMWF analyses should be treated with more caution.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline
 & \multicolumn{3}{c}{v\({}_{0}\) (m s\({}^{-1}\))} & \multicolumn{3}{c}{T\({}_{0}\) (K)} \\
 & 1\({}^{st}\) quartile & median & 3\({}^{rd}\) quartile & 1\({}^{st}\) quartile & median & 3\({}^{rd}\) quartile \\
\hline
\multicolumn{7}{c}{Radiosoundings} \\
Dome C (2006) & 1.5 & 2.8 & 3.8 & 204.4 & 208.3 & 213.1 \\
South Pole (2004) & 4.6 & 5.7 & 7.2 & 205.8 & 209.7 & 213.6 \\
\multicolumn{7}{c}{AWS} \\
Dome C (2006) & 2.0 & 3.0 & 4.5 & 204.8 & 209.0 & 213.6 \\
South Pole (2004) & 3.6 & 5.7 & 7.4 & 206.7 & 211.4 & 215.0 \\
\hline
\end{tabular}
\end{table}
Table 2: The median wind speed and absolute temperature for the winter months (JJA) from the lowest level of the radiosoundings compared to values obtained with the AWS. First and third quartiles are also given.

Figure 3: The gradients of the potential temperature and the wind speed near the surface for Dome C and the South Pole in July 2006. The dashed lines refer to Dome C and the solid lines refer to the South Pole. Thick lines are ECMWF analyses and thin lines are radiosoundings. In the top plots (a and b) the radiosoundings are interpolated with a step of 100 m, in the bottom plots (c and d) the step is 10 m.
These conclusions also bear on the thesis we expressed in the Introduction concerning the paper of Sadibekova et al. (2006), who claimed a good agreement between radiosoundings and ECMWF analyses even at the surface.
In spite of the fact that they used ERA re-analyses (a product having a lower resolution than the MARS catalogue used in our study), the agreement between radiosoundings and analyses in their data matches well with our findings and predicts a good agreement between radiosoundings and ECMWF analyses in summer. Our analysis, extended to winter, reveals that in this season the agreement is far from good, and the sharp changes in wind speed and temperature closest to the surface measured by the radiosoundings are not well reconstructed by the ECMWF analyses.
To provide the most comprehensive and compact comparison of ECMWF analyses and radiosoundings above Dome C and the South Pole in the proximity of the surface, we now focus on the two key parameters defining thermodynamic stability, i.e. the gradients of the potential temperature and of the wind speed. Indeed, only a study of the simultaneous systematic effects on both quantities can tell us whether we can use ECMWF analyses to quantify the thermodynamic stability in the surface layer.
Fig. 3 shows the median gradient of the potential temperature (left) and the median of the square of the gradient of the wind speed (right) in the first 150 m, with a vertical resolution of 100 m (a and b) and of 10 m (c and d). As expected, the radiosoundings show a sharper gradient than the analyses near the surface. We can observe that the ECMWF analyses are able to identify that the gradients above Dome C are larger than above the South Pole. Unfortunately, a precise quantification is missing and, even in the case of the best vertical resolution (cases c and d), the offset produced by the analyses with respect to the radiosoundings in the two parameters (\(\partial\theta/\partial z\) and \((\partial v/\partial z)^{2}\)) is not comparable above the two sites (Dome C and the South Pole). This implies that the ECMWF analyses do not smooth out the potential temperature and wind speed gradients in a similar way above the two sites.
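The resolution dependence seen in Fig. 3 (100 m versus 10 m steps) can be emulated by resampling a sounding before differencing; the linear interpolation and the use of np.gradient below are our own illustrative choices:

```python
import numpy as np

def resampled_gradients(z, theta, v, step):
    """d(theta)/dz and (dv/dz)^2 after resampling to a uniform step (m)."""
    zi = np.arange(z.min(), z.max(), step)
    dtheta = np.gradient(np.interp(zi, z, theta), zi)   # K per m
    shear2 = np.gradient(np.interp(zi, z, v), zi) ** 2  # s^-2
    return zi, dtheta, shear2

# compare resampled_gradients(z, th, v, 100.0) with step=10.0 (Fig. 3 a,b vs c,d)
```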
Knowing that the Richardson number depends on the ratio of \(\partial\theta/\partial z\) and \((\partial v/\partial z)^{2}\), we conclude that it is quite risky to draw any conclusion from a comparative analysis of the Richardson number in the surface layer between different sites calculated with the ECMWF analyses. As a consequence, we can retrieve a ranking of the three sites with respect to the thermal and the dynamic gradients in an independent way, but we cannot retrieve a ranking of the three sites with respect to the Richardson number in the surface layer using ECMWF analyses. We therefore have to limit ourselves to a comparative analysis of \(\partial\theta/\partial z\) and \((\partial v/\partial z)^{2}\). In Section 5 we will perform the Richardson number comparison in the free atmosphere, where we showed the ECMWF analyses to be reliable.
For a reliable study of the Richardson number in the surface layer we need, at present, radiosoundings. A forthcoming paper on this topic is in preparation.
(\\partial\\theta/\\partial z\\) and \\((\\partial v/\\partial z)^{2}\\) at the South Pole, Dome C and Dome A
As a consequence of the conclusions obtained in the previous section, we present here the 'thermal' and 'dynamic' properties of the surface layer in an independent way.
The change of the potential temperature with height indicates the thermal stability of the atmosphere. A positive gradient defines stable conditions: the vertical displacement of air is suppressed, and so is the production of dynamic turbulence.
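Potential temperature is not measured directly by the sondes; it follows from temperature and pressure through the standard Poisson relation \(\theta=T\,(p_{0}/p)^{R/c_{p}}\). The sketch below uses dry-air constants, an assumption we make for the very dry Antarctic atmosphere:

```python
import numpy as np

KAPPA = 287.05 / 1004.0  # R/c_p for dry air

def potential_temperature(T, p, p0=1000.0):
    """theta (K) from temperature T (K) and pressures p, p0 (hPa)."""
    return np.asarray(T) * (p0 / np.asarray(p)) ** KAPPA

def theta_gradient(z, T, p):
    """Vertical gradient of potential temperature, K per m."""
    return np.gradient(potential_temperature(T, p), np.asarray(z))
```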
The absence of sunlight during the Antarctic night and the consequent radiative cooling of the snow surface create a strong temperature inversion close to the ground. The monthly medians of the gradient of the potential temperature in the first 150 m for the three sites, calculated with the ECMWF analyses, are shown in Fig. 4. From February to October all the gradients are positive and indicate a thermally stable stratified atmosphere near the surface. The inversion above Dome A (thick lines) is particularly intense when compared to Dome C (thin lines) and to the South Pole (dash-dotted lines) during these months. The value of the gradient at the lowest level is significantly larger for Dome A during almost all the year. From March to August there is a very sharp change in the slope of the gradient of Dome A at around 20 m above the surface.
During the summer months all three sites have a neutral stratification near the surface, i.e. \(\partial\theta/\partial z\approx 0\). Vertical motion of the air is not suppressed, but neither is it encouraged: a small perturbation can trigger dynamic turbulence.
The median of the square of the gradient of the wind speed in the first 50 m above the ground is shown in Fig. 5. The gradient of the wind speed at the lowest level is largest at Dome A in every month (except June, when it is slightly larger above Dome C).
## 3 Radiosoundings: the surface wind speed
Above the summits of the Internal Antarctic Plateau the surface winds are expected to be weaker than elsewhere on the Plateau. Fig. 6 shows the median wind speed near the surface measured with radiosoundings at the South Pole (dashed lines) and Dome C (solid lines) from April to November. We observe that, while it is true that the wind speed at the very lowest level is weaker at the summit (Dome C) than on the slope (South Pole), there is a sharp increase in the wind speed above Dome C in the first few tens of meters. At a height of 10-20 m, from May to November, i.e. in winter, the wind speed is higher above Dome C than above the South Pole. Above this height the wind speed at Dome C is either higher than or very similar to that observed above the South Pole.
In the middle of winter (June, July and August), the wind speed above Dome C reaches 8 m s\({}^{-1}\) at 20 m and 9 m s\({}^{-1}\) at 30 m. The sharp change of the wind speed in the first 10-20 m matches our expectation of a large wind speed gradient. This is indeed a necessary condition to justify the presence of optical turbulence in the surface layer (Agabi et al., 2006) in spite of very stable thermal conditions (i.e. a positive gradient of the potential temperature).
Trinquet et al. (2008), in a small sample of radiosoundings in winter, observed a wind speed of 5 m s\({}^{-1}\) at 20 m and 8 m s\({}^{-1}\) only at 70 m. Our results, obtained with a complete statistical sample in winter, tell us that the Trinquet et al. estimate is too optimistic and that we should expect a larger wind speed at low heights.
We conclude that in the first 10-20 m the mechanical vibrations that might derive from the impact of the atmospheric flow, moving at 8-9 m s\({}^{-1}\), on a telescope structure are probably more critical above Dome C than above the South Pole, and should be carefully taken into account in the design of astronomical facilities.
## 4 ECMWF analyses: the weakest wind speed in the troposphere and the lower stratosphere
Fig. 7 shows the monthly median of the wind speed from ground level up to 25 km a.s.l. for Dome A (blue lines), Dome C (green lines) and the South Pole (red lines). During summer the wind speed is quite weak in the upper part of the atmosphere for all three sites. A maximum is observed at roughly 8 km a.s.l. From December to April the median wind speed is never larger than 15 m s\({}^{-1}\) over the whole 20 km above all the sites. As the winter approaches, the wind speed in the upper atmosphere increases. From April to November the wind speed increases monotonically above 10 km a.s.l., and the highest wind speed is more likely to be located in the highest part of the atmosphere. However, the rate at which the wind speed grows is not the same above the three sites. The smallest increase rate is observed at the South Pole and the largest rate is found above Dome C. The differences are far from negligible: in this slab the median wind speed at Dome C is almost twice that of Dome A or the South Pole.

Figure 4: The monthly median of the gradient of the potential temperature for 2005, Dome A (thick lines), Dome C (thin lines) and the South Pole (dashed lines).
Geissler & Masciadri (2006) found a similar wind speed trend above Dome C from ECMWF analyses for the years 2003 and 2004. The wind speed trend above Dome C is therefore confirmed in different years. However, as announced in the Introduction, the goal of our paper is the comparison of Dome C with other sites, to evaluate which are the best for astronomical applications.
The different wind speed increase rates observed in Fig. 7 above the different sites are very probably related to the synoptic circulation of the Antarctic regions. Indeed, the jet stream that characterizes the vertical wind speed profile of mid-latitude sites at the tropopause level is absent here. On the other hand, the polar vortex creates strong high-altitude winds surrounding the polar high in winter (Schwerdtfeger 1984), and the wind speed increases monotonically above 10 km. The result observed in Fig. 7 can be explained by the fact that the South Pole is located near the centre of the polar vortex, so that the influence of the polar vortex is weak at this site. Dome A and Dome C are situated further from the centre of the continent, and thus from the centre of the polar vortex; it is to be expected that the wind speed is larger above these sites. The farther a site is from the polar high (the centre of the polar vortex), the greater is the wind speed strength above 10 km.
According to this explanation the wind speed strength in the upper atmosphere should be related to the distance of the site from the centre of the polar vortex. If we could know the exact position of the polar high we would have a perfect tool to identify, _a priori_, the site with the weakest wind speed above 10 km in winter. Unfortunately the polar vortex is not a perfect cone and the centre of the vortex is not located at the same coordinates at all heights. To verify the sensitivity to this effect we included in our sample a further site (Dome F, (77.31 S, 39.7 E), \(h\)=3810 m) and added its median wind speed to Fig. 7. In this picture it is evident that the South Pole and Dome C have, respectively, the weakest and the greatest wind speed above 10 km. The wind speed above Dome A and Dome F is mostly comparable, and it is more difficult to state which site has the weakest wind speed. Dome A is slightly better if we consider a statistical analysis of the whole year, but the difference is not so important. This indicates that the polar high probably fluctuates in a region located between the South Pole, Dome A and Dome F, as indicated by the dashed region in Fig. 8.

Figure 5: The monthly median of the square of the gradient of the wind speed in the first 50 m for the year 2005. Dome A (thick lines), Dome C (thin lines) and the South Pole (dashed lines).
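If the high-altitude wind strength really scales with distance from the polar high, a first-order ranking of the sites only requires great-circle distances. In the sketch below the site coordinates are those quoted in the text, while the polar-high position is a made-up placeholder inside the dashed region of Fig. 8:

```python
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine distance between two (lat, lon) points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp, dl = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return 2 * radius_km * np.arcsin(np.sqrt(a))

sites = {"South Pole": (-90.0, 0.0), "Dome A": (-80.37, 77.35),
         "Dome C": (-75.10, 123.35), "Dome F": (-77.31, 39.7)}
polar_high = (-87.0, 60.0)  # assumed position, for illustration only
for name, (lat, lon) in sites.items():
    print(f"{name}: {great_circle_km(lat, lon, *polar_high):.0f} km")
```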
## 5 The Richardson number for the South Pole, Dome C and Dome A
The Richardson number is an indicator of the stability of the atmosphere:
\\[Ri=\\frac{g}{\\theta}\\frac{\\partial\\theta/\\partial z}{(\\partial v/\\partial z)^{2}} \\tag{1}\\]
where g is the gravitational acceleration (9.8 m s\({}^{-2}\)), \(\theta\) is the potential temperature and v is the wind speed. The atmosphere is considered to have a stable stratification when the Richardson number is larger than a critical value, typically 0.25. If the Richardson number is less than the critical value the stratification is classified as unstable. The smaller the Richardson number, the higher the probability of triggering turbulence.
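Equation (1) translates directly into a few lines of code. The 9.8 m s\({}^{-2}\) and the 0.25 critical value follow the text; the array names and the finite differencing are illustrative assumptions:

```python
import numpy as np

G = 9.8  # gravitational acceleration, m s^-2

def richardson(z, theta, v):
    """Ri profile of eq. (1) from potential temperature and wind speed."""
    dtheta = np.gradient(theta, z)        # d(theta)/dz, K per m
    shear2 = np.gradient(v, z) ** 2       # (dv/dz)^2, s^-2
    return (G / theta) * dtheta / shear2

def unstable_heights(z, theta, v, ri_crit=0.25):
    """Heights where the stratification is unstable, i.e. Ri < ri_crit."""
    return z[richardson(z, theta, v) < ri_crit]
```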
The comparison of the Richardson number above different sites allows us to make a relative estimate of which site is characterized by a higher or lower probability of triggering turbulence. In Geissler & Masciadri (2006, their fig. 14) the median of the inverse of the Richardson number for Dome C is shown in different slabs and periods of the year. From such results it is possible to retrieve a comparative analysis that is necessarily qualitative. In other words, we can say whether a region shows a higher or smaller probability of triggering turbulence, but we have no reference to quantify these differences, nor can we conclude whether these differences are negligible or not. In order to provide insight into this question we calculated (Fig. 9) the median of the inverse of the Richardson number (1/Ri) for Dome C (thick solid lines) and for Mt. Graham (thin dashed lines), taken as an example of a typical mid-latitude site, for each month of the whole year 2005. The inverse of the Richardson number is shown instead of the Richardson number itself because the inverse shows a better dynamic range. The smaller 1/Ri is, the more stable is the atmosphere. 1/Ri is smaller above Dome C than above Mt. Graham almost everywhere. This result is coherent with the optical turbulence measurements obtained above the two sites (Egner, Masciadri & McKenna 2007; Agabi et al. 2006), and for this reason it definitely confirms the method proposed by Geissler & Masciadri (2006) to study 1/Ri as a qualitative relative indicator to rank sites with respect to the probability of triggering turbulence. During September and October, when the median wind speed is remarkably strong at high altitudes at Dome C (see Fig. 7), 1/Ri is actually larger than for the mid-latitude site. This confirms, from a qualitative and quantitative point of view, that the high part of the atmosphere in September and October is a region to be monitored carefully, because the probability of triggering instabilities is comparable to, and even larger than, that above mid-latitude sites. A strong increase in the wind speed is most certainly the cause of the high 1/Ri over Dome C at such altitudes.

Figure 6: The wind speed near the ground at Dome C (solid lines) and the South Pole (dashed lines), April to November 2006.
The vertical median profiles of 1/Ri for the three Antarctic sites (Dome A, Dome C and the South Pole) are shown in Fig. 10. In local summer the profiles of the different sites are similar to each other: 1/Ri has a maximum at ground level and a smaller peak slightly above 6 km a.s.l. Above 10 km a.s.l. the atmosphere is very stable at all three sites. From April/May the instability above 10 km increases. At the end of the winter (September and October) the instability in the upper part of the atmosphere is even larger than the maximum value observed near 6 km for Domes A and C in summer. The instability at Dome C is more pronounced than at Dome A. Above the South Pole 1/Ri shows the best (i.e. the most stable) conditions over the whole year.
## 6 Conclusions
In this paper we provide a first comparison of the atmospheric characteristics above three different sites on the Internal Antarctic Plateau: Dome C, Dome A and the South Pole. More precisely, we try to answer the specific questions defined in the introduction.

Figure 7: The monthly median wind speed for 2005. Dome A (blue line), Dome C (green line), South Pole (red line) and Dome F (black line).
(1) The comparison of the ECMWF analyses with the radiosoundings shows that the analyses can accurately describe the atmosphere above the Internal Antarctic Plateau in the whole range from 10 m to 20 km above the surface. In no season does the median difference of the wind speed exceed 1 m s\({}^{-1}\) above the first 10 m. The median difference of the absolute temperature is within 2 K in the same vertical slab. In the surface layer the wind speed discrepancy between analyses and radiosoundings is slightly larger (2-3 m s\({}^{-1}\)), while the ECMWF analyses show a tendency to overestimate the absolute temperature measured by radiosoundings at the lowest levels in winter (\(\Delta\)T \(\sim\) 4-5 K). A statistical analysis reveals that most of the radiosoundings explode in winter at about 10-12 km. This does not allow us to estimate the reliability of the ECMWF analyses at these high altitudes.
(2) We proved that the ECMWF analyses do not produce accurate estimates in the surface layer, confirming what was only assumed by Geissler & Masciadri (2006). This result answers our original question expressed in the Introduction: the ECMWF analyses intrinsically have limitations for the characterization of the atmospheric flow in this vertical slab. This justifies the employment of mesoscale models but, at the same time, also tells us that it will be fundamental to prove that mesoscale models can do better than General Circulation Models (GCM) in this vertical slab.
(3) We conclude that above all three sites the potential temperature in the surface layer is extremely stable, even if the ECMWF analyses generally underestimate its gradient when compared to measurements obtained by radiosoundings. This effect is particularly evident in winter. Dome A is by far the site with the steepest gradients of potential temperature and wind speed when compared to the South Pole and Dome C.
(4) We proved that the median wind speed in the first meters above the ground is weaker at Dome C than at the South Pole from April to November. However, the wind shear in the surface layer at Dome C is much larger than at the South Pole, reaching at 10-20 m a wind speed of 8-9 m s\({}^{-1}\) in winter. Such a strong wind shear combined with a stable stratification of the air in this layer is most likely the cause of the intense optical turbulence that has been measured in the first tens of meters at Dome C (Agabi et al., 2006). Such a strong wind speed at this height might be a source of vibrations produced by the impact of the atmospheric flow on telescope structures, and should therefore be taken into account in the conception and design of astronomical facilities.

Figure 9: The monthly median for 2005 of the inverse of the Richardson number (1/Ri) for Dome C (thick solid lines) and Mt. Graham (thin dashed lines).
(5) Median monthly values of the inverse of the Richardson number (1/Ri) indicate that the probability of triggering instabilities is larger above a mid-latitude site (for which we have a reliable characterization of the optical turbulence) than above any of the three sites on the Internal Antarctic Plateau. Above all three Antarctic sites 1/Ri is visibly smaller than that calculated above Mt. Graham (selected as representative of mid-latitude sites). This is the first time that such a conclusion has been reached, and it definitely proves that the method presented in Geissler & Masciadri (2006) is reliable.
(6) Moreover, our analysis also permitted a more sophisticated discrimination of the quality of 1/Ri above the three sites. Dome A and the South Pole show, indeed, more stable conditions than Dome C above the first 100 m. This is probably due to the polar vortex which, producing an increase of the wind speed in the upper atmosphere, also increases the probability of triggering thermodynamic instabilities.
(7) We showed that it is risky to retrieve estimates of the Richardson number in the surface layer (Fig. 3), because we did not find an equivalent smoothing effect of the ECMWF analyses on the gradient of the potential temperature and on the gradient of the wind speed above different sites.
(8) In the free atmosphere, above the first 10 km, the polar vortex induces a monotonic increase of the wind speed in winter that is proportional to the distance of the site from the polar high. Dome C therefore shows the largest wind speed above 10 km in winter. At Dome C the wind speed at 15 km can easily be almost twice that of Dome A and even three times the wind speed at the South Pole in winter. The wind speed above the South Pole is the weakest among the three sites over the whole 20 km in all seasons. This conclusion therefore raises fundamental warnings for astronomical applications.

Figure 10: The monthly median for 2005 of the inverse of the Richardson number (1/Ri) for Dome A (blue lines), Dome C (green lines) and the South Pole (red lines).
(9) This study allowed us to draw a first comprehensive picture of the atmospheric properties above the Internal Antarctic Plateau. In spite of the presence of generally good conditions for astronomical applications, Dome C does not appear to be the best site with respect to the wind speed, in the free atmosphere as well as in the surface layer. Both the South Pole and Dome A show a weaker wind speed in the free atmosphere. Estimates related to the surface layer need to be taken with precaution: ECMWF analyses cannot be used to draw definitive conclusions on comparisons of the three sites in this vertical slab, due to their limited reliability in this thin atmospheric slab (see Section 5), and radiosoundings are available only for Dome C and the South Pole. Above Dome A the gradient of the potential temperature is particularly large in the very near surface layer, indicating conditions of extreme thermal stability that might be associated with a strong value of the optical turbulence in this vertical range when a thermodynamic instability occurs (possibly even larger than above Dome C). However, our study showed that, to predict the thickness of such a layer, we would need measurements or simulations with an atmospheric mesoscale model with a higher spatial resolution near the ground, able to better resolve the evolution of the atmospheric flow. This is a part of our forthcoming activities.
In conclusion, at present, the one solid argument that makes Dome C preferable to the South Pole for astronomical applications is the extreme thinness of the optical turbulence surface layer. We expect at Dome A comparable or even larger optical turbulence values with respect to Dome C in the surface layer. We cannot conclude whether the surface layer at Dome A is thinner than that observed above Dome C. However, our study clearly indicates that Dome C is not the best site on the Internal Antarctic Plateau with respect to the wind speed in the free atmosphere as well as in the surface layer, nor is it the site with the most stable conditions in the free atmosphere. Both Dome A and the South Pole show more stable conditions in the free atmosphere.
## Acknowledgments
This study has been carried out using radiosoundings from the AMRC (Antarctic Meteorological Research Center, University of Wisconsin, Madison; ftp://amrc.ssec.wisc.edu/pub/southpole/radiosonde) and from the Progetto di Ricerca 'Osservatorio Meteo Climatologico' of the Programma Nazionale di Ricerche in Antartide (PNRA), http://www.climantartide.it. ECMWF products are extracted from the Catalog MARS, http://www.ecmwf.int. This study has been funded by the Marie Curie Excellence Grant (FOROT) - MEXT-CT-2005-023878.
## References
* Aristidi et al. (2005) Aristidi, E., Agabi, K., Azouit, M., Fossat, E., Vernin, J., Travouillon, T., Lawrence J.S., Meyer, C., Storey, J.W.V., Halter, B., Roth, W.L., Walden, V., 2005, A&A, 430, 739
* Agabi et al. (2006) Agabi, A., Aristidi, E., Azouit, M., Fossat E., Martin F., Sadibekova T., Vernin J., Ziad A., 2006, PASP, 118, 344
* Egner et al. (2007) Egner, S., Masciadri, E., McKenna D., 2007, PASP, 119, 669
* Fossat (2005) Fossat, E. 2005, JApA, 26, 349
* Geissler & Masciadri (2006) Geissler, K., Masciadri, E., 2006, PASP, 118, 1048
* Hudson & Brandt (2005) Hudson, S.R. & Brandt, R.E., 2005, Journ. of Clim., 1673
* Lawrence et al. (2004) Lawrence, J., Ashley M., Tokovinin A., Travouillon T, 2004, Nature, 431, 278
* Marks (2002) Marks, R.D., 2002, A&A, 385, 328
* Marks et al. (1999) Marks, R.D., Vernin, J., Azouit M., Manigault J.F., Clevelin C., 1999, A&AS, 134, 161
Figure 8: Antarctica map. The sites of South Pole, Dome A, Dome C and Dome F are labeled with a black point. The dashed region indicates the ’position space’ of the polar high at different heights as retrieved from the Fig. 7.
* Marks et al. (1996) Marks, R.D., Vernin, J., Azouit, M., Briggs, J.W., Burton, M.G., Ashley, M.C.B., Manigault, J.F., 1996, A&AS, 118, 385
* [] Masciadri, E., Avila, R., Sanchez, L.J., 2004, RMxAA, 40, 3
* [] Masciadri, E., Lascaux, F., Stoesz, J., Hagelin, S., Geissler, K., 2007, \"Large Astronomical Infrastructures at Concordia, prospects and constraints for Antarctic Optical/IR Astronomy\", EAS Publ. Series, 25, 57
* [] Sadibekova, T., Fossat, E., Genthon, C., Krinne,r G., Aristidi, E., Agabi, K., Azouit, M., 2006, Antarctic Science, 18, 437
* [] Schwerdtfeger, W., 1984, Weather and climate of the Antarctic, Developments in atmospheric science, 15 (Elsiever)
* [] Swain, M. & Gallee, H., 2006, PASP, 118, 1190
* [] Storey, J.W.V., Ashley, M., C., B., Lawrence, J.S., Burton, M.G. 2003, Memorie Sai, 2, 13
* [] Travouillon, T., Ashley, M.C.B., Burton, M.G., Storey, J. W. V., Loewenstein, R.F., 2003, A&A, 400, 1163
The atmospheric properties above three sites (Dome C, Dome A and the South Pole) on the Internal Antarctic Plateau are investigated for astronomical applications using the monthly median of the analyses from ECMWF (the European Centre for Medium-Range Weather Forecasts). Radiosoundings extended on a yearly time scale at the South Pole and Dome C are used to quantify the reliability of the ECMWF analyses in the free atmosphere as well as in the boundary and surface layers, and to characterize the median wind speed in the first 100 m above the two sites. Thermodynamic instability properties in the free atmosphere above the three sites are quantified with monthly median values of the Richardson number. We find that the probability to trigger thermodynamic instabilities above 100 m is smaller on the Internal Antarctic Plateau than on mid-latitude sites. In spite of the generally more stable atmospheric conditions of the Antarctic sites compared to mid-latitude sites, Dome C shows worse thermodynamic instability conditions than those predicted above the South Pole and Dome A above 100 m. A rank of the Antarctic sites done with respect to the strength of the wind speed in the free atmosphere (ECMWF analyses) as well as the wind shear in the surface layer (radiosoundings) is presented.
keywords: site testing - atmospheric effects - turbulence
arxiv-format/0806_3473v1.md | # A Spectroscopic Orbit for Regulus
D. R. Gies12, S. Dieterich1, N. D. Richardson1, A. R. Riedel1, B. L. Team1,
H. A. McAlister1, W. G. Bagnuolo, Jr.1, E. D. Grundstrom123,
S. Stefl4, Th. Rivinius4, and D. Baade5
Footnote 1: affiliation: Center for High Angular Resolution Astronomy, Department of Physics and Astronomy, Georgia State University, P. O. Box 4106, Atlanta, GA 30302-4106; [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
Footnote 2: affiliation: Visiting Astronomer, Kitt Peak National Observatory, National Optical Astronomy Observatory, operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation.
Footnote 3: affiliation: Current address: Physics and Astronomy Department, Vanderbilt University, 6301 Stevenson Center, Nashville, TN 37235
Footnote 4: affiliation: European Organisation for Astronomical Research in the Southern Hemisphere, Alonso de Cordova 3107, Vitacura, Santiago de Chile, Chile; [email protected], [email protected]
Footnote 5: affiliation: European Organisation for Astronomical Research in the Southern Hemisphere, Karl-Schwarzschild-Str. 2, 85748 Garching bei München, Germany; [email protected]
## 1 Introduction
Regulus (\\(\\alpha\\) Leo; HD 87901; HR 3982; HIP 49669) is a nearby (\\(d=24.3\\pm 0.2\\) pc; van Leeuwen 2007) intermediate mass star of spectral type B7 V (Johnson & Morgan 1953) or B8 IVn (Gray et al., 2003). It is one of a number of nearby B- and A-type stars exhibiting extremely fast rotation. The very broad shape of its photospheric absorption lines indicates a projected rotational velocity of \\(V\\sin i=317\\pm 3\\) km s\\({}^{-1}\\) (McAlister et al., 2005). The full picture of its fast spin came with the first interferometric observations of Regulus with the CHARA Array optical long baseline interferometer (McAlister et al., 2005). These observations showed that the star is rotationally flattened and gravity darkened at its equator. Models of the spectrum and interferometry demonstrate that the star has a rotation period of 15.9 hr, with an equatorial velocity equal to 86% of the critical velocity, where centripetal acceleration balances gravity.
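As a quick consistency check on these numbers, the equatorial rotation speed implied by the 15.9 hr period can be computed directly. The sketch below (Python) assumes an equatorial radius of about \\(4.2R_{\\odot}\\), an illustrative value of the order reported by McAlister et al. (2005) rather than a number quoted in this paper, and recovers a speed consistent with the measured \\(V\\sin i\\) for a nearly edge-on spin axis.

```python
import math

R_SUN_M = 6.957e8            # solar radius in meters
P_rot = 15.9 * 3600.0        # rotation period in seconds
R_eq = 4.2 * R_SUN_M         # assumed equatorial radius (illustrative value)

# equatorial speed from circumference / period
v_eq = 2.0 * math.pi * R_eq / P_rot
print(f"v_eq ~ {v_eq / 1e3:.0f} km/s")  # ~320 km/s, close to V sin i = 317 km/s
```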
The fast spin of Regulus is puzzling given its probable age (\\(\\approx 150\\) Myr; Gerbaldi, Faraggiana, & Balin, 2001). Stars born as fast rotators are expected to slow down relatively quickly after birth, and only again achieve rapid rotation at the conclusion of core H-burning through a redistribution of angular momentum (Ekstrom et al., 2008). Thus, it is surprising to find rapid rotation in Regulus, a star which is still in the middle of its core H-burning stage. On the other hand, stars that are members of interacting binaries can experience large changes in spin due to tidal interactions and mass exchange. Langer et al. (2008) discuss how mass transfer may lead to the spin-up of the mass gainer in a large fraction of these binaries. Depending on the initial separation and mass ratio, the system may merge or it may widen following mass ratio inversion, leaving the donor remnant in a large orbit that shuts down mass transfer once the donor's envelope is lost. We know of several examples of such post-mass transfer binaries with rapid rotators, including the Be X-ray binaries with neutron star companions (Coe, 2000) and Be binaries with He-star companions (Gies et al., 1998; Maintz et al., 2005; Peters et al., 2008).
Regulus does have a known wide companion (\\(\\alpha\\) Leo B at a separation of \\(\\approx 175\\arcsec\\), which is itself a binary consisting of a K2 V and M4 V pair; McAlister et al., 2005), but this companion has far too great a separation to have ever interacted directly with Regulus. There are no known closer companions, but the last significant radial velocity investigation was made in 1912 - 1913 by Mellor (1923). The scatter in the results introduced by the broad and shallow appearance of the spectral lines may have discouraged other investigators, but this early work and others (Maunder, 1892; Frost, Barrett, & Struve, 1926; Campbell, 1928; Palmer et al., 1968) suggest that any velocity variations present are relatively small. However, a low mass donor remnant would probably create only a modest reflex motion in Regulus, so the lack of demonstrated variability is not unexpected. We have made spectroscopic observations of Regulus on many occasions over the last few years, and here we present a summary of the velocities measured in these spectra. We find that Regulus is in fact a low amplitude, single-lined, spectroscopic binary, and we discuss the possible nature of the companion.
## 2 Observations and Radial Velocities
Table 1 lists the sources and properties of the spectra of Regulus we used to measure radial velocity. The spectra from run numbers 1 - 8 were made by us and have moderate resolving power and good S/N (usually better than 100 per pixel). These include spectra obtained with the Kitt Peak National Observatory Coude Feed Telescope (Valdes et al., 2004), the Czech Academy of Sciences Ondrejov Observatory telescope and HEROS spectrograph (Stefl et al., 2000), and the Multiple-Telescope Telescope at the Georgia State University Hard Labor Creek Observatory (Barry, Bagnuolo, & Riddle, 2002). We have also obtained a number of spectra from on-line archives including the ESO La Silla 50 cm telescope and HEROS spectrograph, University of Toledo Ritter Observatory echelle spectrograph (Morrison et al., 1997), the Elodie spectrograph of the Observatoire de Haute Provence (Moultaka et al., 2004), the ESO VLT and UVES (UVES Paranal Observatory Project, ESO DDT Program ID 266.D-5655; Bagnulo et al., 2003), La Silla 3.6 m telescope and HARPS spectrograph (Mayor et al., 2003), and La Silla 2.2 m telescope and FEROS spectrograph (Kaufer et al., 1999; Weselak et al., 2008). In many cases, the archival spectra included a series made within a few minutes time, and we report here the average velocity of such groups. Finally, we also collected a series of 12 UV high dispersion spectra from the archive of the _International Ultraviolet Explorer (IUE)_. All these spectra were reduced to a rectified continuum format using standard routines in IRAF6, and then each group was transformed onto a uniform, heliocentric wavelength grid in increments of \\(\\log\\lambda\\). Many of these observations record the red spectrum in the vicinity of H\\(\\alpha\\), and we usually removed the atmospheric telluric lines in this part of the spectrum using contemporaneous spectra of rapidly rotating A-type stars or using the atlas of atmospheric transmission7 made by L. Wallace, W. Livingston, and K. Hinckle (KPNO). The _IUE_ spectra were similarly transformed to a uniform \\(\\log\\lambda\\) grid (Penny, Gies, & Bagnuolo, 1997). In prior studies of more distant O-stars, we have checked the wavelength calibration by registering the positions of interstellar lines with those in the average spectrum. Regulus, however, is so close that most of the interstellar lines are too weak, and in the end we relied on the measurement of a single feature, O i \\(\\lambda\\)1302, for registration, which introduces some additional scatter into our radial velocities from the _IUE_ spectra.
Footnote 6: IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.
Footnote 7: [ftp://ftp.noao.edu/catalogs/atmospheric_transmission/](ftp://ftp.noao.edu/catalogs/atmospheric_transmission/)
All the radial velocities were measured using the cross-correlation method, with uncertainties estimated according to the scheme described by Zucker (2003). All the optical spectra were cross-correlated with a synthetic model spectrum taken from the work of Martins et al. (2005). This spectrum is based upon a Kurucz model atmosphere for solar abundances, interpolated to \\(T_{\\rm eff}=12200\\) K and \\(\\log g=3.5\\), which are close to average values over the visible hemisphere (McAlister et al., 2005). The model template was smoothed with a rotational broadening function to better match the actual spectrum, and in each case the model was transformed to the observed wavelength grid. Unfortunately, the models of Martins et al. (2005) do not extend to UV wavelengths, so for the _IUE_ spectra we used a model UV template from the TLUSTY/Synspec models of Lanz & Hubeny (2007) for \\(T_{\\rm eff}=15000\\) K (the lowest temperature in their grid) and \\(\\log g=3.75\\). The use of a different template for the UV spectra may introduce systematic differences from those obtained from the optical spectra, but these errors are probably comparable to the measurement errors (see §3).
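A minimal sketch of this measurement step is given below. On a uniform grid in \\(\\log\\lambda\\) a Doppler shift is a rigid translation, so the velocity follows from the lag that maximizes the cross-correlation with the template. The arrays, the pixel search window, and the sign convention here are illustrative stand-ins rather than the actual reduction code, and the real uncertainties come from the Zucker (2003) formalism rather than from this simple peak fit.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def rv_from_ccf(spec, template, dlnlam, max_lag=50):
    """Radial velocity from the cross-correlation peak.

    `spec` and `template` are continuum-rectified spectra on the same
    uniform ln(lambda) grid with step `dlnlam`; a shift of k pixels then
    corresponds to a velocity of about c * k * dlnlam.
    """
    s = spec - spec.mean()
    t = template - template.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    ccf = np.array([np.sum(s * np.roll(t, k)) for k in lags])
    i = int(np.argmax(ccf))
    shift = float(lags[i])
    if 0 < i < len(ccf) - 1:  # parabolic refinement of the peak position
        y1, y2, y3 = ccf[i - 1], ccf[i], ccf[i + 1]
        shift += 0.5 * (y1 - y3) / (y1 - 2.0 * y2 + y3)
    return C_KMS * shift * dlnlam  # sign depends on the shift convention
```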
Our 168 measurements are gathered in Table 2 (given in full in the electronic version), which lists the heliocentric Julian date of mid-observation, the orbital phase (§3), the radial velocity and its internal error, the observed minus calculated residual (§3), and the run number corresponding to the observational journal in Table 1.
## 3 Orbital Elements
The range in the radial velocities is larger than that expected from measurement errors, so we searched for evidence of periodic variations using the discrete Fourier transform and CLEAN method (Roberts, Lehar, & Dreher, 1987) and phase dispersion minimization (Stellingwerf, 1978). Both procedures identified the presence of one significant period at \\(P=40.11\\pm 0.02\\) d, with a power indicating a false alarm probability of \\(\\sim 10^{-22}\\) that the peak results from random errors (Scargle, 1982). This period is too long to be related to rotational or pulsational variations, so we assume that it results from orbital motion in a binary. We then derived the remaining orbit elements using the nonlinear, least squares fitting program of Morbey & Brosterhus (1974) by keeping the orbital period fixed at the value given above. Each measurement was assigned a weight proportional to the inverse square of the larger of the measurement error or 1 km s\\({}^{-1}\\) (to account for possible systematic errors between results from different groups of observations). Trials with other weighting schemes gave similar results. Elliptical solutions made no significant improvement in the residuals from the fit (Lucy & Sweeney, 1971), so we adopted a circular fit. We present in Table 3 the standard orbital elements where \\(T_{0}\\) is the epoch of the ascending node. Note that the error in \\(T_{0}\\) increases to \\(\\pm 3.9\\) d when the full range in acceptable period is considered. The derived systemic velocity \\(V_{0}\\) is similar to the radial velocity of \\(\\alpha\\) Leo B of \\(6.56\\pm 0.22\\) km s\\({}^{-1}\\)(Tokovinin & Smekhov, 2002), which strengthens the case for a physical connection in this common proper motion pair.
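The analysis chain of this section can be sketched as follows. The snippet uses a generic Lomb-Scargle periodogram in place of the CLEAN and phase dispersion minimization codes, and a linearized sine fit in place of the Morbey & Brosterhus (1974) program, with `t`, `v`, and `sigma` standing for the Table 2 dates, velocities, and errors.

```python
import numpy as np
from astropy.timeseries import LombScargle

def circular_orbit_fit(t, v, sigma, P=40.11):
    """Weighted least-squares circular orbit at fixed period P (days).

    v(t) = V0 + A*sin(2*pi*t/P) + B*cos(2*pi*t/P) is linear in (V0, A, B),
    and the semiamplitude is K = sqrt(A**2 + B**2).
    """
    t, v, w = np.asarray(t), np.asarray(v), 1.0 / np.asarray(sigma) ** 2
    phase = 2.0 * np.pi * t / P
    X = np.column_stack([np.ones_like(phase), np.sin(phase), np.cos(phase)])
    V0, A, B = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * v))
    return V0, np.hypot(A, B)

# period search (a stand-in for CLEAN / phase dispersion minimization):
# freq, power = LombScargle(t, v, sigma).autopower()
# P_best = 1.0 / freq[np.argmax(power)]
```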
The radial velocity curve and measurements are illustrated in Figure 1. The _IUE_ measurements (_open circles_) show the largest scatter around the curve, which we think derives from errors related to registering the wavelength scale with a single interstellar line (§2). Most of the residuals for the optical spectra have a size comparable to the measurement errors and are mainly free from systematic trends. However, several of the runs that recorded the red spectrum around H\\(\\alpha\\) do have residuals that are systematically low (see run #3). We suspect that these trends are due to subtle differences in data treatment, but because these specific runs cover only a limited part of the orbital cycle, we did not apply any corrections for systematic differences.
## 4 Discussion
The orbital variation has a small semiamplitude that eluded detection in earlier studies. Consequently, the derived mass function is also small (Table 3), and we show in Figure 2 the constraints on the possible masses from the mass function. McAlister et al. (2005) used model fits to derive a probable mass of the primary star of \\(M_{1}=3.4\\pm 0.2M_{\\odot}\\), and the boundaries of this range are indicated by the vertical dotted lines. Larger orbital inclinations are favored for random orientations, and it is possible that the orbital inclination is comparable to the spin inclination of Regulus, \\(i\\approx 90^{\\circ}\\)(McAlister et al., 2005). Thus, the mass of the companion may be close to the minimum mass shown (for \\(i=90^{\\circ}\\)) of \\(M_{2}>0.30\\pm 0.01M_{\\odot}\\).
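The content of Figure 2 amounts to solving the spectroscopic mass function for \\(M_{2}\\) at each assumed inclination; a short sketch is below. Since Table 3 is not reproduced here, the numerical value of \\(f(m)\\) is back-computed from the quoted \\(M_{1}=3.4M_{\\odot}\\) and minimum mass \\(M_{2}=0.30M_{\\odot}\\), and should be read as an assumption of this illustration.

```python
import math
from scipy.optimize import brentq

def companion_mass(f, M1, inc_deg):
    """Solve f = (M2*sin(i))**3 / (M1 + M2)**2 for M2 (solar masses)."""
    sini = math.sin(math.radians(inc_deg))
    g = lambda M2: (M2 * sini) ** 3 / (M1 + M2) ** 2 - f
    return brentq(g, 1e-4, 100.0)

# mass function implied by M1 = 3.4 Msun and M2,min = 0.30 Msun at i = 90 deg
f = 0.30**3 / (3.4 + 0.30) ** 2   # ~0.0020 Msun (assumed, see text)
for inc in (90, 60, 30, 15):
    print(inc, round(companion_mass(f, 3.4, inc), 2))
```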
A companion this small may be a low mass white dwarf or main sequence star. If Regulus was spun up by mass transfer in an interacting binary, then the remnant of the donor star is probably a low mass white dwarf (Raguzova, 2001; Willems & Kolb, 2004). Indeed, the lowest mass white dwarfs are usually found in binary systems (Marsh, Dhillon, & Duck, 1995) where they lost a significant fraction of their mass, and some reach masses as low as \\(0.17M_{\\odot}\\) (Kilic et al., 2007). Models by Willems & Kolb (2004) for mass transfer during H-shell burning (their "evolutionary channel 1") often lead to remnant and gainer masses and orbital periods similar to the case of Regulus. Since Regulus and its wider companion \\(\\alpha\\) Leo B are not too old (\\(<150\\) Myr; Gerbaldi et al., 2001), a white dwarf companion would not have progressed too far along its cooling track and would probably have an effective temperature \\(>16000\\) K (Althaus, Serenelli, & Benvenuto, 2001), much higher than that of the primary B7 V star. Consequently, we might expect to observe a modest FUV flux excess if the companion is a white dwarf, and, in fact, Morales et al. (2001) find that the spectral energy distribution of Regulus is about a factor of two brighter in the 1000 - 1200 Å range than predicted by model atmospheres for a single B7 V star. We note for completeness that a neutron star companion is probably ruled out because it would require a very small inclination (see Fig. 2) and an unlikely evolutionary scenario in which only a modest amount of mass transfer occurred before the supernova explosion (no more than the present mass of the primary).
On the other hand, the companion could be a low mass, main sequence star (making Regulus one of the most extreme mass ratio binaries among the massive stars after exclusion of the massive X-ray binaries). Adopting the minimum mass, the companion would be an M4 V star that would be too faint to alter significantly the spectral energy distribution from that for the primary alone (see Fig. 4 in McAlister et al., 2005). It is very unlikely that the companion is the outer component of a once compact triple system. If the fast spin of Regulus is the result of prior mass transfer, then an M4 V star in the orbit we find would have been too close to the central binary for orbital stability (and would have likely been ejected or become a merger product). Thus, in binary models for rapid rotation, the companion cannot be a low mass main sequence star but must be the remnant of the donor.
The angular separation of the binary is probably small. If we assume masses of \\(M_{1}=3.4M_{\\odot}\\) and \\(M_{2}=0.3M_{\\odot}\\), then according to Kepler's third law, the semimajor axis is 0.35 AU. Thus, for a distance of 24.3 pc, the maximum angular separation will be approximately 15 mas, too small for detection by speckle interferometric or adaptive optics techniques. The binary could be resolved in principle by lunar occultation methods or optical long baseline interferometry, but the fact that there is no evidence of a binary from these methods is consistent with the expected faintness of the companion (Hanbury Brown, Davis, & Allen, 1974; Radick, 1981; Ridgway et al., 1982; McAlister et al., 2005). For example, the magnitude difference in the \\(K\\)-band is probably close to \\(\\triangle m\\approx 10\\) and 6 mag for the cases of a white dwarf and an M4 V star companion, respectively. Thus, the flux of the companion has no influence on the analysis of the interferometry presented by McAlister et al. (2005).
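These numbers follow directly from Kepler's third law in solar units; a short check with the masses and distance quoted above reproduces both the semimajor axis and the maximum angular separation.

```python
# a[AU]**3 = (M1 + M2)[Msun] * P[yr]**2  (Kepler's third law, solar units)
P_yr = 40.11 / 365.25
a_au = ((3.4 + 0.3) * P_yr**2) ** (1.0 / 3.0)
theta_mas = a_au / 24.3 * 1e3   # small angle: AU / pc gives arcsec; x1000 -> mas
print(round(a_au, 2), round(theta_mas, 1))   # ~0.35 AU and ~15 mas
```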
Our study has led to the discovery of a binary companion to the twenty-second brightest star in the sky, and it may, like Sirius, offer another example of a bright star that is orbited by a faint white dwarf. If the companion is a white dwarf, then it may be the closest case of a star stripped to its core by mass transfer in a close binary. The best opportunity to test the white dwarf hypothesis will come from very short wavelength observations where a hot companion may outshine the B7 V primary.
We thank the staff of Kitt Peak National Observatory for their support in obtaining these observations. This work is partially based on spectral data retrieved from the ELODIE archive at Observatoire de Haute-Provence (OHP). Additional spectroscopic data were retrieved from Ritter Observatory's public archive, which is supported by the National Science Foundation Program for Research and Education with Small Telescopes (NSF-PREST) under grant AST-0440784. The _IUE_ data presented in this paper were obtained from the Multimission Archive at the Space Telescope Science Institute (MAST). Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NAG5-7584 and by other grants and contracts. This work was also supported by the National Science Foundation under grant AST-0606861 (DG). Institutional support has been provided from the GSU College of Arts and Sciences and from the Research Program Enhancement fund of the Board of Regents of the University System of Georgia, administered through the GSU Office of the Vice President for Research. We are grateful for all this support.
## References
* Althaus, L. G., Serenelli, A. M., & Benvenuto, O. G. 2001, MNRAS, 323, 471
* Bagnulo, S., Jehin, E., Ledoux, C., Cabanac, R., Melo, C., Gilmozzi, R., & The ESO Paranal Science Operations Team 2003, Messenger, 114, 10
* Barry, D. J., Bagnuolo, W. G., Jr., & Riddle, R. L. 2002, PASP, 114, 198
* Campbell, W. W. 1928, Publ. Lick Obs., 16, 1
* Coe, M. 2000, in The Be Phenomenon in Early-Type Stars, IAU Coll. 175 (ASP Conf. Proc. 214), ed. M. A. Smith, H. F. Henrichs, & J. Fabregat (San Francisco: ASP), 656
* Ekstrom, S., Meynet, G., Maeder, A., & Barblan, F. 2008, A&A, 478, 467
* Frost, E. B., Barrett, S. B., & Struve, O. 1926, ApJ, 64, 1
* Gerbaldi, M., Faraggiana, R., & Balin, N. 2001, A&A, 379, 162
* Gies, D. R., Bagnuolo, W. G., Jr., Ferrara, E. C., Kaye, A. B., Thaller, M. L., Penny, L. R., & Peters, G. J. 1998, ApJ, 493, 440
* Gray, R. O., Corbally, C. J., Garrison, R. F., McFadden, M. T., & Robinson, P. E. 2003, AJ, 126, 2048
* Hanbury Brown, R., Davis, J., & Allen, L. R. 1974, MNRAS, 167, 121
* Johnson, H. L., & Morgan, W. W. 1953, ApJ, 117, 313
* Kaufer, A., Stahl, O., Tubbesing, S., Norregaard, P., Avila, G., Francois, P., Pasquini, L., & Pizzella, A. 1999, Messenger, 95, 8
* Kilic, M., Brown, W. R., Allende Prieto, C., Pinsonneault, M. H., & Kenyon, S. J. 2007, ApJ, 664, 1088
* Langer, N., Cantiello, M., Yoon, S.-C., Hunter, I., Brott, I., Lennon, D. J., de Mink, S. E., & Verheijdt, M. 2008, in Massive Stars as Cosmic Engines (IAU Symp. 250), ed. F. Bresolin, P. Crowther, & J. Puls (San Francisco: ASP), in press (arXiv:0803.0621)
* Lanz, T., & Hubeny, I. 2007, ApJS, 169, 83
* Lucy, L. B., & Sweeney, M. A. 1971, AJ, 76, 544
* Maintz, M., Rivinius, T., Stahl, O., Stefl, S., & Appenzeller, I. 2005, Publ. Astron. Inst. Cz., 93, 21
* Marsh, T. R., Dhillon, V. S., & Duck, S. R. 1995, MNRAS, 275, 828
* Martins, L. P., Gonzalez Delgado, R. M., Leitherer, C., Cervino, M., & Hauschildt, P. 2005, MNRAS, 358, 49
* Maunder, E. W. 1892, Observatory, 15, 393
* Mayor, M., et al. 2003, Messenger, 114, 20
* McAlister, H. A., et al. 2005, ApJ, 628, 439
* Mellor, L. L. 1923, Publ. Michigan Obs., 3, 61
* Morales, C., et al. 2001, ApJ, 552, 278
* Morbey, C. L., & Brosterhus, E. B. 1974, PASP, 86, 455
* Morrison, N. D., Knauth, D. C., Mulliss, C. L., & Lee, W. 1997, PASP, 109, 676
* Moultaka, J., Ilovaisky, S. A., Prugniel, P., & Soubiran, C. 2004, PASP, 116, 693
* Palmer, D. R., Walker, E. N., Jones, D. H. P., & Wallis, R. E. 1968, R. Obs. Bull., 135, 385
* Penny, L. R., Gies, D. R., & Bagnuolo, W. G., Jr. 1997, ApJ, 483, 439
* Peters, G. J., Gies, D. R., Grundstrom, E. D., & McSwain, M. V. 2008, ApJ, submitted
* Radick, R. 1981, AJ, 86, 1685
* Raguzova, N. V. 2001, A&A, 367, 848
* Ridgway, S. T., Jacoby, G. H., Joyce, R. R., Seigel, M. J., & Wells, D. C. 1982, AJ, 87, 680
* Roberts, D. H., Lehar, J., & Dreher, J. W. 1987, AJ, 93, 968
* Scargle, J. D. 1982, ApJ, 263, 835
* Stefl, S., Hummel, W., & Rivinius, Th. 2000, A&A, 358, 208
* Stellingwerf, R. F. 1978, ApJ, 224, 953
* Tokovinin, A. A., & Smekhov, M. G. 2002, A&A, 382, 118
* Valdes, F., Gupta, R., Rose, J. A., Singh, H. P., & Bell, D. J. 2004, ApJS, 152, 251
* van Leeuwen, F. 2007, Hipparcos, the New Reduction of the Raw Data (ASSL 350) (Dordrecht: Springer)
* Weselak, T., Galazutdinov, G., Musaev, F., & Krelowski, J. 2008, A&A, 479, 149
* Willems, B., & Kolb, U. 2004, A&A, 419, 1057
* Zucker, S. 2003, MNRAS, 342, 1291
\\begin{table}
\begin{tabular}{l c c c c l} \hline \hline Run & Dates & Range & Resolving Power & & Observatory/Telescope/ \\ Number & (BY) & (Å) & (\(\lambda/\triangle\lambda\)) & \(N\) & Spectrograph \\ \hline
1 & 1989.3 & 4453 – 4597 & 6280 & 6 & KPNO/0.9m/Coude \\\\
2 & 2000.9 & 6445 – 6700 & 12100 & 30 & KPNO/0.9m/Coude \\\\
3 & 2004.8 & 6466 – 6700 & 9500 & 16 & KPNO/0.9m/Coude \\\\
4 & 2005.9 & 4240 – 4580 & 10300 & 2 & KPNO/0.9m/Coude \\\\
5 & 2006.8 & 6434 – 6700 & 7600 & 2 & KPNO/0.9m/Coude \\\\
6 & 2006.8 & 4240 – 4580 & 10200 & 2 & KPNO/0.9m/Coude \\\\
7 & 2000.9 – 2002.4 & 3780 – 5700, 5832 – 8483 & 20000 & 46 & Ondřejov/2m/HEROS \\\\
8 & 1999.1 – 2000.3 & 6500 – 6700 & 14000 & 6 & HLCO/MTT 1m/Ebert-Fastie \\\\
9 & 1999.4 & 5832 – 8483 & 20000 & 1 & La Silla/0.5m/HEROS \\\\
10 & 2004.2 – 2007.1 & 6527 – 6596 & 26000 & 37 & Ritter/1m/Echelle \\\\
11 & 1996.3 & 4000 – 5000 & 34100 & 2 & OHP/1.9m/Elodie \\\\
12 & 2003.0 & 3760 – 4500, 4800 – 5100 & 80000 & 1 & VLT/8m/UVES \\\\
13 & 2004.0 & 3800 – 5000 & 120000 & 2 & La Silla/3.6m/HARPS \\\\
14 & 2006.1 – 2007.1 & 3750 – 5150 & 48000 & 3 & La Silla/2.2m/FEROS \\\\
15 & 1979.0 – 1995.3 & 1200 – 1900 & 10000 & 12 & _IUE_/0.45m/Echelle (SWP) \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Journal of Spectroscopy
\\begin{table}
\\begin{tabular}{c c c c c c} \\hline \\hline \\multicolumn{1}{c}{ Date} & \\multicolumn{1}{c}{Orbital} & \\multicolumn{1}{c}{\\(V_{r}\\)} & \\multicolumn{1}{c}{\\(\\triangle\\,V_{r}\\)} & \\multicolumn{1}{c}{(\\(O-C\\))} & \\multicolumn{1}{c}{Run} \\\\ \\multicolumn{1}{c}{(HJD\\(-\\)2,400,000)} & \\multicolumn{1}{c}{Phase} & \\multicolumn{1}{c}{(km s\\({}^{-1}\\))} & \\multicolumn{1}{c}{(km s\\({}^{-1}\\))} & \\multicolumn{1}{c}{(km s\\({}^{-1}\\))} & \\multicolumn{1}{c}{Number} \\\\ \\hline
43881.848 & 0.933 & 15.3 & 3.2 & 4.0 & 15 \\\\
44333.437 & 0.191 & 9.9 & 2.8 & 2.9 & 15 \\\\
44333.458 & 0.191 & 17.4 & 3.5 & 10.4 & 15 \\\\
44529.050 & 0.068 & 26.1 & 3.7 & 14.8 & 15 \\\\
45360.745 & 0.802 & 11.1 & 2.9 & 4.4 & 15 \\ \hline \end{tabular} Note. – Table 2 is available in its entirety in the electronic edition. A portion is shown here for guidance regarding its form and content.
\\end{table}
Table 2: Radial Velocity Measurements
Figure 1: The observed and derived radial velocity curves. The open circles indicate the _IUE_ UV measurements while the solid circles represent the optical spectra measurements.
Figure 2: The mass diagram constraints for the mass of Regulus (\\(M_{1}\\)) and its faint companion (\\(M_{2}\\)). Each solid line gives the relation from the mass function for the orbital inclination indicated on the right hand side. The vertical dotted lines show the probable mass range for Regulus (McAlister et al. 2005).
ONLINE MATERIAL
Title: A Spectroscopic Orbit for Regulus
Authors: Gies et al.
Table: Radial Velocity Measurements
===========================================================
Byte-by-byte Description of file: datafile2.txt
===========================================================
Bytes Format Units Label Explanations
===========================================================
1- 10 F10.3 d Date HJD-2400000
11- 16 F6.3 -- Phase Orbital phase from ascending node
17- 21 F5.1 km/s HRV Heliocentric radial velocity
22- 26 F5.1 km/s e_HRV Mean error on HRV
27- 31 F5.1 km/s O-C Observed minus calculated residual
32- 34 I3 -- Run Run number from Table 1
===========================================================
43881.848 0.933 15.3 3.2 4.0 15
44333.437 0.191 9.9 2.8 2.9 15
44333.458 0.191 17.4 3.5 10.4 15
44529.050 0.068 26.1 3.7 14.8 15
45360.745 0.802 11.1 2.9 4.4 15
45360.787 0.803 13.3 3.1 6.5 15
45360.817 0.803 -3.1 2.8 -9.9 15
45377.118 0.210 7.9 2.9 1.7 15
46184.477 0.337 -6.6 3.2 -6.9 15
[MISSING_PAGE_POST]
We present a radial velocity study of the rapidly rotating B-star Regulus that indicates the star is a single-lined spectroscopic binary. The orbital period (40.11 d) and probable semimajor axis (0.35 AU) are large enough that the system is not interacting at present. However, the mass function suggests that the secondary has a low mass (\\(M_{2}>0.30M_{\\odot}\\)), and we argue that the companion may be a white dwarf. Such a star would be the remnant of a former mass donor that was the source of the large spin angular momentum of Regulus itself.
binaries: spectroscopic -- stars: early-type -- stars: individual (Regulus, \\(\\alpha\\) Leo)
arxiv-format/0807_4040v2.md | # The time horizon and its role in multiple species conservation planning
Florian HARTIG
[email protected]
Martin DRECHSLER
[email protected] UFI2 - Helmholtz Centre for Environmental Research, Department of Ecological Modelling, Permoserstr. 15, 04318 Leipzig, Germany
######
keywords: conservation planning, discounting, multiple species, objective function, time horizon, time preferences +
Footnote †: journal: Biological Conservation
,,
single-species conservation, and it is believed that the same holds true for multi-species conservation. Another reason may be that the controversy about discounting ecological values has been considered a social science issue much more than an ecological question. Nevertheless, our results show that excluding this discussion from the scope of conservation planning may result in misleading and possibly unintended conservation recommendations.
In this paper, we analyze three typical objective functions which are used in the literature with respect to their sensitivity to the choice of the time horizon. We find that, for additive functions, this choice may have a crucial impact on the resulting conservation decisions. We conclude that the choice of a time horizon is an inevitable part of decision making. Its influence must be borne in mind and should be explicitly communicated when determining conservation targets.
## 2 Methods and assumptions
### The time horizon and annual survival
Under stationary environmental conditions (no trends in population parameters such as carrying capacity, so that the population is in a quasi-stationary state), the probability of surviving until time \\(T\\) is given by
\[p(T)=e^{-T/T_{m}} \tag{1}\]
where \\(T_{m}\\) is the mean time to extinction (Grimm and Wissel, 2004), measured in years. The annual survival probability is \\(x=exp(-1/T_{m})\\). With eq. 1, we can then express the survival of a species until time \\(T\\) by
\\[p(T)=x^{T} \\tag{2}\\]
where \\(x\\) denotes the annual survival probability as given before. Using this as the basis of our evaluation, we should first note a trivial, but crucial fact: The survival probability \\(p\\) decreases nonlinearly (exponentially) with the time horizon \\(T\\). For a stationary single species case under stationary external conditions, however, this nonlinearity does not change ratings based on the survival probability \\(p\\); given that a conservation option has a higher \\(p(T_{0})\\) than another option for a time horizon \\(T_{0}\\), it will also have a higher \\(p(T)\\) for any other time horizon \\(T\\).
### Multi-species objective functions
For the case of multiple species, knowledge of single species survival probabilities is not enough to compare conservation options. As an example, imagine the case of two species, and two conservation alternatives, one which yields survival probabilities of \\(p_{1}=70\\%\\) and \\(p_{2}=90\\%\\), and another which results in \\(p_{1}=80\\%\\) and \\(p_{2}=80\\%\\). Which option is to be preferred? The expectation value of the number of species surviving, \\(p_{1}+p_{2}\\), is the same for both cases. Yet, the second conservation alternative shows a more even distribution of survival probabilities between species.
The literature has approached the problem of multi-species survival mainly with two classes of objective functions: additive and multiplicative ones (see Nicholson and Possingham, 2006). In its most simple form, an additive objective function for \\(n\\) species is given by the sum of the single species survival probabilities \\(p_{i}\\):
\\[\\sum_{i=1}^{n}p_{i} \\tag{3}\\]
Mathematically, the sum represents the expected value of the number of species surviving. Examples of studies using additive functions are Faith and Walker (1996); Polasky et al. (2001); Nicholson et al. (2006). A simple multiplicative function is given by the product of all survival probabilities:
\\[\\prod_{i=1}^{n}p_{i} \\tag{4}\\]
This product represents the probability that all species survive (see e.g. Bevers et al., 1995). Multiplicative objective functions tend to favor an even distribution of survival probabilities, whereas additive objectives generally do not (Nicholson and Possingham, 2006). In the context of biodiversity, such an evenness objective is often considered advantageous. However, it is also possible to include evenness objectives in additive objective functions (see e.g. Arponen et al., 2005; Moilanen, 2007). As an example of such a function, we chose the p-norm:
\\[\\left(\\sum_{i=1}^{n}p_{i}^{\\alpha}\\right)^{1/\\alpha} \\tag{5}\\]
This function weights each single species survival probability with \\(p_{i}^{\\alpha}\\), and then adds these values up. For \\(0<\\alpha<1\\), the weighting favors an even distribution of survival probabilities, and for \\(\\alpha=1\\) it is identical to the additive function. In a broad sense, eq. 5 resembles the Shannon index, which is often used to express biodiversity as a function of species abundance. A summary of the three objective functions is given in Table 1.
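The three functions of Table 1 are straightforward to implement; the sketch below also revisits the two-species example from above, showing that the additive objective cannot distinguish the two alternatives, while the multiplicative and p-norm (\(\alpha<1\)) objectives prefer the more even one.

```python
import numpy as np

def additive(p):                 # eq. 3: expected number of survivors
    return np.sum(p)

def multiplicative(p):           # eq. 4: probability that all survive
    return np.prod(p)

def p_norm(p, alpha=0.5):        # eq. 5: 0 < alpha < 1 favors evenness
    return np.sum(np.asarray(p) ** alpha) ** (1.0 / alpha)

for p in ([0.7, 0.9], [0.8, 0.8]):
    print(p, additive(p), round(multiplicative(p), 2), round(p_norm(p), 3))
```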
### The relation between costs and species survival
Ideally, the question of conservation priorities would not have to be asked, and we would simply provide each species with sufficient and adequate resources and habitat for their survival. Unfortunately, conservation is only one of many competing human ambitions. In the majority of situations, systematic conservation planning is subject to a limited budget \(B\), and it has to be decided how this budget is spent most effectively (Naidoo et al., 2006; Wilson et al., 2006).
This decision is further complicated because the relationship between costs spent on conservation and resulting change in population survival is often not linear. On the one hand, it is very frequently found and assumed that the costs for additional conservation increase with increasing conservation efforts (see e.g. Eiswerth and Haney, 2001; Drechsler and Burgman, 2004; Naidoo et al., 2006). For example, land may get increasingly scarce and therefore more expensive when the areas used for conservation are increased (Drechsler and Watzold, 2001; Armsworth et al., 2006; Polasky, 2006). On the other hand, conservation efforts often need to cross certain thresholds, such as the minimum viable population size, to become effective (With and Crist, 1995; Hanski et al., 1996; Fahrig, 2001).
A function which may conveniently exhibit all these characteristics and which is therefore often used to model threshold situations is the sigmoid function (Fig. 1). We use this function to illustrate our findings; however, all general results of this paper will not depend on the particular functional form, but only on general curvature properties of the cost-survival function. For now, let us assume an amount \(b_{i}\) of our conservation budget \(B\) will increase the annual survival rate \(x_{i}\) of the i-th single species according to
\\[x_{i}=\\frac{1}{1+e^{-a_{i}\\cdot(b_{i}+c_{i})}} \\tag{6}\\]
where \\(a_{i}\\) controls the steepness of the threshold and \\(c_{i}\\) represents the initial state of the species, i.e. the value which is achieved without any budget expenditures. Eq. 6 grows convexly (more than linearly, Fig. 1A) below the threshold (when \\(c_{i}+b_{i}<0\\)) and concavely (less than linearly, Fig. 1B) above the threshold (when \\(c_{i}+b_{i}>0\\)). Note that for sufficiently small steepness \\(a\\) (\\(a\\ll 1/B\\)), the cost-survival function can be considered approximately linear (Fig. 1C), a fact that will be used in the following analysis. Furthermore, we assume that species do not interact and do not share any common resources or habitats. Thus, \\(x_{i}\\) does not depend on \\(b_{j}\\) with \\(i\
eq j\\).
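In code, eq. 6 is a one-liner; it is used as the cost-survival function in the numerical sketches below.

```python
import numpy as np

def annual_survival(b, a, c):
    """Annual survival rate x_i as a function of spending b (eq. 6)."""
    return 1.0 / (1.0 + np.exp(-a * (b + c)))
```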
### The optimal conservation decision
To compare the conservation decisions which would be made based on the discussed objective functions (eqs. 3, 4, 5) and different time horizons \\(T\\), we assume the following:
A landscape planner has to split a budget \\(B\\) between two species. He spends \\(b_{1}\\) on species 1 and \\(b_{2}=B-b_{1}\\) on species 2. We call the case where most of the budget is used for one species an uneven distribution, and we call the case where the budget is spent evenly among the two species an even distribution. The annual survival probability of each species changes with \\(b_{i}\\) according to eq. 6. The survival probability after the time horizon \\(T\\) is given by eq. 2. Inserting this into the three objective functions (additive, multiplicative, p-norm), we calculate the value of the objective functions (the score) for time horizons between 1 and 100 years, \\(b_{1}\\) ranging from 0% \\(-\\) 100% of the budget \\(B\\), and different functional relationships between annual survival probability \\(x_{i}\\) and budget expenditure \\(b_{i}\\).
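The following sketch reproduces this numerical experiment for one illustrative case (the parameter values of Fig. 2); for each \(T\) it returns the budget share \(b_{1}\) with the highest score, which for the p-norm jumps from the even split to an uneven one once \(T\) exceeds a critical value.

```python
import numpy as np

def annual_survival(b, a, c):                 # eq. 6
    return 1.0 / (1.0 + np.exp(-a * (b + c)))

def best_split(objective, a=0.4, c=-0.1, B=0.5, T_max=100):
    """For each T = 1..T_max, the b1 that maximizes the given objective."""
    b1 = np.linspace(0.0, B, 201)
    x1 = annual_survival(b1, a, c)
    x2 = annual_survival(B - b1, a, c)
    best = []
    for T in range(1, T_max + 1):
        scores = objective(x1**T, x2**T)      # p_i = x_i**T (eq. 2)
        best.append(b1[np.argmax(scores)])
    return np.array(best)

additive = lambda p1, p2: p1 + p2
multiplicative = lambda p1, p2: p1 * p2
p_norm = lambda p1, p2, alpha=0.013: (p1**alpha + p2**alpha) ** (1.0 / alpha)

print(best_split(p_norm)[::10])   # even split (b1 = B/2) early, uneven late
```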
## 3 Results
Analyzing the model, it becomes evident that the effect of the time horizon depends on the relation between budget expenditures and species survival. To illustrate this, we discuss the results for four different scenarios: First, we present the results for survival of both species depending linearly, concavely (less than linearly), and convexly (more than linearly) on budget expenditures. Finally, we discuss a case where the two species are in a different initial state and thus react differently to budget expenditures.
\\begin{table}
\begin{tabular}{l l} \hline \hline Function & Objective \\ \hline \(\sum_{i=1}^{n}p_{i}\) & Expected number of surviving species after \(T\) \\ \(\prod_{i=1}^{n}p_{i}\) & Probability of all species surviving after \(T\) \\ \(\left(\sum_{i=1}^{n}p_{i}^{\alpha}\right)^{1/\alpha}\) & Sum of weighted survival probabilities \\ \hline \hline \end{tabular}
\\end{table}
Table 1: Overview of the analyzed objective functions.
Figure 1: Relationship between budget expenditure and survival of a single species for eq. 6 with \\(a=5,c=0\\). A: Below the threshold, eq. 6 is convex B: Beyond the threshold, eq. 6 is concave. C: For \\(a\\ll B^{-1}\\), eq. 6 is approximately linear.
### Linear cost-survival functions
For species survival depending linearly on budget expenditure (as, e.g., in Fig. 1C), we obtain the following scores as a function of \\(T\\) and the budget distribution: For the additive objective, we find the highest scores for uneven distributions, spending all of the budget on one of the two species. In contrast, the multiplicative objective favors an even distribution throughout all choices of the time horizon \\(T\\). Finally, the p-norm favors an even distribution for short time horizons until a critical time \\(T_{c}\\). For any \\(T\\) larger than \\(T_{c}\\), uneven distributions are favored. The results are displayed in Fig. 2.
### Convex cost-survival functions
Convex cost-survival functions naturally favor uneven budget distributions, owing to the more than linear growth of survival with the budget expenditure. For moderate convexity, however, the results still resemble the linear case (Fig. 2) very closely. Only for a very strong convexity may the balancing influence of the multiplicative and the p-norm function eventually be overruled, and all three objectives favor an uneven distribution for any time horizon \\(T>1\\).
### Non-even baseline values
Finally, we show a case with different initial states for the two species: Species 1 has a poor initial state of conservation below the threshold (convex cost-survival, see Fig. 1A), and species 2 is above the threshold and in a much better initial state (concave cost-survival, see Fig. 1B). The resulting scores are shown in Fig. 4: Both for the additive and the p-value functions, the score favors a concentration on the threatened species 1 for short time horizons and a concentration on the more stable species 2 for long time horizons. Under a multiplicative objective, conservation budgets are always concentrated on the threatened species.
Figure 3: Concave cost-survival function: The three 3-d plots show the score for the additive, the multiplicative, and the p-norm objective function. On the x-axis (right) the proportion of the budget assigned to species 1, on the y-axis (left) the time horizon \\(T\\) in years, and on the z-axis (upwards) the score of the respective conservation option. Parameters: \\(a=8,B=1,c=0,\\alpha=0.5\\). For each \\(T\\), the z values are scaled as in Fig. 2 to allow the graph to be more easily read.
Figure 2: Linear cost-survival function: The three 3-d plots show the score for the additive, the multiplicative, and the p-norm objective function. On the x-axis (right) the proportion of the budget assigned to species 1, on the y-axis (left) the time horizon \(T\) in years, and on the z-axis (upwards) the score of the respective conservation option. Parameters: \(a=0.4,B=0.5,c=-0.1,\alpha=0.013\). For each \(T\), the z values are scaled to a reference value (the score that would be obtained by choosing \(b_{1}=0\) for additive and p-norm functions, and \(b_{1}=0.5\) for the multiplicative function) to allow the graph to be more easily read. Otherwise, cases of high \(T\) would hardly be visible because survival probabilities here are naturally lower than for cases of small \(T\).
### Generalization of the results
Are these results general, or only valid for a small or unreasonable parameter range? As we show in Appendix A, conservation decisions with additive functions like eqs. 3 and 5 are in fact sensitive to the time horizon under quite general conditions, namely that either: a) The functional relation between budget expenditures and survival is sufficiently concave; or b) The multi-species objective function puts a sufficiently strong weight on even survival probabilities and the relationship between costs and survival is concave, linear, or sufficiently weakly convex.
Equally important, however, is whether such a sensitivity of conservation decisions will appear in real world situations. To examine the sensitivity of the model to changes in the parameters, we solved numerically for the time where conservation decisions shift between even and uneven budget distributions. The results (Appendix B) show that the parameter range which yields a switch within the range of typical choices for \\(T\\) is fairly large.
In contrast to additive objective functions, we could not find any impact of \\(T\\) whatsoever for the case of the multiplicative function. This is no coincidence, but can easily be understood. Since the power operation commutes with the multiplication, a conservation alternative that maximizes \\(\\prod_{i}x_{i}\\) also maximizes \\(\\prod_{i}p_{i}\\) for any \\(T\\). Therefore, a simple multiplicative function with a static budget is not influenced by the choice of the time horizon. A formal proof of this is given in Appendix C.
## 4 Discussion
Evaluating multi-species survival probabilities requires the choice of an objective function which transforms survival probabilities into a single value. Different forms of objective functions have been used in the literature, some of which maximize the expected number of surviving species (additive functions), whereas others also emphasize an even distribution of survival probabilities among species (multiplicative functions or weighted additive functions).
Our results show that the time horizon at which species survival probabilities are calculated has a crucial impact on conservation decisions with additive functions when at least one of the following two assumptions is fulfilled: a) The functional relation between budget expenditures and survival is sufficiently concave; or b) The multi-species objective function puts a sufficiently strong weight on even survival probabilities and the relationship between costs and survival is concave, linear, or sufficiently weakly convex. For our simple case of two species, conservation decisions based on such functions change drastically when the time horizon crosses some critical value \\(T_{c}\\).
The underlying reason behind this is that survival probability drops exponentially with the time horizon. While a concave cost-survival relationship or a concave objective function favor an even budget distribution for short time horizons, the exponential decay makes small differences very large in the long run and therefore eventually shifts the highest score to uneven distributions when the time horizon \\(T\\) is increased. This time-dependence of the indicator \"survival probability\" constitutes a major difference to other indicators, such as expected coverage, which are used for conservation planning.
Our sensitivity analysis revealed that a crucial influence of the time horizon appears for a large range of realistic parameter combinations and functions. Therefore, a potentially drastic influence of \(T\) on conservation decisions for practical cases cannot be ruled out. Only multiplicative functions showed no response to the choice of \(T\) at all. This is no coincidence, but a fundamental property of multiplicative functions, as we showed. However, we do not believe that this is necessarily an argument in favor of multiplicative functions. A multiplicative function is certainly useful when the survival of all species is the main goal, but its absolute insistence on evenness can make it a dangerous choice when the budget is not large enough to conserve all species. For such cases, it may be that a distribution of the budget that maximizes a multiplicative objective minimizes the expected number of species surviving (e.g. Fig. 2).
Figure 4: Different initial states: The three 3-d plots show the score for the additive, the multiplicative, and the p-norm objective function. On the x-axis (right) the proportion of the budget assigned to species 1, on the y-axis (left) the time horizon \\(T\\) in years, and on the z-axis (upwards) the score of the respective conservation option. Parameters: \\(a=7,B=1,c_{1}=-0.26,c_{2}=+0.26,\\alpha=0.5\\). The values for the two additive objective functions are scaled as in Fig. 2, the values for the multiplicative objective are scaled at each \\(T\\) to the value obtained by \\(b_{1}=0.82\\).
In conclusion, we believe that the influence of time preferences on conservation decisions has not been appreciated enough in the past. This is even more so given that much recent research is devoted to dynamic problems, which are by their nature strongly affected by the choice of the time horizon (Meir et al., 2004; Drechsler, 2005; McBride et al., 2007; Pressey et al., 2007). As we increasingly realize that the future challenges for conservation, such as climate and global change, are dynamic, time preferences will play an increasing role in conservation decisions. Thus, the time horizon must be acknowledged as a fundamental part of the objective function. It should be selected with care, and its influence should be analyzed and communicated when presenting conservation recommendations.
But what is the right time horizon? Ultimately, the choice of a time horizon is a normative decision. It cannot be decided scientifically, but must be developed in interaction with stakeholders and society. To establish such an interaction, the influence of the time horizon has to be determined and openly communicated.
## 5 Acknowledgements
The authors would like to thank Silvia Wissel, Karin Johst, Volker Grimm and Atte Moilanen as well as two anonymous reviewers for helpful comments on the manuscript.
## Appendix A Proof of the time-dependence of additive functions
Assume we have two species with equal cost-survival functions. We get an even distribution as a unique solution if the summands of the objective function which are given by
\\[(x(b))^{\\alpha\\cdot T}\\] (A.1)
are concave functions of \\(b\\) on the whole domain accessible with the budget \\(B\\). Accordingly, we get an uneven distribution as a unique solution if eq. A.1 is convex on the whole domain. Assuming that the cost-survival function is a smooth function of \\(b\\), all derivatives are bounded and there will be a \\(T_{min}\\) such that eq. A.1 is concave for all \\(T<T_{min}\\) and a \\(T_{max}\\) such that eq. A.1 is convex for all \\(T>T_{max}\\). Thus, the optimal budget distribution must switch or exhibit multiple solutions between \\(T_{min}\\) and \\(T_{max}\\). The same argument also applies for species with different cost-survival functions with the addition that optimal points may slightly shift position as can be seen in Fig. 4.
Hence, there will always be a range of \\(T\\) at which eq. A.1 changes from a concave to a convex function and we may observe a dramatic shift of optimal conservation decisions. For practical considerations, however, this will only be of relevance if the critical time \\(T_{c}\\) where the highest score switches from even to uneven distributions, is within the range of typical choices for the time horizon \\(T\\) (30 to 100 years). From eq. A.1, we see directly that this can only be the case if there exists a \\(T\\) within the considered range such that a) \\(x(b)\\) is sufficiently concave to compensate the convex influence of \\(\\alpha\\cdot T\\) in the exponent of eq. A.1 or b) \\(x(b)\\) is concave, linear, or sufficiently weakly convex and \\(\\alpha<1\\) is sufficiently small to make eq. A.1 linear within the considered range.
## Appendix B Sensitivity Analysis
To get an estimate of the sensitivity of the time \(T_{c}\), at which the optimal budget distribution changes, to changes in the parameters, let us assume we have an additive objective function, equal initial states \(c_{i}\), and equal concave cost-survival functions. Then \(T_{c}\) will be approximately at the time \(T\) where the score of a totally uneven distribution of the budget equals the score of an even distribution:
\\[x(B)^{T_{c}}+x(0)^{T_{c}}=2\\cdot x(B/2)^{T_{c}}\\] (B.1)
Here, \\((B,B/2,0)\\) refers to the proportion of the budget \\(B\\) to be inserted in the cost-survival function eq. 6. We solved eq. B.1 numerically with the sigmoid function eq. 6. Fig. 5 shows that the range of parameters which yield times \\(T_{c}\\) between \\(1-100\\) years is fairly large.
## Appendix C Proof of the time-independence of a multiplicative score
The multiplicative score eq. 4 can be rewritten as
\\[\\prod_{i}p_{i}=\\prod_{i}(x_{i})^{T}=\\left(\\prod_{i}x_{i}\\right)^{T}\\] (C.1)
Figure 5: \\(T_{c}\\), the time horizon where the highest score changes from an even to an uneven budget distribution as functions of \\(a\\), \\(B\\) and \\(c\\). Other parameters: \\(c=0\\) (left panel), \\(a=5\\) (right panel). The cost-survival function corresponding to the right panel (\\(a=5\\)) is identical with Fig. 1.
As the power operations commute with the multiplication, we can factor out the power operation. The latter is strictly monotonous, hence an option which maximizes \\(\\prod_{i}x_{i}\\) also maximizes \\(\\prod_{i}p_{i}\\) for any \\(T\\).
## References
* Armsworth et al. (2006) Armsworth, P. R., Daily, G. C., Kareiva, P., Sanchirico, J. N., 2006. From the cover: Land market feedbacks can undermine biodiversity conservation. Proc. Natl. Acad. Sci. U. S. A. 103 (14), 5403-5408. doi: 10.1073/pnas.0505278103
* Arponen et al. (2005) Arponen, A., Heikkinen, R. K., Thomas, C. D., Moilanen, A., 2005. The value of biodiversity in reserve selection: Representation, species weighting, and benefit functions. Conserv. Biol. 19 (6), 2009-2014. doi: 10.1111/j.1523-1739.2005.00218.x
* Balvanera et al. (2001) Balvanera, P., Daily, G. C., Ehrlich, P. R., Ricketts, T. H., Bailey, S.-A., Kark, S., Kremen, C., Pereira, H., 2001. Conserving biodiversity and ecosystem services. Science 291 (5511), 2047-. doi: 10.1126/science.291.5511.2047
* Bevers et al. (1995) Bevers, M., Hof, J., Kent, B., Raphael, M. G., 1995. Sustainable forest management for optimizing multispecies wildlife habitat: a coastal douglas-fir example. Nat. Resour. Model. 9, 1-23.
* Cabeza and Moilanen (2003) Cabeza, M., Moilanen, A., 2003. Site-selection algorithms and habitat loss. Conserv. Biol. 17 (5), 1402-1413. doi: 10.1046/j.1523-1739.2003.01421.x
* Drechsler (2005) Drechsler, M., 2005. Probabilistic approaches to scheduling reserve selection. Biol. Conserv. 122 (2), 253-262. doi: 10.1016/j.biocon.2004.07.015
* Drechsler and Burgman (2004) Drechsler, M., Burgman, M. A., 2004. Combining population viability analysis with decision analysis. Biodivers. Conserv. 13 (1), 115-139. doi: 10.1023/B:BIOC.000004315.09433.f6
* Drechsler and Watzold (2001) Drechsler, M., Watzold, F., 2001. The importance of economic costs in the development of guidelines for spatial conservation management. Biol. Conserv. 97 (1), 51-59. doi: 10.1016/S0006-3207(00)00099-9
* Eiswerth and Haney (2001) Eiswerth, M. E., Haney, J. C., 2001. Maximizing conserved biodiversity: why ecosystem indicators and thresholds matter. Ecol. Econ. 38 (2), 259-274. doi: 10.1016/S0921-8009(01)00166-5
* Fahrig (2001) Fahrig, L., 2001. How much habitat is enough? Biol. Conserv. 100 (1), 65-74. doi: 10.1016/S0006-3207(00)00208-1
* Faith and Walker (1996) Faith, D. P., Walker, P. A., 1996. Integrating conservation and development: effective trade-offs between biodiversity and cost in the selection of protected areas. Biodivers. Conserv. 5 (4), 431-446. doi: 10.1007/BF00056389
* Grimm and Wissel (2004) Grimm, V., Wissel, C., 2004. The intrinsic mean time to extinction: a unifying approach to analysing persistence and viability of populations. Oikos 105 (3), 501-511. doi: 10.1111/j.0030-1299.2004.12606.x
* Guisan and Thuiller (2005) Guisan, A., Thuiller, W., 2005. Predicting species distribution: offering more than simple habitat models. Ecol. Lett. 8 (9), 993-1009. doi: 10.1111/j.1461-0248.2005.00792.x
* Hanski et al. (1996) Hanski, I., Moilanen, A., Gyllenberg, M., 1996. Minimum viable metapopulation size. Am. Nat. 147 (4), 527-541.
* Heal (2007) Heal, G., 2007. Discounting: A review of the basic economics. U. Chicago Law Rev. 74 (1), 59-77.
* Margules and Pressey (2000) Margules, C. R., Pressey, R. L., 2000. Systematic conservation planning. Nature 405 (6783), 243-253. doi: 10.1038/35012251
* McBride et al. (2007) McBride, M. F., Wilson, K. A., Bode, M., Possingham, H. P., 2007. Incorporating the effects of socioeconomic uncertainty into priority setting for conservation investment. Conserv. Biol. 21 (6), 1463-1474. doi: 10.1111/j.1523-1739.2007.00832.x
* Meir et al. (2004) Meir, E., Andelman, S., Possingham, H. P., 2004. Does conservation planning matter in a dynamic and uncertain world? Ecol. Lett. 7 (8), 615-622. doi: 10.1111/j.1461-0248.2004.00624.x
* Moilanen (2007) Moilanen, A., 2007. Landscape zonation, benefit functions and target-based planning: Unifying reserve selection strategies. Biol. Conserv. 134 (4), 571-579. doi: 10.1016/j.biocon.2006.09.008
* Naidoo et al. (2006) Naidoo, R., Balmford, A., Ferraro, P. J., Polasky, S., Ricketts, T. H., Rouget, M., 2006. Integrating economic costs into conservation planning. Trends in Ecology & Evolution 21 (12), 681-687. doi: 10.1016/j.tree.2006.10.003
* Nicholson and Possingham (2006) Nicholson, E., Possingham, H. P., 2006. Objectives for multiple-species conservation planning. Conserv. Biol. 20 (3), 871-881. doi: 10.1111/j.1523-1739.2006.00369.x
* Nicholson et al. (2006) Nicholson, E., Westphal, M. I., Frank, K., Rochester, W. A., Pressey, R. L., Lindenmayer, D. B., Possingham, H. P., 2006. A new method for conservation planning for the persistence of multiple species. Ecol. Lett. 9 (9), 1049-1060. doi: 10.1111/j.1461-0248.2006.00956.x
* Polasky (2006) Polasky, S., 2006. You can't always get what you want: Conservation planning with feedback effects. P. Natl. Acad. Sci. Usa. 103 (14), 5245-5246. doi: 10.1073/pnas.0601348103
* Polasky et al. (2001) Polasky, S., Camm, J. D., Garber-Yonts, B., 2001. Selecting biological reserves cost-effectively: An application to terrestrial vertebrate conservation in oregon. Land. Econ. 77 (1), 68-78.
* Pressey et al. (2007) Pressey, R. L., Cabeza, M., Watts, M. E., Cowling, R. M., Wilson, K. A., 2007. Conservation planning in a changing world. Trends in Ecology & Evolution 22 (11), 583-592. doi: 10.1016/j.tree.2007.10.001
* Rabl (1996) Rabl, A., 1996. Discounting of long-term costs: What would future generations prefer us to do? Ecol. Econ. 17 (3), 137-145. doi: 10.1016/S0921-8009(96)80002-4
* Roberts et al. (2003) Roberts, C. M., Branch, G., Bustamante, R. H., Castilla,
* Svancara et al. (2005) Svancara, L. K., Brannon, R., Scott, J. M., Groves, C. R., Noss, R. F., Pressey, R. L., 2005. Policy-driven versus evidence-based conservation: A review of political targets and biological needs. Bioscience 55 (11), 989-995.
* Weitzman (1998) Weitzman, M. L., 1998. Why the far-distant future should be discounted at its lowest possible rate,. J. Environ. Econ. Manage. 36 (3), 201-208. doi: 10.1006/jeem.1998.1052
* Wiersma and Nudds (2006) Wiersma, Y. F., Nudds, T. D., 2006. Conservation targets for viable species assemblages in canada: Are percentage targets appropriate? Biodivers. Conserv. 15 (14), 4555-4567.
* Williams and Araujo (2000) Williams, P. H., Araujo, M. B., 2000. Using probability of persistence to identify important areas for biodiversity conservation. P. Roy. Soc. Lond. B. Bio. 267 (1456), 1959-1966. doi: 10.1098/rspb.2000.1236
* Williams and Araujo (2002) Williams, P. H., Araujo, M. B., 2002. Apples, oranges, and probabilities: Integrating multiple factors into biodiversity conservation with consistency. Environ. Model. Assess. 7 (2), 139-151. doi: 10.1023/A:1015657917928
* Wilson et al. (2006) Wilson, K. A., McBride, M. F., Bode, M., Possingham, H. P., 2006. Prioritizing global conservation efforts. Nature 440 (7082), 337-340. doi: 10.1038/nature04366
| Survival probability within a certain time horizon \\(T\\) is a common measure of population viability. The choice of \\(T\\) implicitly involves a time preference, similar to economic discounting: Conservation success is evaluated at the time horizon \\(T\\), while all effects that occur later than \\(T\\) are not considered. Despite the obvious relevance of the time horizon, ecological studies seldom analyze its impact on the evaluation of conservation options. In this paper, we show that, while the choice of \\(T\\) does not change the ranking of conservation options for single species under stationary conditions, it may substantially change conservation decisions for multiple species. We conclude that it is of crucial importance to investigate the sensitivity of model results to the choice of the time horizon or other measures of time preference when prioritizing biodiversity conservation efforts. | Summarize the following text. |
arxiv-format/0807_4594v2.md | # Gamma-ray burst contributions to constraining the evolution of dark energy
Shi Qi
1Department of Physics, Nanjing University, Nanjing 210093, China
[email protected]
Fa-Yin Wang
2Department of Astronomy, Nanjing University, Nanjing 210093, China
[email protected]
Tan Lu
3Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008, China
[email protected]
4Joint Center for Particle, Nuclear Physics and Cosmology, Nanjing University - Purple Mountain Observatory, Nanjing 210093, China
Key Words: cosmological parameters - Gamma rays: bursts
## 1 Introduction
Since the discovery of the accelerating expansion of the universe (Riess et al. 1998; Perlmutter et al. 1999), a lot of work has been done on constraining the behavior of dark energy. In contrast to simple parametric forms for the dark energy equation of state (EOS), such as \\(w(z)=w_{0}+w^{\\prime}z\\) (Cooray & Huterer 1999) and \\(w(z)=w_{0}+w_{a}z/(1+z)\\) (Chevallier & Polarski 2001; Linder 2003), a nearly model-independent approach was introduced by Huterer & Cooray (2005) to estimate the evolution of dark energy independently, and was adopted in analyses using type Ia supernova (SN Ia) data (Riess et al. 2007; Sullivan et al. 2007; Sarkar et al. 2008). Qi et al. (2008) extended the approach to gamma-ray burst (GRB) luminosity data (Schaefer 2007), for which higher redshifts are accessible compared to SNe Ia. Though more stringent results have been obtained in this way, the dark energy EOS at high redshifts, where we only have GRBs, is still totally unconstrained, except that it is most likely negative. This is primarily because matter dominates dark energy at high redshifts and dark energy becomes less important for determining the cosmic expansion; constraining it therefore becomes more difficult. In this work we explore both how many GRBs are needed to get substantial constraints on the dark energy EOS at high redshifts beyond those of SNe Ia and where the GRBs' contributions lie for a foreseeable number of GRBs.
## 2 Methodology
For the uncorrelated estimates of dark energy EOS, we first separate the redshifts into several bins and assume a constant dark energy EOS for each bin. Then we generate a Markov chain through the Monte-Carlo algorithm according to the likelihood function. The covariance matrix of the dark energy EOS parameters is calculated based on the Markov chain, and a transformation is derived from the covariance matrix to decorrelate the EOS parameters. The evolution of dark energy is finally estimated from the uncorrelated EOS parameters (see Huterer & Cooray (2005) for details).
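The linear algebra of the decorrelation step is compact enough to sketch. The Python fragment below, with function names of our own choosing, takes a Markov-chain sample of the binned EOS parameters, forms the square root of the Fisher (inverse covariance) matrix, and normalises its rows to unit sum, as in Huterer & Cooray (2005); it is an illustration of the prescription, not code from the analysis itself.

```python
import numpy as np

def decorrelate_eos(chain):
    """Uncorrelated estimates of binned dark-energy EOS parameters.

    chain : (n_samples, n_bins) array of Markov-chain samples of w_i.
    Returns the weight matrix, the decorrelated estimates and their errors.
    """
    cov = np.cov(chain, rowvar=False)        # parameter covariance matrix
    fisher = np.linalg.inv(cov)              # Fisher matrix F = C^{-1}
    # Square root of F through its eigen-decomposition, F = O L O^T
    eigval, eigvec = np.linalg.eigh(fisher)
    sqrt_f = eigvec @ np.diag(np.sqrt(eigval)) @ eigvec.T
    # Rows normalised to unit sum: the new parameters are weighted
    # averages of the original w_i, well localized in redshift
    weights = sqrt_f / sqrt_f.sum(axis=1, keepdims=True)
    w_uncorr = weights @ chain.mean(axis=0)
    errors = np.sqrt(np.diag(weights @ cov @ weights.T))
    return weights, w_uncorr, errors
```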
The process of generating Markov chains in these procedures is very time-consuming, especially when there is a considerable number of observational/mock standard candles. We will see that we need a large number of GRBs to get substantial constraints on the dark energy EOS at high redshifts. Instead of using the uncorrelated approach directly, we adopt an alternative way to estimate this number. When estimating the constraints imposed on the dark energy EOS in a certain redshift range, we allow the EOS parameter to vary only in that redshift bin and fix the EOS parameters elsewhere, say, to \\(-1\\). For an approximate estimate, this is compatible with the uncorrelated approach, because the uncorrelated EOS parameters defined in Huterer & Cooray (2005) are themselves localized well in redshift bins. Of course, fixing the EOS at some redshifts will cause underestimates of the errors on the EOS at other redshifts, so the number we have estimated here can be viewed as a lower limit.
We determine the likelihood for EOS parameters from the \\(\\chi^{2}\\) statistic,
\\[\\chi_{n}^{2}=\\sum_{i=1}^{n}\\frac{\\left[\\mu_{\\nu_{i}}(\\theta)-\\mu_{\\omega_{i}}\\right]^{2}}{\\sigma_{i}^{2}}, \\tag{1}\\]
where \\(\\mu_{\\omega}\\) is the observational/mock distance modulus of the standard candles with standard deviation \\(\\sigma\\), \\(\\mu_{\\nu}(\\theta)\\) is the theoretically predicted distance modulus based on a cosmological model with parameter set \\(\\theta\\), and \\(n\\) is the number of the standard candles. We constrain dark energy EOS parameters with mock data, whose generation involves random numbers (see the details below). To reduce the impact from the fluctuation of the mock data themselves, we generate many more standard candles than needed. For example, if we want to see the constraints imposed by \\(n\\) samples of standard candles, we actually generate \\(N\\) mock samples with \\(N\\gg n\\) and calculate \\(\\chi_{N}^{2}\\). Then \\(\\chi_{n}^{2}\\) is given by
\\[\\chi_{n}^{2}=\\frac{n}{N}\\chi_{N}^{2}. \\tag{2}\\]
Throughout this paper, we choose \\(N=10^{5}\\).
The fiducial cosmological model we used to generate mock data is the flat \\(\\Lambda\\)CDM model with \\(\\Omega_{m}=0.279\\) and \\(H_{0}=70.1\\,\\rm{km/s/Mpc}\\) (see Komatsu et al. (2008)). For SNe Ia, we approximate the intrinsic noise in distance moduli as a Gaussian scatter with a dispersion of 0.1 magnitudes and the observational errors as 0.23 magnitudes, which are approximately the averages of the observational errors of the presently available SNe Ia (Riess et al. 2007; Wood-Vasey et al. 2007; Davis et al. 2007). The total error in our generated distance moduli for SNe Ia is therefore \\(\\sqrt{0.1^{2}+0.23^{2}}\\) magnitudes. We assume a uniform distribution for SNe Ia along the redshifts. For GRBs, instead of generating mock data for the five luminosity relations (see Schaefer (2007)), we directly generate distance moduli as we do for SNe Ia, for simplicity. The intrinsic scatter is set to 0.65 magnitudes, which is approximately the average of the errors of the GRBs' average distance moduli presented in Schaefer (2007), and we ignore the measurement uncertainties, which are less than the intrinsic scatter. We consider two kinds of distributions for GRBs in the redshift bin \\(1.8<z<7\\). One is the uniform distribution, the other a very rough approximation to the distribution presented in Fig. 2 of Bromm & Loeb (2006), i.e. \\(P(z)\\propto\\exp(-z/7)\\). We will see that our results are independent of the GRB distributions.
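As an illustration of Eqs. (1) and (2) and of the mock-data generation, the following Python sketch draws GRB redshifts from \\(P(z)\\propto\\exp(-z/7)\\), scatters the distance moduli by 0.65 magnitudes about the fiducial flat \\(\\Lambda\\)CDM prediction, and evaluates the rescaled \\(\\chi_{n}^{2}\\) for a trial EOS value in the high-redshift bin. The one-bin parametrisation, the function names, and the reduced mock-sample size (the analysis itself uses \\(N=10^{5}\\)) are choices made for this sketch only.

```python
import numpy as np
from scipy.integrate import quad

OMEGA_M, H0, C_KMS = 0.279, 70.1, 2.998e5    # fiducial flat LCDM; c in km/s

def rho_de(z, w_bin):
    """rho_DE(z)/rho_DE(0) for w = w_bin in 1.8 < z < 7 and -1 elsewhere.
    For a piecewise-constant EOS, exp(3 int (1+w)/(1+z') dz') is analytic."""
    if z <= 1.8:
        return 1.0
    return ((1.0 + min(z, 7.0)) / 2.8) ** (3.0 * (1.0 + w_bin))

def mu_theory(z, w_bin):
    """Distance modulus in a flat universe."""
    inv_e = lambda zp: 1.0 / np.sqrt(OMEGA_M * (1.0 + zp) ** 3
                                     + (1.0 - OMEGA_M) * rho_de(zp, w_bin))
    d_lum = (1.0 + z) * (C_KMS / H0) * quad(inv_e, 0.0, z)[0]   # in Mpc
    return 5.0 * np.log10(d_lum) + 25.0

# Mock GRBs: redshifts from P(z) ~ exp(-z/7) on 1.8 < z < 7 by rejection
# sampling; moduli scattered by 0.65 mag about the w = -1 prediction.
rng, N_MOCK, SIGMA = np.random.default_rng(1), 2000, 0.65
z = rng.uniform(1.8, 7.0, 20 * N_MOCK)
z = z[rng.uniform(0.0, 1.0, z.size) < np.exp(-(z - 1.8) / 7.0)][:N_MOCK]
mu_mock = (np.array([mu_theory(zi, -1.0) for zi in z])
           + rng.normal(0.0, SIGMA, z.size))

def chi2_n(n, w_bin):
    """chi^2 for n GRBs, rescaled from the full mock sample as in Eq. (2)."""
    mu_th = np.array([mu_theory(zi, w_bin) for zi in z])
    return (n / z.size) * np.sum((mu_th - mu_mock) ** 2) / SIGMA ** 2

print(chi2_n(500, -1.0), chi2_n(500, 0.0))   # probe w in the high-z bin
```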
## 3 Results
Figures 1 and 2 show our results for the constraints from GRBs distributed in the redshift bin \\(1.8<z<7\\) on the dark energy EOS parameter \\(w(1.8<z<7)\\). We can see that, for a few hundred GRBs, the only constraint is the steep drop at about zero seen in the probability function of the EOS parameter, \\(P(w)\\). This is consistent with the results in Qi et al. (2008). Only when we have more than about 5000 GRBs can we begin to get concrete constraints on the EOS parameter. The GRBs' distributions have little impact on this conclusion; i.e., it is difficult to constrain the dark energy EOS parameters beyond the redshifts of SNe Ia with GRBs unless some new luminosity relations for GRBs with smaller scatters are discovered.
However, this does not mean that high-redshift GRBs contribute little to constraining the dark energy EOS parameters. It has been demonstrated in Qi et al. (2008) that, even with the presently available 69 GRBs (Schaefer 2007), the constraints could be improved significantly at redshifts \\(0.5\\lesssim z\\lesssim 1.8\\). Part of the improvement stems from GRBs beyond redshift 1.8. Because the luminosity distances of standard candles depend on the behavior of the dark energy through an integration over the redshift, high-redshift GRBs put constraints on dark energy at lower redshifts, where dark energy is important in determining the cosmic expansion. And since there are few GRBs at low redshifts, the contributions from GRBs would lie primarily in the middle redshifts. In Figs. 3 and 4 we explicitly show the constraints from GRBs distributed in the redshift bin \\(1.8<z<7\\) on the dark energy EOS parameter \\(w(0.5<z<1.8)\\). For comparison, we also plot the constraints from SNe Ia uniformly distributed in the redshift bin \\(0.5<z<1.8\\) on the dark energy EOS parameter \\(w(0.5<z<1.8)\\) in Fig. 5. We can see that the contributions from GRBs are comparable to those from SNe Ia.
Figure 1: Constraints from GRBs uniformly distributed in the redshift bin \\(1.8<z<7\\) on the dark energy EOS parameter \\(w(1.8<z<7)\\). The different lines stand for different number of GRBs. The probabilities are normalized to be 1 at the maxima.
Figure 2: Constraints from GRBs distributed in the redshift bin \\(1.8<z<7\\) according to \\(P(z)\\propto\\exp(-z/7)\\) on the dark energy EOS parameter \\(w(1.8<z<7)\\).
## 4 Summary
We explored the GRBs' contributions in constraining the dark energy EOS at high redshifts (\\(1.8<z<7\\)) and at middle redshifts (\\(0.5<z<1.8\\)). When constraining the dark energy EOS in a certain redshift range, we allow the dark energy EOS parameter to vary only in that redshift bin and fix the EOS parameters elsewhere to \\(-1\\). We find that it is difficult to constrain the dark energy EOS parameters beyond the redshifts of SNe Ia with GRBs unless some new luminosity relations for GRBs with smaller scatters are discovered. However, at middle redshifts, GRBs have contributions comparable with those of SNe Ia in constraining the dark energy EOS.
###### Acknowledgements.
This work was supported by the Scientific Research Foundation of the Graduate School of Nanjing University (for Shi Qi), the Jiangsu Project Innovation for PhD Candidates CX07B-039z (for Fa-Yin Wang), and the National Natural Science Foundation of China under Grant No. 10473023.
## References
* () Bromm, V. & Loeb, A. 2006, Astrophys. J., 642, 382
* () Chevallier, M. & Polarski, D. 2001, Int. J. Mod. Phys., D10, 213
* () Cooray, A. R. & Huterer, D. 1999, Astrophys. J., 513, L95
* () Davis, T. M. et al. 2007, Astrophys. J., 666, 716
* () Huterer, D. & Cooray, A. 2005, Phys. Rev., D71, 023506
* () Komatsu, E. et al. 2008, arXiv:0803.0547 [astro-ph]
* () Linder, E. V. 2003, Phys. Rev. Lett., 90, 091301
* () Perlmutter, S. et al. 1999, Astrophys. J., 517, 565
* () Qi, S., Wang, F.-Y., & Lu, T. 2008, Astron. Astrophys., 483, 49, arXiv:0803.4304 [astro-ph]
* () Riess, A. G. et al. 1998, Astron. J., 116, 1009
* () Riess, A. G. et al. 2007, Astrophys. J., 659, 98
* () Sarkar, D., Sullivan, S., Joudaki, S., et al. 2008, Phys. Rev. Lett., 100, 241302
* () Schaefer, B. E. 2007, Astrophys. J., 660, 16
* () Sullivan, S., Cooray, A., & Holz, D. E. 2007, JCAP, 0709, 004
* () Wood-Vasey, W. M. et al. 2007, Astrophys. J., 666, 694
Figure 4: Constraints from GRBs distributed in the redshift bin \\(1.8<z<7\\) according to \\(P(z)\\propto\\exp(-z/7)\\) on the dark energy EOS parameter \\(w(0.5<z<1.8)\\).
Figure 5: Constraints from SNe Ia uniformly distributed in the redshift bin \\(0.5<z<1.8\\) on the dark energy EOS parameter \\(w(0.5<z<1.8)\\).
Figure 3: Constraints from GRBs uniformly distributed in the redshift bin \\(1.8<z<7\\) on the dark energy EOS parameter \\(w(0.5<z<1.8)\\). | Context:
Aims: We explore the gamma-ray bursts' (GRBs') contributions in constraining the dark energy equation of state (EOS) at high (\\(1.8<z<7\\)) and at middle redshifts (\\(0.5<z<1.8\\)) and estimate how many GRBs are needed to get substantial constraints at high redshifts.
Methods: We estimate the constraints with mock GRBs and mock type Ia supernovae (SNe Ia) for comparisons. When constraining the dark energy EOS in a certain redshift range, we allow the dark energy EOS parameter to vary only in that redshift bin and fix EOS parameters elsewhere to \\(-1\\).
Results: We find that it is difficult to constrain the dark energy EOS beyond the redshifts of SNe Ia with GRBs unless some new luminosity relations for GRBs with smaller scatters are discovered. However, at middle redshifts, GRBs have contributions comparable with those of SNe Ia in constraining the dark energy EOS. | Condense the content of the following passage. |
arxiv-format/0807_5043v1.md | # Monte Carlo approach to nuclei and nuclear matter
Stefano Fantoni
Stefano Gandolfi
Alexey Yu. Illarionov
Kevin E. Schmidt
Francesco Pederiva
## 1 Introduction
The improved accuracy of experimental data on nuclei, together with a rediscovered role of nuclear matter properties in the understanding of nuclear structure and several phenomena of astrophysical interest[1], has shown the need for a more detailed investigation of the ground state of nuclear many-body systems.
Recent realistic nuclear Hamiltonians have been used to compute properties of light nuclei in very good agreement with experiments[2]. However, the physical properties of nuclear and neutron matter could be very different from those of nuclei; in fact the nucleon density in the core of heavy nuclei reaches the maximum value of \\(\\rho_{0}\\)=0.16 fm\\({}^{-3}\\), while the relevant range of density of matter inside neutron stars is up to 9 times \\(\\rho_{0}\\)[3]. Therefore we are now facing, on one side, the problem of finding a fundamental scheme for the description of nuclear forces, valid from the deuteron up to dense nuclear matter, which is still open, and, on the other, that of solving a many-body problem which is made extremely complex by the strong spin-isospin dependence of the forces. In this paper we will not address the problem of determining the nuclear force. We will consider a nuclear Hamiltonian which provides good fits to the N-N data up to meson production and reproduces fairly well the ground state and the low energy spectra of light nuclei. This is made of two- and three-body spin-isospin dependent potentials. We present results for the ground state of large nuclear systems with this Hamiltonian.
It is well known that Quantum Monte Carlo (QMC) methods can provide estimates of physical observables at the best known accuracy[4], and they are therefore useful to gauge the validity of proposed interaction models without the bias of using more approximate methods.
A new generation of powerful QMC techniques has recently been devised to simulate large nucleonic systems with up to hundreds of nucleons: the Auxiliary Field Diffusion Monte Carlo (AFDMC)[5]. It has been used to compute the EOS of nuclear matter[6], showing important limitations of other many-body methods and of the modern nuclear interactions based on two- plus three-body potentials in the high density regime. The accuracy of AFDMC was demonstrated by comparing the ground state of light nuclei with results provided by Green's Function Monte Carlo (GFMC)[7], which is known to give accurate results for the properties of light nuclei up to A=12[2]. By sampling the spin-isospin states of the nucleons AFDMC can be applied to large systems; it was used to simulate the ground state of medium sized nuclei up to A=40[8], nuclear matter with up to A=108[6], the properties of neutron-rich nuclei[9; 10], neutron drops[11], and neutron matter with up to A=114[11; 12; 13].
In this paper we discuss the latest results of the computation of the equation of state (EOS) of neutron matter in both the normal and superfluid phases, the computation of the superfluid gap of neutron matter in the low-density regime, and some preliminary results about the EOS of isospin-asymmetric nuclear matter.
## 2 Hamiltonian
The ground state of nuclear systems can be realistically studied by starting from the non-relativistic nuclear Hamiltonian of the form
\\[H=-\\frac{\\hbar^{2}}{2m}\\sum_{i=1}^{N}\\nabla_{i}^{2}+\\sum_{i<j}v_{ij}+\\sum_{i<j<k}V_{ijk}\\,, \\tag{1}\\]
where \\(m\\) is the average mass of the proton and neutron, and \\(v_{ij}\\) and \\(V_{ijk}\\) are two- and three-body potentials; the effect of forces due to \\(n\\)-body terms with \\(n>3\\) on the low energy properties of light nuclei seems to be negligible. Such a form for the Hamiltonian has been shown to describe properties of light nuclei in excellent agreement with experimental data (see Ref. [2] and references therein). All the degrees of freedom responsible for the interaction between nucleons (such as the \\(\\pi\\), \\(\\rho\\), \\(\\Delta\\), etc.) are integrated out and included in \\(v_{ij}\\) and \\(V_{ijk}\\).
At present, several realistic nucleon-nucleon interactions (NN) fit scattering data with very high precision. We consider the NN potentials belonging to the Argonne family. Such interactions are written as a sum of operators:
\\[v_{ij}=\\sum_{p=1}^{M}v_{p}(r_{ij})O^{(p)}(i,j)\\,, \\tag{2}\\]
where \\(O^{(p)}(i,j)\\) are spin-isospin dependent operators. The number of operators \\(M\\) characterizes the interaction; the most accurate of them is the Argonne AV18 with M=18[14]. Here we consider some simpler forms derived from AV18, namely the AV8' and the AV6'[15] with a smaller number of operators. For many systems, the difference between these simpler forms and the full AV18 potential can be computed perturbatively. Most of the contribution of the NN is due to one pion exchange between nucleons, but the effect of other meson exchanges as well as some phenomenological terms are included.
The eight \\(O^{(p)}(i,j)\\) terms in AV8' are given by the four central components 1, \\(\\vec{\\tau}_{i}\\cdot\\vec{\\tau}_{j}\\), \\(\\vec{\\sigma}_{i}\\cdot\\vec{\\sigma}_{j}\\), \\((\\vec{\\sigma}_{i}\\cdot\\vec{\\sigma}_{j})(\\vec{\\tau}_{i}\\cdot\\vec{\\tau}_{j})\\), the tensor \\(S_{ij}\\), the tensor-\\(\\tau\\) component \\(S_{ij}\\vec{\\tau}_{i}\\cdot\\vec{\\tau}_{j}\\), where \\(S_{ij}=3(\\vec{\\sigma}_{i}\\cdot\\hat{r}_{ij})(\\vec{\\sigma}_{j}\\cdot\\hat{r}_{ij}) -\\vec{\\sigma}_{i}\\cdot\\vec{\\sigma}_{j}\\), the spin-orbit \\(\\vec{L}_{ij}\\cdot\\vec{S}_{ij}\\) and the spin-orbit-\\(\\tau\\)\\((\\vec{L}_{ij}\\cdot\\vec{S}_{ij})(\\vec{\\tau}_{i}\\cdot\\vec{\\tau}_{j})\\), where \\(\\vec{L}_{ij}\\) and \\(\\vec{S}_{ij}\\) are the total angular momentum and the total spin of the pair \\(ij\\).
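The spin-space content of these operators is easy to make explicit. The short Python sketch below, written only as an illustration and not taken from any production code, builds the \\(4\\times 4\\) two-nucleon spin matrices for \\(\\vec{\\sigma}_{i}\\cdot\\vec{\\sigma}_{j}\\) and the tensor \\(S_{ij}\\) from Pauli matrices and checks two textbook properties:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = [sx, sy, sz]

def pair(a, b):
    """Operator a on nucleon i times operator b on nucleon j."""
    return np.kron(a, b)

# sigma_i . sigma_j in the two-nucleon spin space
sig_dot_sig = sum(pair(s, s) for s in pauli)

def tensor(rhat):
    """S_ij = 3 (sigma_i . rhat)(sigma_j . rhat) - sigma_i . sigma_j."""
    s_r = sum(c * s for c, s in zip(rhat, pauli))
    return 3.0 * pair(s_r, s_r) - sig_dot_sig

# sigma_i.sigma_j has eigenvalue -3 in the S=0 channel and +1 for S=1;
# the tensor operator is traceless.
print(np.linalg.eigvalsh(sig_dot_sig))          # [-3.  1.  1.  1.]
print(abs(np.trace(tensor([0.0, 0.0, 1.0]))))   # 0.0
```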
The AV6' has the same structure of AV8', but the spin-orbit operators are dropped. In general, all the AVx' interactions are obtained starting from the AV18, written by dropping less important operators, and _refitted_ in order to keep the most important features of NN in the scattering data[15].
The three-body interaction (TNI) is essential to overcome the underbinding of nuclei with more than two nucleons. The NN is fitted to scattering data and correctly gives the deuteron binding energy, but starting with \\({}^{3}H\\) it is not sufficient to describe the ground state of light nuclei. The Urbana-IX (UIX) potential corrects this limitation of the NN; it was fitted to light nuclei and to correctly reproduce the expected saturation energy of nuclear matter[16]. It essentially contains the Fujita-Miyazawa term[17] that describes the exchange of two pions between three nucleons, with the creation of an intermediate excited \\(\\Delta\\) state. Again, a phenomenological part is required to sum all the other neglected terms. The generic form of UIX is the following:
\\[V_{ijk}=V_{2\\pi}+V_{R}\\,. \\tag{3}\\]
The Fujita-Miyazawa term[17] is spin-isospin dependent:
\\[V_{2\\pi}=A_{2\\pi}\\sum_{cyc}\\left[\\{X_{ij},X_{jk}\\}\\{\\tau_{i}\\cdot\\tau_{j}, \\tau_{j}\\cdot\\tau_{k}\\}+\\frac{1}{4}[X_{ij},X_{jk}][\\tau_{i}\\cdot\\tau_{j},\\tau_ {j}\\cdot\\tau_{k}]\\right], \\tag{4}\\]
where the \\(X_{ij}\\) operators describe the one pion exchange, and their structure is the same as that of AV6'. The phenomenological part is
\\[V_{ijk}^{R}=U_{0}\\sum_{cyc}T^{2}(m_{\\pi}r_{ij})T^{2}(m_{\\pi}r_{jk})\\,. \\tag{5}\\]
The factors \\(A_{2\\pi}\\) and \\(U_{0}\\) are kept as fitting parameters. The binding energy of symmetrical nuclear matter is not well reproduced by such a force. Other forms of TNI, called Illinois forces[18], which include three-nucleon Feynman diagrams with two-\\(\\Delta\\) intermediate states, are available. However, they provide unrealistic overbinding of neutron systems when the density increases[11; 12] and they do not seem to describe high density (already at \\(\\rho\\geq\\rho_{0}\\)) nucleonic systems realistically.
## 3 The AFDMC Method
Ground state AFDMC simulations rely, as do other traditional QMC methods, on previous variational calculations, often performed within FHNC theory[19], to compute a trial wave function \\(\\Psi_{T}\\), which is used to guide the sampling of the random walk. A typical form for \\(\\Psi_{T}\\) is given by a correlation operator \\(\\hat{F}\\) operating on a mean field wave function \\(\\Phi(R)\\),
\\[\\langle R,S|\\Psi_{T}\\rangle=\\hat{F}\\Phi(R)\\,. \\tag{6}\\]
Mean field wave functions \\(\\Phi(R)\\) that have been used are: (i) a Slater determinant \\(\\Phi_{FG}\\) of plane wave orbitals for nuclear and neutron matter in the normal phase, (ii) a linear combination \\(\\Phi_{sp}\\) of a small number of antisymmetric products of single particle orbitals \\(\\phi_{j}(\\vec{r}_{i},s_{i})\\) for nuclei and neutron drops, and (iii) a pfaffian \\(\\Phi_{pf}\\), namely an antisymmetric product of independent pairs for neutron matter in superfluid phase.
A realistic correlation operator is the one provided by FHNC/SOC theory, namely \\(\\mathcal{S}\\prod_{j>i}\\sum_{p=1}^{M}f^{(p)}(r_{ij})O^{(p)}(i,j)\\), where \\(\\mathcal{S}\\) is the symmetrizer and the operators \\(O^{(p)}(i,j)\\) are the same as those appearing in the two-body potential.
Unfortunately, the evaluation of this wave function requires exponentially increasing computational time with the number of particles. This procedure is followed in variational and Green's function Monte Carlo calculations, where the full sum over spin and isospin degrees of freedom is carried out. Since for large numbers of particles it is not computationally feasible to evaluate these trial functions, the much simpler correlation operator \\(\\prod_{j>i}f^{c}(r_{ij})\\), which contains the central Jastrow correlation only, is used instead. The evaluation of the corresponding trial function requires order \\(A^{3}\\) operations to evaluate the Slater determinants and \\(A^{2}\\) operations for the central Jastrow. Since many important correlations are neglected in these simplified functions, we use the Hamiltonian itself to define the spin sampling.
The AFDMC method works much like Diffusion Monte Carlo[4; 5; 20; 21; 9]. The wave function is defined by a set of what we call walkers. Each walker is a set of the \\(3A\\) coordinates of the particles plus a number \\(A\\) of four component spinors each representing a spin-isospin state. The imaginary time propagator for the kinetic energy and the spin-independent part of the potential is identical to that used in standard diffusion Monte Carlo. The new positions are sampled from a drifted Gaussian with a weight factor for branching given by the local energy of these components. Since they do not change the spin state, the spinors will be unchanged by these parts of the propagator.
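The flavour of this part of the algorithm can be conveyed by a toy example. The Python sketch below runs plain diffusion Monte Carlo for one particle in a harmonic well (\\(\\hbar=m=\\omega=1\\)) with a deliberately imperfect Gaussian trial function: positions are moved by a drifted Gaussian and walkers branch according to the local-energy weight, exactly the position part of the scheme described above. The example problem and all parameter values are ours; the spin-isospin sampling discussed next has no analogue in one dimension.

```python
import numpy as np

def local_energy(x, a):
    """E_L = -(1/2) psi''/psi + x^2/2 for the trial psi = exp(-a x^2/2)."""
    return 0.5 * a + 0.5 * x * x * (1.0 - a * a)

def dmc_step(x, a, dtau, e_ref, rng):
    """Drifted-Gaussian move plus branching, one imaginary-time step."""
    x_new = x - a * x * dtau + rng.normal(0.0, np.sqrt(dtau), x.size)
    e_mid = 0.5 * (local_energy(x, a) + local_energy(x_new, a))
    weight = np.exp(-dtau * (e_mid - e_ref))
    copies = (weight + rng.uniform(0.0, 1.0, x.size)).astype(int)
    return np.repeat(x_new, copies)           # replicate or kill walkers

rng = np.random.default_rng(0)
walkers = rng.normal(0.0, 1.0, 1000)
a, dtau, e_ref = 0.8, 0.01, 0.5               # imperfect trial function
for _ in range(2000):
    walkers = dmc_step(walkers, a, dtau, e_ref, rng)
    e_ref += 0.001 * np.log(1000.0 / walkers.size)   # population control
print("mixed-estimator energy:", local_energy(walkers, a).mean())  # -> 0.5
```

Even with the imperfect trial function (\\(a\\neq 1\\)), the branching random walk projects out the exact ground-state energy 0.5; this is the property that makes the method accurate when a simple central Jastrow guiding function is used.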
To sample the spinors we first use a Hubbard Stratonovich transformation to write the propagator as an integral over auxiliary fields of a separated product of single particle spin-isospin operators. We then sample the auxiliary field value, and the resulting sample independently changes each spinor for each particle in the sample, giving a new sampled walker.
More details about the AFDMC method can be found in Ref. [11].
## 4 Nucleonic Matter
The properties of nuclear matter, like the Equation of State (EOS), are of fundamental importance in nuclear physics, mainly because nuclei behave very much like liquid drops. Indeed, nuclear masses can be described by a mass formula which fits the data of stable nuclei from \\(A\\sim 20\\) on. Any such mass formula has a volume and a symmetry term, provided by symmetrical nuclear matter and nuclear matter with \\(N>Z\\) respectively. Moreover, accurate model independent calculations of the above observables are much needed in the physics of heavy ion reactions, as well as in that of lepton and neutrino scattering off nuclei at intermediate energies. Medium effects have to be taken into account for the data analysis of such reactions at the present level of accuracy.
In addition, the theoretical knowledge of the properties of asymmetric nuclear matter at low temperature is needed to predict the structure, the dynamics and the evolution of stars, in particular during their last stages, when they become ultra-dense neutron stars.
We present in the following the results obtained with AFDMC for the EOS of pure neutron matter in the normal phase, as well as the gap of the BCS phase of neutron matter[11; 22; 13]. Previous results for the EOS of nuclear matter and nuclei can be found in Refs. [6; 8; 11].
Neutron matter is simulated by considering \\(N\\) neutrons in a periodic box, and particular care is taken to evaluate the effects due to the finite size of the box. More details can be found in Refs. [12; 11].
In Fig. 1 we plot the AFDMC equation of state, obtained from the energy of 66 neutrons, and the variational calculation of Akmal et al. of Ref. [23], where the AV18 NN interaction combined with the Urbana UIX TNI was considered. As can be seen, both the AV8' and the AV18 essentially give an EOS with the same behavior, but the addition of the TNI introduces some differences, in particular at higher densities.
The AV8' interaction should be more attractive than AV18, as shown in light nuclei and in neutron drop calculations[7]. This result is not confirmed by our calculations, as is clearly visible in Fig. 1. The AFDMC has proved to be in very good agreement with the GFMC results for light nuclei[8], and we believe that the same accuracy is reached in the neutron matter calculation, as shown in the comparison with the GFMC results for 14 neutrons[11; 13]. On the other hand, the AFDMC calculations of nuclear matter have shown that FHNC/SOC does not seem to provide safe energy upper bounds. This is because of the lack of cluster diagrams with commutator terms beyond the SOC approximation and that of elementary diagrams, not included in the FHNC summation[6].
Figure 1: The AFDMC equation of state evaluated by simulating 66 neutrons in a periodic box. See the text for details.
The addition of the UIX three-body interaction to the Hamiltonian increases the differences between the AFDMC results and those of Akmal et al. The difference cannot be due to finite-size effects in our calculation for the following reason: the total contribution of UIX should be positive in neutron matter, so that the inclusion of box corrections as done for the two-body part of the Hamiltonian would eventually increase the total energy. The periodic box-FHNC estimation of this effect essentially confirms this observation[12].
It is worth observing how important the three-nucleon interaction already is at medium-high densities. Its contribution at \\(2\\rho_{0}\\) is \\(\\sim 25\\)MeV and increases very rapidly with density. The four Illinois potentials[18], built to include two \\(\\Delta\\) intermediate states in the three nucleon processes, lead to very different results compared to the Urbana IX[12; 11] EOS at medium-high densities, in spite of the fact that all of them provide a satisfactory fit to the ground state and the low energy spectrum of nuclei with \\(A\\leq 8\\). This, once more, points out the importance of understanding the role of \\(n\\)-body forces with \\(n>3\\) in nuclear astrophysics.
We explored the superfluid phase of low-density neutron matter. Because the AFDMC projects out the lowest energy state with the same symmetry and phase as the trial wave function, we repeated some calculations using a BCS trial wave function of the form of Ref. [24]. The BCS state provides an energy that is lower than that of the normal state; however, the difference never exceeds 3% of the total energy. We report the energy per neutron in the low-density regime evaluated using a normal and a BCS trial wave function in Fig. 2.
In the low-density regime the neutron matter is in a \\({}^{1}S_{0}\\) superfluid phase. In this regime, the neutron-neutron interaction is dominated by this channel, whose scattering length is very large and negative, about a=-18.5 fm. Several attempts to compute the pairing gap using a bare effective interaction like those used for cold atom problems have been performed in the last few years[22]. However, our results show that an accurate calculation of the superfluid gap in this regime must include the full Hamiltonian instead of one describing the \\({}^{1}S_{0}\\) physics only.
We computed the superfluid gap by means of AFDMC, which allows for quantum simulations of the superfluid phase of neutron matter by solving the ground state of the full Hamiltonian with AV8'\\(+\\)UIX, without the use of a simplified interaction. In Fig. 3 the AFDMC gap is compared with standard BCS theory and with other frequently cited calculations based on correlated theories at the two-body approximation (see also Ref. [13]).
Figure 2: The EOS of neutron matter in the low–density regime. The two calculations were performed using different trial wave functions modeling a normal and a BCS state.
The computation of isospin-asymmetric nuclear matter is possible with a trivial modification of the trial wave function used to project out the ground state of the system. The asymmetry is defined by
\\[\\alpha=\\frac{N-Z}{N+Z}\\,, \\tag{7}\\]
where \\(N\\) is the number of neutrons and \\(Z\\) that of protons. We report in Fig. 4 preliminary results for the EOS of nuclear matter for different values of \\(\\alpha\\), using a simplified Hamiltonian containing the AV6' NN interaction. The various curves of the figure, from top to bottom, are in the same order as in the legend, which also indicates the number of neutrons and protons for each simulation.
As displayed in the legend of Fig. 4, some calculations were performed using a very small number of protons. These results could suffer important finite size effects. The study of finite size effects, particularly those due to the kinetic energy, is in progress. It is possible to reduce the dependence of the kinetic energy on the number of particles by using different kinds of periodic boundary conditions, e.g. the so-called twist averaged boundary conditions (TABC)[25]. The computation of several EOS using TABC to assess the effect on the kinetic energy is under way.
Figure 4: The EOS of nuclear matter as a function of the density and of the isospin–asymmetry. See the text for details.
Figure 3: The superfluid gap computed using AFDMC, compared with other techniques.
## 5 Conclusions and perspectives
We have briefly shown some recent results obtained using AFDMC theory. This recently developed Quantum Monte Carlo method provides results for the binding energy of light nuclei[8] and neutron drops[11] which are in very good agreement with accurate GFMC calculations. However, unlike GFMC, AFDMC is well suited to deal with nucleonic systems with up to hundreds of nucleons in the Hamiltonian.
We have presented AFDMC results for the EOS of neutron matter in the normal phase[13]. In the low-density regime where neutrons form a superfluid phase, we modified the trial wave function in order to include important BCS correlations to compute the superfluid gap of neutron matter[22]. Preliminary results for the EOS of asymmetric nuclear matter have also been presented and discussed.
It should be stressed that, besides having shown that AFDMC theory opens up the possibility of studying the properties of large nucleonic systems, with an accuracy which goes much beyond that of other commonly used many-body theories, the results we have already obtained indicate serious inadequacies of commonly used nuclear interactions in the high density regime, of interest in astronuclear physics.
The problem of determining the nuclear Hamiltonian, and in particular the effect of \\(n\\)-body forces with \\(n>3\\), is becoming of primary importance. We are working on developing a new form for the interaction that perturbatively contains the excitation of nucleons. The corresponding potential naturally generates many-body forces, which should be small in nuclei, but of primary importance in nuclear and neutron matter.
We thank J. Carlson for very useful and stimulating discussions. Calculations were partially performed on the BEN cluster at ECT* in Trento, under a grant for supercomputing projects, and partially on the HPC facility of Democritos/SISSA.
## References
* (1) G. G. Raffelt, _The stars as laboratories of fundamental physics_, University of Chicago, Chicago & London, 1996.
* (2) S. C. Pieper, _Nucl. Phys. A_**751**, 516 (2005).
* (3) J. Piekarewicz, _Phys. Rev. C_**69**, 041301 (2004).
* (4) K. E. Schmidt, and M. H. Kalos, _Applications of the Monte Carlo method in statistical physics_, 1984, topics in Applied Physics N. 36 (Springer, New York).
* (5) K. E. Schmidt, and S. Fantoni, _Phys. Lett. B_**446**, 99 (1999).
* (6) S. Gandolfi, F. Pederiva, S. Fantoni, and K. E. Schmidt, _Phys. Rev. Lett._**98**, 102503 (2007).
* (7) B. S. Pudliner, V. R. Pandharipande, J. Carlson, S. C. Pieper, and R. B. Wiringa, _Phys. Rev. C_**56**, 1720-1750 (1997).
* (8) S. Gandolfi, F. Pederiva, S. Fantoni, and K. E. Schmidt, _Phys. Rev. Lett._**99**, 022507 (2007).
* (9) S. Gandolfi, F. Pederiva, S. Fantoni, and K. E. Schmidt, _Phys. Rev. C_**73**, 044304 (2006).
* (10) S. Gandolfi, F. Pederiva, and A. a Beccara, _Eur. Phys. J. A_**35**, 207 (2008).
* (11) S. Gandolfi, _The Auxiliary Field Diffusion Monte Carlo Method for Nuclear Physics and Nuclear Astrophysics_, 2007, Ph.D. thesis, arXiv:0712.1364[nucl-th].
* (12) A. Sarsa, S. Fantoni, K. E. Schmidt, and F. Pederiva, _Phys. Rev. C_**68**, 024308 (2003).
* (13) S. Gandolfi, K. E. Schmidt, S. Fantoni, F. Pederiva, and A. Y. Illarionov (2008), in preparation.
* (14) R. B. Wiringa, V. G. J. Stoks, and R. Schiavilla, _Phys. Rev. C_**51**, 38-51 (1995).
* (15) R. B. Wiringa, and S. C. Pieper, _Phys. Rev. Lett._**89**, 182501 (2002).
* (16) B. S. Pudliner, V. R. Pandharipande, J. Carlson, and R. B. Wiringa, _Phys. Rev. Lett._**74**, 4396-4399 (1995).
* (17) J. Fujita, and H. Miyazawa, _Prog. Theor. Phys._**17**, 360 (1957).
* (18) S. C. Pieper, V. R. Pandharipande, R. B. Wiringa, and J. Carlson, _Phys. Rev. C_**64**, 014001 (2001).
* (19) V. R. Pandharipande, and R. B. Wiringa, _Rev. Mod. Phys._**51**, 821-861 (1979).
* (20) S. Fantoni, A. Sarsa, and K. E. Schmidt, _Phys. Rev. Lett._**87**, 181101 (2001).
* (21) F. Pederiva, A. Sarsa, K. E. Schmidt, and S. Fantoni, _Nucl. Phys. A_**742**, 255-268 (2004).
* (22) S. Gandolfi, A. Y. Illarionov, S. Fantoni, F. Pederiva, and K. E. Schmidt (2008), arXiv:0805.2513[nucl-th].
* (23) A. Akmal, V. R. Pandharipande, and D. G. Ravenhall, _Phys. Rev. C_**58**, 1804-1828 (1998).
* (24) A. Fabrocini, S. Fantoni, A. Y. Illarionov, and K. E. Schmidt, _Phys. Rev. Lett._**95**, 192501 (2005).
* (25) C. Lin, F. H. Zong, and D. M. Ceperley, _Phys. Rev. E_**64**, 016702 (2001). | We report on the most recent applications of the Auxiliary Field Diffusion Monte Carlo (AFDMC) method. The equation of state (EOS) for pure neutron matter in both normal and BCS phase and the superfluid gap in the low-density regime are computed, using a realistic Hamiltonian containing the Argonne AV8' plus Urbana IX three-nucleon interaction. Preliminary results for the EOS of isospin-asymmetric nuclear matter are also presented.
Keywords: nuclear matter, asymmetric nuclear matter, neutron matter, equation of state, nuclei, superfluid gap, Quantum Monte Carlo. PACS: 21.10.Dr, 21.60.De, 21.65.-f, 21.65.Cd, 21.65.Mn | Provide a brief summary of the text. |
arxiv-format/0808_0836v1.md | # The Optical Alignment System of the ZEUS MicroVertex Detector
K. Korcsak-Gorzo, G. Grzelak,
K. Oliver, M. Dawson,
R. Devenish,
J. Ferrando, T. Matsushita,
P. Shield,
R. Walczak
Department of Physics, University of Oxford,
Denys Wilkinson Building, Keble Road, Oxford OX1 3RH
## 1 Introduction
A silicon-strip MicroVertex Detector (MVD) was added to the ZEUS detector in 2001 as part of an upgrade programme for high luminosity running with the HERA-II electron-proton collider at DESY[1]. One of the main physics motivations was to improve the study of heavy flavour production at HERA, particularly charm. With mean decay lengths of the order of 100 \\(\\mu\\)m for charmedhadrons, precise alignment of the MVD active elements is crucial if their intrinsic spatial resolution of 10 \\(\\mu\\)m and better is to be properly exploited. Alignment has been addressed in three stages: i) during construction the position of the silicon strip detectors was measured with respect to the local support structure using an accurate 3-D measuring machine; ii) an optical alignment system tracks large movements of the MVD support structure; iii) individual MVD sensors are aligned precisely using charged-particle tracks from HERA run data. This paper describes the laser alignment system used for the second stage and summarises its performance. The primary aims of the laser system are to track large movements (at the level of 100 \\(\\mu\\)m) of the MVD support structure with respect to the central tracking detector (CTD) and to define periods of stability for the track alignment. A prototype [2] of the system described here was tested and used during the construction of the MVD.
The paper is organised as follows: the MVD is described very briefly in the next section; the optical alignment system is described in section 3; the readout and online control system is described in section 4; data reduction and reconstruction is covered in section 5; the results are summarised in section 6 and the paper concludes with a short summary (section 7).
## 2 The ZEUS Microvertex Detector
The space available for the MVD is limited by the CTD and the shape of the beam pipe. The space inside the tracking detector has a length of about 2 m and a diameter of 32 cm. The design requirements of the MVD were: polar angular coverage of \\(10^{\\circ}-170^{\\circ}\\); at least three measurements, in two projections, per track; at least 20 \\(\\mu\\)m hit resolution and impact parameter resolution of around 100 \\(\\mu\\)m at \\(90^{\\circ}\\) for tracks with momentum of at least 2 GeV/c. In order to meet these requirements within the limited space, the MVD consists of two parts, barrel and wheels, which are supported by a carbon-fibre tube made in two half-cylinders. The barrel has three concentric layers of silicon sensors but only the forward region is instrumented with sensors mounted on four wheels, numbered from 0 to 3. This follows from the unequal HERA beam energies, with 27.5 GeV electrons and 920 GeV protons, reaction products are boosted along the forward proton direction. The layout of the MVD is shown in Fig. 1 and Fig. 2 shows cross sections of the barrel and a wheel. All the MVD services and readout connections are made through the rear end of the detector. The region at the rear of the MVD (shown to the right of the barrel in Fig. 1) is used for cooling water distribution manifolds.
The MVD is described in more detail in references [3] and [4]. As the MVD has to fit into an existing detector, getting services in and out is not easy. The route that cables follow is close to the rear beampipe, through the beam hole in the rear tracking detector and rear calorimeter. A further challenge is that, as part of the measures to increase the luminosity for HERA-II, a superconducting combined-function magnet (the HERA GG magnet) penetrates the detector around the rear beampipe. Getting all the MVD cables to fit between this magnet and the rear calorimeter was particularly challenging. The special very thin cables designed to keep material at a minimum within the detector run for about five metres from the MVD to four cable patch-boxes located above the first HERA magnets outside the ZEUS detector on the rear (upstream proton) side. From here much more robust cables take services and signals to the MVD services and readout racks about 20 m away.
## 3 Optical Alignment System
The MVD laser alignment system consists of five straightness monitors placed around the circumference of the MVD support tube. Each straightness monitor consists of a collimated laser beam, approximately parallel to the collider beam line, with seven semi-transparent silicon sensors positioned along its path. Two of the sensors, mounted on the forward and rear CTD end-plates, define the line for that laser beam. The five remaining sensors are mounted within the MVD at the forward and rear end-flanges, the forward and rear barrel-flanges and the support structure for wheel-3, as shown schematically in Fig. 3. Inside the MVD a laser beam is contained between sensors within a narrow carbon-fibre cover with a semicircular profile, glued to the inner surface of the outer half-cylinders. Each sensor provides two mutually orthogonal position measurements, with resolution better than 10 \\(\\mu\\)m, in a local sensor coordinate system. The alignment system is sensitive to rotations, twists and sags of the MVD support structure with respect to the CTD from which it is mounted. The system is not sensitive to translations along the beam direction.
### Position sensors and signal cables
The laser position sensors, DPSD-516 transparent silicon diodes (TSD), use semi-transparent amorphous-silicon as the active material. The sensors were developed by H. Kroha _et. al._[5] and manufactured by EG&G Heimann Opto-electronics. The active material of the TSD sensors has a thickness of \\(\\sim 1\\,\\mu\\)m and an area of \\(5\\times 5\\) mm\\({}^{2}\\). Signals are read out by strips made of \\(\\sim 100\\) nm thick indium-tin-oxide, with 16 strips on each side of the amorphous-silicon. The strips on opposite sides are perpendicular to each other, with strip pitch 312 \\(\\mu\\)m and strip gap 10 \\(\\mu\\)m. The whole structure is deposited on a 0.5 mm thick glass substrate. The sensor is transparent to light with wavelength greater than \\(\\sim 600\\) nm. Transmission reaches a rough plateau of 80% at wavelengths around 700 nm and above, as shown in Fig. 6 of reference [2]. Howeverthe sensitivity of the sensor drops fairly rapidly above 700 nm. Full details of the sensor and its performance are given in reference [5].
Sensors are positioned in planes perpendicular to the beamline at seven locations:4 0 (RCTD) and 7 (FCTD) are attached to the CTD at its rear and forward end-plates, respectively; 1 (RMVD) and 6 (FMVD) are just inside the rear and forward end flanges of the MVD support tube,respectively; 2 (RBarrel) and 3 (FBarrel) are on the rear and forward MVD barrel-flanges and 5 (Wheel-3) is at the position of MVD most forward wheel, wheel-3. Plane-4 at the position of wheel-1 could not be installed because of space constraints. The TSD sensor strip signals are read out in a local \\(x-y\\) coordinate system where \\(x\\) is defined to be from the bottom ohmic contacts (cathodes) and \\(y\\) from the upper bias voltage contacts (anodes). For space reasons the sensors on the two planes at the rear end had to be mounted in a reversed orientation. Fig. 4 shows the orientation of the local coordinate systems for standard and reversed sensors (top figures). The bottom figure shows the bonding of the special flat readout cable to sensor strips in the standard mounting. Note that the coordinate will vary in a direction at right-angles to the readout strips.
Footnote 4: Table 1 gives the precise positions.
The relationship between the sensor local coordinate system and the ZEUS coordinate system depends on the sensor location. More details are given in the section on reconstruction and data analysis. The readout cables for the sensors presented quite a challenge. Although the MVD has a much higher spatial precision than the large drift chamber outside it, the benefit of these points could be significantly reduced in the overall track fit if the MVD increased the multiple scattering by too much. Thus the readout cables had to have minimum mass and, as there was no space for on- or near-detector amplification, the impedance of the cables over the total length from sensor to readout board of around 25 m had to be kept low. A special cable was designed and fabricated by Oxford engineers and technical staff. Using a photo-fabrication technique \\(32\\times 250\\,\\mu\\)m tracks were etched onto a \\(1\\,\\mathrm{m}\\times 0.5\\,\\mathrm{m}\\) flexible kapton-backed copper sheet. Then using an ingenious 'cut and fold' technique continuous flexible cables with thickness of only 200 \\(\\mu\\)m and of various lengths up to 20 m were produced.
### Lasers and optical fibres
The wavelength of the laser should be long enough to give good sensor transmission, but short enough to have adequate sensor sensitivity. The available aperture for the laser beam is \\(5\\times 5\\,\\mathrm{mm}^{2}\\) and the beam should be contained in the aperture over a length of 2 m. Taking all these requirements into consideration, the laser was chosen to have a wavelength of 780 nm, a gaussianprofile and a beam diameter of 1.5 mm at the waist. More details on the characterisation of the laser beam are given in reference [2]. The positions of the five laser beams around the circumference of the MVD support tube are shown in Fig. 5. A sixth laser beam was planned at the position 208.25\\({}^{\\circ}\\) but had to be abandoned because of conflicting demands for space. The numbering and power of the lasers are given in Table 2. Unfortunately beam-0 was knocked out of alignment during work to modify a radiation absorber inside the beampipe at the rear end of the MVD. Given the extreme difficulty of reaching the optical fibre laser heads at this time, the beam could not be realigned. The lasers, IFLEX600 CW with maximum power rating of 5.2 mW and 0.65 mm collimated output, were provided by Point Source [6]. The lasers and their power supplies are positioned with other MVD service and readout crates about 25 m from the detector. The laser beams are carried by optical fibres,5 following the cable route through the patch box, with fibre connectors, to the laser fibre heads at the rear end of the MVD support tube. The fibre heads are mounted on a ring attached to the CTD and 'pointing' adjustments for a laser beam are made using three screws for each head. Given the very limited space between the beam pipe and the CTD rear on-board electronics, the final adjustment is a difficult and delicate task, that becomes impossible once the rear tracking detector is installed.
Footnote 5: Point Source 5 mm outer diameter stainless-steel jacket fibres.
The lasers and their power supplies are contained in two'shutter boxes', one box with three lasers and the other box with two lasers. Once the lasers are powered on, the light output can be'switched' on or off by opening or closing a mechanical shutter across all three laser beams. The shutter is moved by an electromagnet which can be under operator or software control. The shutter control is also interfaced to the ZEUS slow-control and safety interlock systems. In normal operation the lasers are powered continuously to avoid fluctuations when they are powered up. The beams are switched on or off for the various data collection procedures by use of the shutters.
### Sensor Resolution
Before installation the TSD sensors were characterised using a test set-up consisting of a computer controlled two-dimensional translation stage mounted on an optical rail system [2]. For a given sensor plane the basic measurements are strip signals above the pedestal values. For a laser beam reasonably well-centred on the sensor and perpendicular to the sensor plane, the strip signals follow a gaussian distribution. Full details on how the position measurements are made in the final system are given in Section 5.
As originally noted by Kroha [7], the resolution for measurement of the laser beam position could be improved by correcting for variations in the thickness of the amorphous silicon layer and the glass substrate. Such variations may lead to interference patterns in the transmission and absorption of the laser light thus affecting the beam position measurement, see also Bauer et al [8]. Correction matrices with a 250 \\(\\mu\\)m grid were measured for all sensors. Using the correction matrices, the best resolution obtained in the test setup was less than 2 \\(\\mu\\)m - details are given in reference [2]. However, it was decided not to use the correction matrices in the routine analysis of the laser data as other systematic effects were much larger. These will be discussed in Section 5.
## 4 Readout and control
The photo-current signals from the sensor strips are carried from the MVD by the special cables described in Section 3.1 via the patch-boxes to the readout cards. For each of the eight \\(z\\)-locations along the MVD support tube all five TSD sensors are read out by a single card and all the cards are located in a VME crate at the MVD services area. There the signals are multiplexed, amplified by a current-voltage transformer, digitised and stored in memory on VMEbus boards. This memory is addressed using a complete rewrite of the VME driver to exploit Motorola's TUNDRA chipset [9, 10]. The data are transferred via TCP/IP and made available to the ZEUS event building components.
The laser alignment slow control and data acquisition are fully integrated into the existing ZEUS MVD framework [4], which is based on VME board computers running Lynx OS and Intel PCs running SuSE GNU/Linux. This slow control incorporates an interface to the ZEUS safety control system prohibiting operation of the lasers when the beam lines are accessible to people.
The readout and slow control systems have been designed to allow two modes of operation [11]. For the first the laser system is fully integrated with the ZEUS run control system, data taken is stored on tape as part of the main ZEUS data store. It is then available for analysis along with the slow-control and general environmental records from normal data-taking. For the second mode the system can run in parallel with normal ZEUS data taking, but the laser data is now stored on disks on the Intel PC's via NFS. ASCII format copies of laser data from both data-taking modes are also stored on a local MVD computer disk.
Data analysis and first results
The raw data from a single laser run comprises two sets of ADC counts for both local \\(x\\)- and \\(y\\)-coordinate strips at each sensor plane along all five beam lines. The first set is taken with the laser shutters closed to establish the pedestal values, followed by the second set with shutters open. The raw signals are then given by subtracting the pedestal values from the 'laser-on' values. Early studies of the laser alignment data showed that typical values for the dark currents of the anode and cathode strips were around 0 and up to 50 ADC counts, respectively. A number of cuts and corrections were applied to strip signals, \\(I_{i}\\), after pedestal subtraction and before applying the position algorithm:
* negative strip currents (\\(I_{i}<0\\)) set to zero;
* isolated 'hot strips' with \\(I_{i}>1000\\,\\)counts set to zero;
* strips with large ratios to both neighbours (\\(I_{i}/I_{i+1}\\) and \\(I_{i}/I_{i-1}>10\\)) rejected;
* 'empty' sensor plane with \\(\\sum_{i=1}^{16}I_{i}<160\\,\\)counts rejected;
* to avoid edge effects the first two and last two strips are ignored.
### Mean position
Two methods for determining the mean position, \\(\\bar{x}\\), and resolution, \\(\\sigma\\), of the signal in a sensor plane were considered. The simplest is to use a current weighted mean
\\[\\bar{x}=\\sum_{i}x_{i}w_{i}\\ \\ \\mbox{with}\\ \\ w_{i}=\\frac{I_{i}}{\\sum_{i}I_{i}} \\ \\ \\mbox{and}\\ \\ \\sigma=\\frac{d}{\\sqrt{12}}\\sqrt{\\sum_{i}w_{i}^{2}},\\]
where \\(d\\approx 300\\,\\mu\\)m is the strip width. The strip position, \\(x_{i}\\), is taken to be at the centre of the strip and the error is assumed to be uniformly distributed over the strip width. This gives a conservative value of around \\(40\\,\\mu\\)m for the mean position resolution. A more sophisticated approach is to assume a gaussian profile for the strip-current distribution of a sensor plane. The parameters \\(\\bar{x}\\) and \\(\\sigma\\) are determined by fitting the profile to \\(N\\exp\\left[-(x-\\bar{x})^{2}/(2\\sigma^{2})\\right]\\), with the error on an individual strip position taken to be \\(1/\\sqrt{I_{i}}\\). The fit is performed twice, with the second fit scaled to enforce \\(\\chi^{2}/\\mbox{ndf}=1\\). The typical resolution from the fit method is below \\(10\\,\\mu\\)m. In more detail, an analysis of about 2580 position measurements showed that 95% of the errors were below \\(15\\,\\mu\\)m, with a mean of \\(6.5\\,\\mu\\)m, with the remaining 5% of errors extending up to \\(40\\,\\mu\\)m. The gaussian fit method is the one used for all the results described below. Fig. 6 shows the beam-profile signals and gaussian fits along one beam line. The attenuation of the laser intensity and beam broadening are evident as the number of sensors traversed increases. The attenuation is roughly consistent with the 80% transmission found in the test system.
The mean positions may be plotted in sensor local coordinates or in ZEUS coordinates. The geometrical relationship between them is shown Fig. 7 and the transformation from the local system, \\(\\mathbf{r}_{local}\\), to the common ZEUS system, \\(\\mathbf{r}_{ZEUS}\\) may be written as
\\[\\mathbf{r}_{ZEUS}=\\mathcal{R}\\cdot\\mathbf{r}_{local}+\\mathcal{T}\\]
where \\(\\mathcal{R}\\) is a \\(2\\times 2\\) rotation matrix and \\(\\mathcal{T}\\) a 2-D translation. The transformations also take into account the standard or reversed orientation of the sensors.
During the period January to August 2004, laser data were collected regularly at the end of each HERA fill for normal e-p interactions, giving a total of 200 runs. The first attempt at analysis compared the mean sensor positions of a given laser beam relative to a reference run - usually the first run of the period under study. The reason for plotting positions relative to a reference run is to allow the data from different beams to be plotted using a common scale. An example of the data for the whole period is shown in Fig. 8.
The figure shows the relative mean position of the \\(y\\), anode, signal along laser beam-3 in each of the sensor planes, with plane 0 (RCTD) at the top and plane 7 (FCTD) at the bottom. A number of points may be made:
* the size of deviations tends to increase with distance along the laser beam;
* there are some quite large deviations in planes 5, 6, 7, up to 100 \\(\\mu\\)m or so;
* there are clear correlations between the larger deviations in different planes.
Although not shown, the local \\(x\\)-coordinate (cathode) signals show similar features but with smaller fluctuations. The difficult question to answer from this type of plot is whether this is evidence for movement of the MVD support tube or simply instabilities of the lasers and noise in the sensors. The data shown in the figure were collected from laser runs separated by quite long time intervals - hours or even days between consecutive runs. In between laser runs, the lasers were switched off. The lasers were switched on for a short while before an alignment data run, but the exigencies of physics data-taking left insufficient time to ensure that the system had stabilised.
For these reasons an alternative procedure was developed for the analysis of the laser beam position data. The idea, to define each beam line as an independent 'straightness monitor', is shown schematically in Fig. 9. For a given laser beam and local coordinate the mean positions in sensor planes 0 and 7, attached to the CTD, are used to define the reference straight line. The expected position of the beam at a sensor plane, \\(i\\), within the MVD, is then given in local sensor coordinates by:
\\[x_{i}=x_{0}\\frac{z_{i}-z_{7}}{z_{0}-z_{7}}+x_{7}\\frac{z_{0}-z_{i}}{z_{0}-z_{7}}, \\hskip 14.226378pty_{i}=y_{0}\\frac{z_{i}-z_{7}}{z_{0}-z_{7}}+y_{7}\\frac{z_{0}-z_ {i}}{z_{0}-z_{7}}.\\]
The _residuals_ in the two coordinate directions are calculated as the differences between the expected positions and the corresponding measured mean positions, \\((x_{i}-\\bar{x}_{i}^{meas.}),\\ (y_{i}-\\bar{y}_{i}^{meas.})\\). Fig. 10 shows the same data as displayed in Fig. 8, but now the residuals are plotted relative to the line defined by planes 0 and 7 (hence the absence of deviations at these positions). The residuals are plotted relative to those from a reference run, chosen to be the same as before. Comparing Figs 10 and 8, one sees that the fluctuations are smaller, particularly in the planes furthest from the optical fibre laser heads. Whether the remaining variations reflect genuine movement or further instabilities of the lasers cannot be decided from these plots alone; more information is clearly needed.
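The straightness-monitor construction is a linear interpolation between the two CTD planes. A sketch, assuming the plane \\(z\\)-positions and measured mean positions for one beam and one coordinate are supplied as arrays ordered from plane 0 to plane 7:

```python
import numpy as np

def straightness_residuals(z, meas):
    """Residuals (expected - measured) w.r.t. the line through planes 0 and 7."""
    z, meas = np.asarray(z, float), np.asarray(meas, float)
    expected = (meas[0] * (z - z[-1]) + meas[-1] * (z[0] - z)) / (z[0] - z[-1])
    return expected - meas       # zero by construction at the end planes
```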
There is quite a variation in the 'noise' level of individual laser beams; the fluctuations are caused by pointing instability of the source and by beam deflections from thermal gradients. Beam-3, shown in Figs 8 and 10, is relatively quiet. Allowing for this variation, similar features to those described above are seen for the other beams in the system. Although there may be evidence for some movement, there is no evidence for any permanent 'step changes' in the position of the MVD support tube over the nearly nine months of data considered in this section. If the residuals for a given beam and sensor are plotted for the whole period, the resulting distributions are reasonably gaussian, with values of \\(\\sigma\\) around \\(10-20\\,\\mu\\)m for a 'quiet' laser beam and around \\(40-50\\,\\mu\\)m for a 'noisy' beam.
## 6 Correlation studies
As discussed in the preceding section, more information is needed to establish whether the variations in the mean position (residual) of a sensor are caused by motion. The first attempts to provide precision alignment constants for the MVD tracking sensors used events with through-going cosmic ray tracks only. There were two sources for the data sets: the first from dedicated cosmic-ray runs with a special trigger taken when the HERA collider was not operational, and the second from cosmic-ray events selected from the normal e-p interaction data stream with HERA operational. It was expected that the two types of data would give the same results (provided that the data were collected during roughly the same period of time and that no changes were made to the detector configuration). They did not, and no 'trivial' explanation could be found for the differences in alignment constants, which were of magnitude \\(50-100\\,\\mu\\)m. It was eventually realised6 that there was a difference in the environmental conditions: whether the HERA collider magnets were powered or not. As described in Section 2, the final focusing GG magnet in the HERA-II configuration reaches well within the detector and quite close to the central tracking system. In addition, the MVD readout and services cables are tightly wrapped around it. The magnet is supported within the detector by a strap from above, which allows some movement; indeed, position sensors attached to the magnet showed that it does move slightly.
Footnote 6: We thank Drs R. Carlin and U. Koetz for making this suggestion.
Two further steps were required before this idea could be tested quantitatively: the first was to change the mode of operation of the laser system and the second was to get access to the temporal records of the GG magnet current. To reduce laser instabilities from switching the lasers on and off, it was decided to leave the lasers on permanently (while the detector was closed for data taking) and control laser runs by the mechanical shutter. With this change laser runs could be taken at much shorter intervals. To study possible external effects, the laser data were collected every 4 minutes for periods of hours, up to a maximum of nine days. These data sets gave ample opportunity for study of the effects of both regular and irregular operation of the ZEUS detector. The circulation of bunches in HERA provides a very accurate clock signal that is used for synchronisation both within the ZEUS experiment and between the experiment and HERA, so it was relatively straightforward to relate the state of the GG magnet current to the timing of laser runs.
Other sources of environmental change were also considered, such as the temperature of the MVD, the temperature of the beam pipe and the temperature of the CTD. The last two were quickly ruled out, but it was found that the temperature within the MVD could change by almost \\(10^{\\circ}\\)C depending on whether the onboard electronics were powered or not.
Fig. 11 shows an example of the results from the correlation studies. It shows the local-\\(x\\) (cathode) coordinate residuals for laser beam-2 at the five planes within the MVD (upper five plots) together with the GG magnet current and the MVD temperature in the lowest two plots. All are plotted against the common elapsed time synchronised by the HERA clock. Note that the residuals are the actual values, relative to the beam-2 laser line, at each measuring plane and not relative to a reference run. The temperature in the MVD is measured at a position near wheel-3. Fig. 12 shows a similar plot from the same runs for the local-\\(y\\) (anode) coordinate residuals.
Considering the magnet current first, Fig. 11 (local-\\(x\\)) shows that between times of 0 and 45000 s the mean residual values are correlated in time with the magnet current being zero or non-zero. There is also a tendency for the size of the movement to increase with increasing plane index. Fig. 12 (local-\\(y\\)) shows a similar, but more pronounced, correlated movement. This pattern of movement is also seen in the data from other laser beams and is consistent with the MVD support tube being tilted about a fulcrum near the rear CTD attachment. The assumption is that the MVD support tube moves when the GG magnet moves, via the MVD cables.
Regarding movement correlated with temperature, the evidence is seen very clearly in the local-\\(y\\) plots (Fig. 12) for times between 50000 and 65000 s. The changes in temperature occur when the MVD electronics are switched on or off. The pattern of movement differs in detail from that related to the GG magnet motion; the biggest effects are now seen in the three sensor planes nearest to the on-detector electronics, at planes 2, 3 and 5 (MVD barrel flanges and wheel-3). Fig. 11 - the local-\\(x\\) coordinate for beam-2 - shows a much less clear correlation. The concentration of movement at these three sensor planes is also seen in other beam line data.
Figs 13 and 14 show the same beam-2 data but now with the residuals transformed to the global ZEUS \\(x\\)- and \\(y\\)-coordinates, respectively. At this beam position the movement associated with the GG magnet is mainly along the ZEUS \\(y\\)-axis, whereas the MVD temperature movement is seen in both global \\(x\\) and \\(y\\) directions. Originally it was thought that one might be able to deduce the nature of the movement of the support tube by fitting the pattern of local movements to changes expected from a set of 'standard motions' - for example twists and sags. This might well have been unrealistic even with the full production system as designed, but it is impossible with the reduced number of beam lines and reliable sensors. However, the system is able to give quantitative information on the magnitude of local movements and to track such changes.
The residual plots indicate that the position of a sensor is stable while the external conditions are stable, and that the positions of stability are themselves reproducible. This has been investigated in more detail by averaging the residuals over periods of stability during long laser runs. The periods are defined by external changes, but regions of rapid change are excluded. Results from a typical long period of runs are shown in Fig. 15. Two long periods, corresponding to e-p data-taking when both the MVD and GG magnet are on, can be seen. In between is a period of beam injection and acceleration when the MVD is off throughout and the GG magnet current is varying, with two short periods when it is also off. This pattern is typical and shows that 'off' periods are shorter than those when everything is on.
The detailed results depend on the noise quality of the lasers and the sensors. For beams 2, 3 and 4 most residual means have RMS values better than 10 \\(\\mu\\)m; beam 1 is noisier and the local-\\(x\\) residuals at plane-6 appear to be unstable.
The positions for normal running, with both the MVD and GG magnet powered, and those with both off, are of particular interest. The results for local coordinates and beam 2 are shown in Fig. 16. Each point shows the mean and standard deviation (shown as a vertical error bar, often smaller than the symbol size) for a data set corresponding to a single period of stability. The means for the MVD and GG magnet both on are shown as triangles and when both are off as circles. The data show clearly that the structure moves between two well-defined positions, 'ON' and 'OFF'. The movement is mainly in the local-\\(y\\) coordinate and can be as much as \\(120\\,\\mu\\)m.
Fig. 17 shows the same results but now plotted for \\(x\\)- and \\(y\\)-residuals in the ZEUS coordinate frame. At this beam position, the size of the shift between ON and OFF positions is largest in the \\(y\\) direction but there is also a smaller but non-zero shift along the \\(x\\) direction. Two other positions are shown as single points on these plots: the 8\\({}^{\\rm th}\\) point, shown as a square, is for the MVD on and GG magnet off and the last point of all (inverted triangle) is for the MVD off and GG magnet on.
To give a more quantitative estimate of the movements, the beam-2 means from Figs 16 and 17 have been averaged over the 16 ON periods and the 14 OFF periods shown. The difference in the mean values for a given beam and sensor position shows the size of the movement and the standard deviations give a good idea of how precisely the MVD support frame is able to return to a particular position. These results are collected in Table 3. Movements of over \\(100\\,\\mu\\)m in the local frame are seen, with the largest standard deviations on the difference in position around \\(10\\,\\mu\\)m. A similar analysis has been performed for beam-4 residuals and the results are shown in Table 4. The movements seen at the position of beam-4 are smaller than those at beam-2, but there is clear evidence for two stable positions with standard deviations on the differences of position again at most \\(10\\,\\mu\\)m.
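Such period averages and their differences can be formed in a few lines. The following sketch assumes the per-period residual means for one beam, plane and coordinate are collected into arrays; that the DIFF uncertainties of Tables 3 and 4 are the ON and OFF standard deviations added in quadrature is our reading of the tables.

```python
import numpy as np

def on_off_difference(means_on, means_off):
    """Movement between the ON and OFF positions and its uncertainty."""
    on, off = np.asarray(means_on, float), np.asarray(means_off, float)
    diff = on.mean() - off.mean()
    sigma = np.hypot(on.std(ddof=1), off.std(ddof=1))  # quadrature sum
    return diff, sigma
```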
## 7 Summary
This paper has described the laser alignment system for the ZEUS microvertex detector and given a summary of its performance. The infra-red lasers and semi-transparent sensors provide five 'straightness monitors' that can detect movement of the MVD support structure at the level of \\(100\\,\\mu\\)m or better. The system has been working reliably since it was installed and commissioned in 2001. The one less-than-satisfactory feature is the way that the optical fibre heads are mounted on the central tracking detector. Due to constraints of time, space and money, a rather crude system had to be employed, which made it difficult to adjust the pointing of the laser beams even at the time of installation, and impossible thereafter.
The laser alignment information is used through time-series analysis of the residuals of the laser-beam position with respect to the straight lines defined by the positions of each beam at the sensors mounted on the front and rear of the central tracking detector.
From the studies reported in this paper, the following conclusions may be drawn:
* The MVD support structure is very stable and there is no indication of any large long-term movement or step change (over the period 2003 to 2007);
* When external conditions vary, particularly the GG magnet on or off and the MVD on-detector electronics being powered or not, the MVD support structure shows movements locally as large as \\(100\\,\\mu\\)m;
* Once the conditions return to the previous state the MVD support tube returns to its previous configuration, to within \\(10\\,\\mu\\)m.
Finally, it is clear that track data for precision alignment should be collected under the operational conditions of regular e-p interaction data taking.
## Acknowledgements
We thank H. Kroha and the Munich group for help and advice during the early stages of this project. We thank T. Handford for his invaluable help throughout. We thank J. Hill for outstanding effort during the design, construction and installation of the laser system. We thank C. Band, D. Smith and the Oxford Photo-Fabrication Unit for their help with the design and manufacture of the flat readout cables. We thank S. Boogert, and M. Rigby for their work on the prototype system and L. Hung for her help in analysing early production data. Finally we thank C. Youngman for help and advice on how to access HERA and ZEUS slow-control data.
Figure 1: Layout of the ZEUS MVD along beam axis. The barrel part covers the interaction region. Four wheels cover the forward direction of the proton beam.
Figure 2: Cross sections of the MVD. Left and right are barrel and wheel, respectively. The coordinate system is that of the ZEUS experiment, with the \\(z\\)-axis along the proton beam direction, the \\(x\\)-axis pointing towards the centre of the HERA ring and the \\(y\\)-axis vertical.
Figure 3: Schematic diagram of one laser alignment beam and sensors. Forward and Rear refer to the orientation of the tracking detectors, with forward in the direction of the HERA proton beam. The sensor numbering is also shown.
Figure 4: Top figures: the standard and reversed sensor orientations. Bottom: photo of a standard sensor showing the bonding of the flat readout cables.
Figure 5: The positions and numbering of the five laser beams around the circumference of the MVD support tube. View is from the forward end of the CTD in the ZEUS coordinate system.
Figure 6: Examples of the beam-profile signals from the sensors along one beam line, together with the gaussian fits. The strip currents are in units of ADC counts and are plotted as functions of strip number. RCTD corresponds to sensor plane 0 and FCTD to sensor plane 7 – see Table 1 for details.
Figure 7: The orientation of sensor local coordinates with respect to the ZEUS coordinate system, viewed from the rear CTD end. The labels S and R refer to standard and reversed sensors, respectively.
Figure 8: Relative positions of laser beam-3, in local coordinates, at the seven planes starting with plane-0 at the top. The \\(y\\) (anode) signals are shown as functions of the run number. At the time of these measurements the laser at beam-3 was that with the highest power.
Figure 9: The procedure used to define the reference line for a given laser beam using the CTD planes 0 and 7 and residual (offset) between the expected position and measured mean position in an interior MVD sensor plane.
Figure 10: Residuals of laser beam-3, in local coordinates, with respect to the straightness monitor defined by the laser-beam positions in planes 0 and 7 (R and FCTD). Other details the same as in Fig. 8.
Figure 11: Local-\\(x\\) (cathode) residuals for beam-2, planes 1, 2, 3, 5 & 6 with the GG magnet current and the MVD temperature in the bottom two plots, all as a function of elapsed time.
Figure 12: Local-\\(y\\) (anode) residuals for beam-2, planes 1, 2, 3, 5 & 6 with the GG magnet current and the MVD temperature in the bottom two plots, all as a function of elapsed time.
Figure 13: ZEUS frame \\(x\\)-residuals for beam-2, planes 1, 2, 3, 5 & 6 with the GG magnet current and the MVD temperature in the bottom two plots, all as a function of elapsed time.
Figure 14: ZEUS frame \\(y\\)-residuals for beam-2, planes 1, 2, 3, 5 & 6 with the GG magnet current and the MVD temperature in the bottom two plots, all as a function of elapsed time.
Figure 15: Residuals for local-\\(y\\) (dark grey left-hand scales) and local-\\(x\\) (light grey right-hand scales) are shown for two periods of e-p data-taking with MVD and GG magnet both on and a period in between when the MVD was off with the GG current being varied with two short sub-periods when it was also off.
Figure 16: Mean values of residuals for local-\\(y\\) (left-hand plots) and local-\\(x\\) (right-hand plots) for beam-2 during periods of stability. The symbols indicate different conditions: triangle MVD and GG magnet both on; circle MVD and GG both off; square MVD on, GG off (8\\({}^{\\rm th}\\) data set only); inverted triangle MVD off, GG on (last data set in the sequence). The size of the standard deviation is shown by the vertical error bar.
Figure 17: Mean values of ZEUS global-\\(y\\) (left-hand plots) and global-\\(x\\) (right-hand plots) residuals for beam-2 during periods of stability. The symbols have the same meaning as for Fig. 16.
\\begin{table}
\\begin{tabular}{|c|c|l|c|l|} \\hline Index & Plane & Description & \\(z\\) (mm) & Comment \\\\ \\hline
0 & RCTD & Rear CTD end plate & \\(-1078.1\\) & reversed \\\\
1 & RMVD & Rear MVD end flange & \\(-1007.6\\) & reversed \\\\
2 & RBarrel & Rear MVD barrel flange & \\(-360.4\\) & standard \\\\
3 & FBarrel & Forward MVD barrel flange & \\(302.6\\) & standard \\\\
4 & Wheel-1 & & \\(437.1\\) & missing \\\\
5 & Wheel-3 & MVD wheel-3 & \\(717.1\\) & standard \\\\
6 & FMVD & Forward MVD end flange & \\(1104.6\\) & standard \\\\
7 & FCTD & Forward CTD end plate & \\(1179.1\\) & standard \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: The positions of the sensor planes along the \\(z\\)-coordinate axis (parallel to the beam line) in the ZEUS reference frame with the origin at the nominal electron–proton interaction point. The plane positions are also shown in Fig. 3.
\\begin{table}
\\begin{tabular}{|c|c|c|c|} \\hline Beam & Position & Power (mW) & Comment \\\\ \\hline
0 & \\(140.75^{\\circ}\\) & 2.9 & lost inside MVD barrel \\\\
1 & \\(95.75^{\\circ}\\) & 4.7 & noisy \\\\
2 & \\(28.25^{\\circ}\\) & 4.4 & good \\\\
3 & \\(320.75^{\\circ}\\) & 3.0 & power low \\\\
4 & \\(275.75^{\\circ}\\) & 3.7 & OK \\\\
5 & \\(208.25^{\\circ}\\) & - & not installed \\\\ \\hline \\end{tabular}
\\end{table}
Table 2: Laser beam positions, numbering and power. The details are from measurements made in April 2006. At this time some lasers were moved between beam lines, to get the best match between lasers and working sensors.
\\begin{table}
\\begin{tabular}{|c||c|c|c|c||c|c|c|c|} \\hline LOCAL & P2-\\(y\\) & P3-\\(y\\) & P5-\\(y\\) & P6-\\(y\\) & P2-\\(x\\) & P3-\\(x\\) & P5-\\(x\\) & P6-\\(x\\) \\\\ \\hline ON & -306.2 & -76.4 & -253.9 & 173.2 & 125.8 & 35.7 & 221.1 & 467.7 \\\\ & 3.9 & 3.9 & 4.1 & 7.4 & 2.5 & 5.3 & 8.8 & 5.9 \\\\ \\hline OFF & -379.2 & -195.8 & -378.0 & 99.9 & 132.1 & 51.8 & 264.4 & 469.3 \\\\ & 7.5 & 6.8 & 2.2 & 3.4 & 2.5 & 5.4 & 6.8 & 3.1 \\\\ \\hline DIFF & 73.0 & 119.4 & 124.1 & 73.3 & 6.3 & 16.1 & 43.3 & 1.7 \\\\ & 8.4 & 7.9 & 4.7 & 8.2 & 3.5 & 7.5 & 11.1 & 6.6 \\\\ \\hline \\end{tabular}
\\begin{tabular}{|c||c|c|c|c||c|c|c|c|} \\hline GLOBAL & P2-\\(y\\) & P3-\\(y\\) & P5-\\(y\\) & P6-\\(y\\) & P2-\\(x\\) & P3-\\(x\\) & P5-\\(x\\) & P6-\\(x\\) \\\\ \\hline ON & 329.3 & 84.2 & 328.3 & 68.8 & 34.
\\end{tabular}
\\end{table}
Table 3: Mean and standard deviation, in microns, for the average of ON and OFF residuals for beam-2 at planes 2, 3, 5 and 6 and their differences in the local and global coordinate systems.
\\begin{table}
\\begin{tabular}{|c||c|c|c|c||c|c|c|c|} \\hline LOCAL & P2-\\(y\\) & P3-\\(y\\) & P5-\\(y\\) & P6-\\(y\\) & P2-\\(x\\) & P3-\\(x\\) & P5-\\(x\\) & P6-\\(x\\) \\\\ \\hline ON & 271.0 & 453.7 & -153.8 & 90.7 & -232.0 & -28.0 & -85.3 & -14.6 \\\\ & 5.2 & 6.6 & 3.6 & 1.5 & 2.0 & 2.7 & 1.9 & 7.4 \\\\ \\hline OFF & 240.2 & 421.0 & -185.0 & 107.1 & -246.7 & -54.6 & -109.4 & -46.3 \\\\ & 7.0 & 8.8 & 7.5 & 3.0 & 4.4 & 5.4 & 4.7 & 8.3 \\\\ \\hline DIFF & 30.8 & 32.7 & 31.2 & 16.4 & 14.7 & 26.6 & 24.1 & 31.7 \\\\ & 8.7 & 11.0 & 8.3 & 3.4 & 4.9 & 6.1 & 5.0 & 11.1 \\\\ \\hline \\end{tabular}
\\begin{tabular}{|c||c|c|c|c||c|c|c|c|} \\hline GLOBAL & P2-\\(y\\) & P3-\\(y\\) & P5-\\(y\\) & P6-\\(y\\) & P2-\\(x\\) & P3-\\(x\\) & P5-\\(x\\) & P6-\\(x\\) \\\\ \\hline ON & 292.9 & 454.2 & -144.4 & 91.8 &
\\end{tabular}
\\end{table}
Table 4: Mean and standard deviation, in microns, for the average of ON and OFF residuals for beam-4 at planes 2, 3, 5 and 6 and their differences in the local and global coordinate systems.
The laser alignment system of the ZEUS microvertex detector is described. The detector was installed in 2001 as part of an upgrade programme in preparation for the second phase of electron-proton physics at the HERA collider. The alignment system monitors the position of the vertex detector support structure with respect to the central tracking detector using semi-transparent amorphous-silicon sensors and diode lasers. The system is fully integrated into the general environmental monitoring of the ZEUS detector and data have been collected over a period of 5 years. The primary aim of defining periods of stability for track-based alignment has been achieved and the system is able to measure movements of the support structure to a precision around 10 \\(\\mu\\)m.
keywords: Alignment, vertex detector, laser, semi-transparent sensors.
# Phase Transition from QMC Hyperonic Matter to Deconfined Quark Matter
J. D. Carroll
[email protected] Centre for the Subatomic Structure of Matter (CSSM), Department of Physics, University of Adelaide, SA 5005, Australia
D. B. Leinweber
Centre for the Subatomic Structure of Matter (CSSM), Department of Physics, University of Adelaide, SA 5005, Australia
A. G. Williams
Centre for the Subatomic Structure of Matter (CSSM), Department of Physics, University of Adelaide, SA 5005, Australia
November 3, 2021
## I Introduction
The use of hadronic models to describe high density matter enables us to investigate both the microscopic world of atomic nuclei and the macroscopic world of compact stellar objects, encompassing an enormous range of scales. The results of these investigations provide deep fundamental insight into the way that the world around us is constructed.
Experimental data from both extremes of scale aid in constraining such models, from the saturation properties of nuclear matter to the observed properties of neutron stars [1; 2; 3]. The literature [4; 5; 6; 7; 8; 9] provides a plethora of models for the EoS of hadronic matter, at least some of which have been successfully applied to calculate the properties of finite nuclei. There are also important constraints from data involving heavy-ion collisions [10; 11]. Many of these EoS have also been applied to neutron star features. However, the amount of data available for neutron stars (or compact stellar objects) is very limited, with only a single observation providing both a mass and a radius simultaneously [12], and even that has recently been disputed [13].
With such a lack of constraining data, our focus shifts to finding models which better reflect the physics expected to be important under the conditions we are investigating. A prime example is that at the densities considered interesting for this investigation (1-10 times nuclear density, \\(\\rho_{0}=0.16\\) fm\\({}^{-3}\\)) it is possible that either hyperonic matter (in which strangeness-carrying baryons become energetically favourable as the Fermi sea fills), quark matter (in which it becomes energetically favourable for the quarks inside the baryons to become deconfined), or a mixed phase of these is present, rather than the more traditional treatment of nucleons alone.
We construct a model of high density matter which is globally charge neutral, color neutral, rotationally symmetric, and in a state that is energetically favourable. For this purpose we consider hadronic matter modelled by the Quark-Meson Coupling (QMC) model [14; 15], which was recently improved through the self-consistent inclusion of color hyperfine interactions [16]. While this improvement had no significant effect on the binding of nucleons, it led to impressive results for finite hypernuclei [17]. We follow the method of Glendenning [18] to produce a mixed phase of hyperonic matter and deconfined quark matter under total mechanical stability, then a pure deconfined quark matter phase with relativistic non-interacting quarks.
We begin with a brief presentation of Relativistic Mean-Field Theory in Section II to establish a foundation with which to discuss the general formalism for the QMC model, including new additions, in Section III. Deconfined quark matter is discussed in Section IV. This is followed by Section V providing a summary of the requirements for andmethod to construct a phase transition from a hadronic phase to a mixed phase and from a mixed phase to a quark phase. Stellar solutions are calculated in Section VI and a summary of our results is presented in Section VII with conclusions in Section VIII.
## II Relativistic mean-field theory
We introduce the mean-field description of nuclear matter using the classic example of Quantum Hadrodynamics (QHD) [9; 19; 20]. Although the Quark-Meson Coupling model (QMC) has a fundamentally different starting point, namely the self-consistent modification of the structure of a hadron immersed in the nuclear medium [21; 22; 23], in practice the equations for nuclear matter involve only a few key differences. We summarize those in the next section. The original formulation of QHD included only nucleons interacting with scalar-isoscalar, \\(\\sigma\\), and vector-isoscalar, \\(\\omega\\), mesons. This was later expanded to include the vector-isovector, \\(\\rho\\), and subsequently the entire octet of baryons, \\(B\\in\\{p,n,\\Lambda,\\Sigma^{+},\\Sigma^{0},\\Sigma^{-},\\Xi^{0},\\Xi^{-}\\}\\), with global charge neutrality upheld via leptons, \\(\\ell\\in\\{e^{-},\\mu^{-}\\}\\).
The Lagrangian density for QHD is
\\[\\begin{split}\\mathcal{L}&=\\sum_{k}\\bar{\\psi}_{k}\\left[\\gamma_{\\mu}(i\\partial^{\\mu}-g_{\\omega k}\\omega^{\\mu}-g_{\\rho}\\vec{\\tau}_{(k)}\\cdot\\vec{\\rho}^{\\,\\mu})-(M_{k}-g_{\\sigma k}\\sigma)\\right]\\psi_{k}\\\\ &\\quad+\\frac{1}{2}(\\partial_{\\mu}\\sigma\\partial^{\\mu}\\sigma-m_{\\sigma}^{2}\\sigma^{2})-\\frac{1}{4}F_{\\mu\\nu}F^{\\mu\\nu}-\\frac{1}{4}R_{\\mu\\nu}^{a}R_{a}^{\\mu\\nu}\\\\ &\\quad+\\frac{1}{2}m_{\\omega}^{2}\\omega_{\\mu}\\omega^{\\mu}+\\frac{1}{2}m_{\\rho}^{2}\\rho_{\\mu}^{a}\\rho_{a}^{\\mu}+\\bar{\\psi}_{\\ell}\\left[\\gamma_{\\mu}i\\partial^{\\mu}-m_{\\ell}\\right]\\psi_{\\ell}+\\delta\\mathcal{L},\\end{split}\\tag{1}\\]
where the index \\(k\\in\\{N,\\Lambda,\\Sigma,\\Xi\\}\\) represents each isospin group of the baryon states, and \\(\\psi_{k}\\) corresponds to the Dirac spinors for these
\\[\\psi_{N}=\\begin{pmatrix}\\psi_{p}\\\\ \\psi_{n}\\end{pmatrix},\\quad\\psi_{\\Lambda}=\\begin{pmatrix}\\psi_{\\Lambda}\\end{pmatrix},\\quad\\psi_{\\Sigma}=\\begin{pmatrix}\\psi_{\\Sigma^{+}}\\\\ \\psi_{\\Sigma^{0}}\\\\ \\psi_{\\Sigma^{-}}\\end{pmatrix},\\quad\\psi_{\\Xi}=\\begin{pmatrix}\\psi_{\\Xi^{0}}\\\\ \\psi_{\\Xi^{-}}\\end{pmatrix}. \\tag{2}\\]
The vector field tensors are
\\[F^{\\mu\\nu}=\\partial^{\\mu}\\omega^{\\nu}-\\partial^{\\nu}\\omega^{\\mu},\\quad R_{a}^{\\mu\\nu}=\\partial^{\\mu}\\rho_{a}^{\\nu}-\\partial^{\\nu}\\rho_{a}^{\\mu}-g_{\\rho}\\epsilon^{abc}\\rho_{b}^{\\mu}\\rho_{c}^{\\nu}. \\tag{3}\\]
The third components of the isospin matrices are
\\[\\tau_{(N)3}=\\tau_{(\\Xi)3}=\\frac{1}{2}\\left[\\begin{array}{cc}1&0\\\\ 0&-1\\end{array}\\right],\\quad\\tau_{(\\Lambda)3}=\\left[\\,0\\,\\right],\\quad\\tau_{(\\Sigma)3}=\\left[\\begin{array}{ccc}1&0&0\\\\ 0&0&0\\\\ 0&0&-1\\end{array}\\right], \\tag{4}\\]
\\(\\psi_{\\ell}\\) is a spinor for the lepton states, and \\(\\delta\\mathcal{L}\\) are renormalisation terms. We do not include pions here as they provide no contribution to the mean-field, because the ground state of nuclear matter is parity-even. We have neglected nonlinear meson terms in this description for comparison purposes, though it has been shown that the inclusion of nonlinear scalar meson terms produces a framework consistent with the QMC model without the added hyperfine interaction [24]. The values of the baryon and meson masses in vacuum are summarized in Table 1.
Assuming that the baryon density is sufficiently large, we use a Mean-Field Approximation (MFA) with physical parameters (breaking charge symmetry) in which the meson fields are replaced by their classical vacuum expectation values. With this condition, the renormalisation terms can be neglected.
By enforcing rotational symmetry and working in the frame where the matter as a whole is at rest, we set all of the 3-vector components of the vector meson fields to zero, leaving only the temporal components. Furthermore, by enforcing isospin symmetry we remove all charged meson states. Consequently, because the mean-fields are constant, all meson derivative terms vanish, and thus so do the vector field tensors. The only non-zero components of the vector meson mean fields are then the time components, \\(\\langle\\omega^{\\mu}\\rangle=\\langle\\omega\\rangle\\delta^{\\mu 0}\\) and \\(\\langle\\rho^{\\mu}\\rangle=\\langle\\rho\\rangle\\delta^{\\mu 0}\\). Similarly, only the third component of the \\(\\rho\\) meson mean field in iso-space is non-zero, corresponding to the uncharged \\(\\rho\\) meson.
\\begin{table}
\\begin{tabular}{c c c c c c c c} \\(M_{p}\\) & \\(M_{n}\\) & \\(M_{\\Lambda}\\) & \\(M_{\\Sigma^{-}}\\) & \\(M_{\\Sigma^{0}}\\) & \\(M_{\\Sigma^{+}}\\) & \\(M_{\\Xi^{-}}\\) & \\(M_{\\Xi^{0}}\\) \\\\
938.27 & 939.57 & 1115.68 & 1197.45 & 1192.64 & 1189.37 & 1321.31 & 1314.83 \\\\ \\hline & \\(m_{\\sigma}\\) & & & \\(m_{\\omega}\\) & & & \\(m_{\\rho}\\) & \\\\ & 550.0 & & & 782.6 & & & 775.8 & \\\\ \\end{tabular}
\\end{table}
Table 1: The vacuum (physical) baryon and meson masses (in units of MeV) as used here [25].
The couplings of the mesons to the baryons are found via SU(6) flavor-symmetry [26]. This produces the following relations for the \\(\\sigma\\) and \\(\\omega\\) couplings to each isospin group (and hence each baryon \\(B\\) in that isospin group)
\\[\\frac{1}{3}\\;g_{\\sigma N}=\\frac{1}{2}\\;g_{\\sigma\\Lambda}=\\frac{1}{2}\\;g_{ \\sigma\\Sigma}=g_{\\sigma\\Xi},\\quad\\frac{1}{3}\\;g_{\\omega N}=\\frac{1}{2}\\;g_{ \\omega\\Lambda}=\\frac{1}{2}\\;g_{\\omega\\Sigma}=g_{\\omega\\Xi}. \\tag{5}\\]
Using the formalism as above with isospin expressed explicitly in the Lagrangian density, the couplings of the \\(\\rho\\) meson to the octet baryons are unified, thus by specifying \\(g_{\\sigma N}\\), \\(g_{\\omega N}\\), and \\(g_{\\rho}\\) we are therefore able to determine the couplings to the remaining baryons.
By evaluating the equations of motion from the Euler-Lagrange equations
\\[\\frac{\\partial{\\cal L}}{\\partial\\phi_{i}}-\\partial_{\\mu}\\frac{\\partial{\\cal L }}{\\partial(\\partial_{\\mu}\\phi_{i})}=0, \\tag{6}\\]
we find the mean-field equations for each of the mesons, as well as the baryons. The equations for the meson fields are
\\[\\langle\\sigma\\rangle =\\sum_{B}\\frac{g_{\\sigma B}}{m_{\\sigma}^{2}}\\langle\\bar{\\psi}_{B }\\psi_{B}\\rangle, \\tag{7}\\] \\[\\langle\\omega\\rangle =\\sum_{B}\\frac{g_{\\omega B}}{m_{\\omega}^{2}}\\langle\\bar{\\psi}_{B }\\gamma^{0}\\psi_{B}\\rangle=\\sum_{B}\\frac{g_{\\omega B}}{m_{\\omega}^{2}}\\langle \\psi_{B}^{\\dagger}\\psi_{B}\\rangle,\\] (8) \\[\\langle\\rho\\rangle =\\sum_{k}\\frac{g_{\\rho}}{m_{\\rho}^{2}}\\langle\\bar{\\psi}_{k} \\gamma^{0}\\tau_{(k)3}\\psi_{k}\\rangle=\\sum_{k}\\frac{g_{\\rho}}{m_{\\rho}^{2}} \\langle\\psi_{k}^{\\dagger}\\tau_{(k)3}\\psi_{k}\\rangle=\\sum_{B}\\frac{g_{\\rho}}{m _{\\rho}^{2}}\\langle\\psi_{B}^{\\dagger}I_{3B}\\psi_{B}\\rangle, \\tag{9}\\]
where the sum over \\(B\\) corresponds to the sum over the octet baryon states, and the sum over \\(k\\) corresponds to the sum over isospin groups. \\(I_{3B}\\) is the third component of isospin of baryon \\(B\\), as found in the diagonal elements of \\(\\tau_{(k)3}\\). \\(\\langle\\omega\\rangle\\), \\(\\langle\\rho\\rangle\\), and \\(\\langle\\sigma\\rangle\\) are proportional to the conserved baryon density, isospin density and scalar density respectively, where the scalar density is calculated self-consistently.
The Euler-Lagrange equations also provide a Dirac equation for the baryons
\\[\\sum_{B}\\left[i\\not{\\partial}-g_{\\omega B}\\gamma^{0}\\langle\\omega\\rangle-g_{\\rho}\\gamma^{0}I_{3B}\\langle\\rho\\rangle-M_{B}+g_{\\sigma B}\\langle\\sigma\\rangle\\right]\\psi_{B}=0. \\tag{10}\\]
At this point, we can define the baryon effective mass as
\\[M_{B}^{*}=M_{B}-g_{\\sigma B}\\langle\\sigma\\rangle, \\tag{11}\\]
and the baryon chemical potential (also known as the Fermi energy, the energy associated with the Dirac equation) as
\\[\\mu_{B}=\\epsilon_{F_{B}}=\\sqrt{k_{F_{B}}^{2}+(M_{B}^{*})^{2}}+g_{\\omega B} \\langle\\omega\\rangle+g_{\\rho}I_{3B}\\langle\\rho\\rangle. \\tag{12}\\]
The chemical potentials for the leptons are found via
\\[\\mu_{\\ell}=\\sqrt{k_{F_{\\ell}}^{2}+m_{\\ell}^{2}}. \\tag{13}\\]
The energy density, \\({\\cal E}\\), and pressure, \\(P\\), for the EoS can be obtained using the relations for the energy-momentum tensor (where \\(u^{\\mu}\\) is the 4-velocity)
\\[\\langle T^{\\mu\\nu}\\rangle=\\left({\\cal E}+P\\right)u^{\\mu}u^{\\nu}+P{\\rm g}^{\\mu\\nu},\\quad\\Rightarrow\\quad P=\\frac{1}{3}\\langle T^{ii}\\rangle,\\quad{\\cal E}=\\langle T^{00}\\rangle, \\tag{14}\\]
since \\(u^{i}=0\\) and \\(u_{0}u^{0}=-1\\), where \\({\\rm g}^{\\mu\\nu}\\) here is the inverse metric tensor having a negative temporal component, and \\(T^{\\mu\\nu}\\) is the energy-momentum tensor. In accordance with Noether's Theorem, the relation between the energy-momentum tensor and the Lagrangian density is
\\[T^{\\mu\\nu}=-{\\rm g}^{\\mu\\nu}{\\cal L}+\\partial^{\\mu}\\psi\\frac{\\partial{\\cal L}}{\\partial(\\partial_{\\nu}\\psi)}, \\tag{15}\\]
and we find the Hartree-level energy density and pressure for the system as a sum of contributions from baryons, \\(B\\); leptons, \\(\\ell\\); and mesons, \\(m\\), to be
\\[{\\cal E}=\\sum_{j=B,\\ell,m}{\\cal E}_{j}=\\sum_{i=B,\\ell}\\frac{(2J_{i}+1)}{(2\\pi)^{3}}\\int\\theta(k_{F_{i}}-|\\vec{k}|)\\sqrt{k^{2}+(M_{i}^{*})^{2}}\\;d^{3}k+\\sum_{\\alpha=\\sigma,\\omega,\\rho}\\frac{1}{2}m_{\\alpha}^{2}\\langle\\alpha\\rangle^{2}, \\tag{16}\\]
\\[P=\\sum_{j=B,\\ell,m}P_{j}=\\sum_{i=B,\\ell}\\frac{(2J_{i}+1)}{3(2\\pi)^{3}}\\int\\frac{k^{2}\\;\\theta(k_{F_{i}}-|\\vec{k}|)}{\\sqrt{k^{2}+(M_{i}^{*})^{2}}}\\;d^{3}k+\\sum_{\\alpha=\\omega,\\rho}\\frac{1}{2}m_{\\alpha}^{2}\\langle\\alpha\\rangle^{2}-\\frac{1}{2}m_{\\sigma}^{2}\\langle\\sigma\\rangle^{2}, \\tag{17}\\]
where \\(J_{i}\\) is the spin of particle \\(i\\) (\\(J_{i}=\\frac{1}{2}\\;\\forall\\;i\\in\\{B,\\ell\\}\\)) which in this case accounts for the availability of both up and down spin-states. \\(\\theta(x)\\) is the Heaviside Step Function. Note that the pressure arising from the vector mesons is positive, while it is negative for the scalar meson.
The total baryon density, \\(\\rho\\), can be calculated via
\\[\\rho=\\sum_{B}\\rho_{B}=\\sum_{B}\\frac{(2J_{B}+1)}{(2\\pi)^{3}}\\int\\theta(k_{F_{B} }-|\\vec{k}|)\\;d^{3}k, \\tag{18}\\]
where in symmetric matter, the Fermi momenta are related via \\(k_{F}=k_{F_{n}}=k_{F_{p}}\\), and the binding energy per baryon, \\(E\\), is determined via
\\[E=\\left[\\frac{1}{\\rho}\\left({\\cal E}-\\sum_{B}M_{B}\\rho_{B}\\right)\\right]. \\tag{19}\\]
The couplings \\(g_{\\sigma N}\\) and \\(g_{\\omega N}\\) are determined such that symmetric nuclear matter (in which \\(\\rho_{p}=\\rho_{n}=0.5\\rho\\)) saturates with the appropriate minimum in the binding energy per baryon of \\(E_{0}=-15.86\\;\\mbox{MeV}\\) at a nuclear density of \\(\\rho_{0}=0.16\\;\\mbox{fm}^{-3}\\). The couplings for QHD which provide a fit to saturated nuclear matter are shown in Table 2.
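As an illustration of this fitting procedure, the following Python sketch solves Eq. (7) together with Eq. (11) for \\(\\langle\\sigma\\rangle\\) in symmetric, nucleon-only matter, using the couplings of Table 2; the damped fixed-point update, the iteration count and the isospin-averaged nucleon mass are simplifying assumptions of ours.

```python
import numpy as np
from scipy.integrate import quad

hc = 197.327                 # MeV fm
MN, ms = 938.92, 550.0       # isospin-averaged nucleon mass, sigma mass (MeV)
gs = 10.644                  # g_sigmaN from Table 2

def solve_sigma(rho):
    """Self-consistent <sigma> (MeV) in symmetric matter at density rho (fm^-3)."""
    kF = hc * (3.0 * np.pi ** 2 * rho / 2.0) ** (1.0 / 3.0)   # per species
    sigma = 10.0
    for _ in range(300):
        Mstar = MN - gs * sigma                                # Eq. (11)
        rho_s = (2.0 / np.pi ** 2) * quad(                     # scalar density
            lambda k: k ** 2 * Mstar / np.sqrt(k ** 2 + Mstar ** 2), 0.0, kF)[0]
        sigma = 0.5 * (sigma + gs * rho_s / ms ** 2)           # damped Eq. (7)
    return sigma
```

With \\(\\langle\\sigma\\rangle\\) in hand (and \\(\\langle\\omega\\rangle\\) fixed directly by the baryon density), Eqs. (16) and (19) give the binding energy per baryon, and the couplings are tuned until its minimum sits at \\((\\rho_{0},E_{0})\\).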
The coupling \\(g_{\\rho}\\) is fixed such that the nucleon symmetry energy, given by
\\[a_{\\rm sym}=\\frac{g_{\\rho}^{2}}{12\\pi^{2}m_{\\rho}^{2}}k_{F}^{3}+\\frac{1}{12} \\frac{k_{F}^{2}}{\\sqrt{k_{F}^{2}+(M_{p}^{*})^{2}}}+\\frac{1}{12}\\frac{k_{F}^{2} }{\\sqrt{k_{F}^{2}+(M_{n}^{*})^{2}}}\\,, \\tag{20}\\]
is reproduced at saturation as \\((a_{\\rm sym})_{0}=32.5\\;\\mbox{MeV}\\).
The chemical potential for any particle, \\(\\mu_{i}\\), can be related to two independent chemical potentials -- we choose that of the neutron (\\(\\mu_{n}\\)) and the electron (\\(\\mu_{e}\\)) -- and thus we use a general relation
\\[\\mu_{i}=B_{i}\\mu_{n}-Q_{i}\\mu_{e}\\ ;\\ \\ \\ \\ i\\in\\{p,n,\\Lambda,\\Sigma^{+},\\Sigma^{0}, \\Sigma^{-},\\Xi^{0},\\Xi^{-},\\ell\\}, \\tag{21}\\]
where \\(B_{i}\\) and \\(Q_{i}\\) are the baryon (unitless) and electric (in units of the proton charge) charges respectively. For example, the proton has \\(B_{p}=+1\\) and \\(Q_{p}=+1\\), so it must satisfy \\(\\mu_{p}=\\mu_{n}-\\mu_{e}\\) which is familiar as \\(\\beta\\)-equilibrium. Since neutrinos are able to escape the star, we consider \\(\\mu_{\
u}=0\\). Leptons have \\(B_{\\ell}=0\\), and all baryons have \\(B_{B}=+1\\).
The relations between the chemical potentials are therefore derived to be
\\[\\left.\\begin{array}{rcl}\\mu_{\\Lambda}=\\mu_{\\Sigma^{0}}=\\mu_{\\Xi^{0}}&=\\mu_ {n},\\\\ \\mu_{\\Sigma^{-}}=\\mu_{\\Xi^{-}}&=\\mu_{n}+\\mu_{e},\\\\ \\mu_{p}=\\mu_{\\Sigma^{+}}&=\\mu_{n}-\\mu_{e},\\\\ \\mu_{\\mu}&=\\mu_{e}.\\end{array}\\right. \\tag{22}\\]
The EoS for QHD can be obtained by finding solutions to Eqs. (7-9) subject to charge neutrality, conservation of a chosen total baryon number, and equivalence of chemical potentials. These conditions can be summarised as
\\[\\left.\\begin{array}{rcl}0&=&\\sum_{i}Q_{i}\\rho_{i}\\\\ \\rho&=&\\sum_{i}B_{i}\\rho_{i}\\\\ \\mu_{i}&=&B_{i}\\mu_{n}-Q_{i}\\mu_{e}\\end{array}\\right\\}\\quad i\\in\\{p,n,\\Lambda,\\Sigma^{+},\\Sigma^{0},\\Sigma^{-},\\Xi^{0},\\Xi^{-},\\ell\\}. \\tag{23}\\]
With these conditions, we are able to find the EoS for QHD. It should be noted that, as with many relativistic models for baryonic matter, once we include more than one species of baryon this model eventually produces baryons with negative effective masses at sufficiently high densities (\\(\\rho>1\\;{\\rm fm}^{-3}\\)). This is a direct result of the linear nature of the effective mass as shown in Eq. (11). As the Fermi energy (see Eq. (12)) approaches zero, the cost associated with producing baryon-anti-baryon pairs is reduced and at this point the model breaks down. From a more physical point of view, as the density rises one would expect that the internal structure of the baryons should play a role in the dynamics. Indeed, within the QMC model, the response of the internal structure of the baryons to the applied mean scalar field ensures that no baryon mass ever becomes negative. We now describe the essential changes associated with the QMC model.
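Numerically, Eq. (23) amounts to a two-dimensional root find in \\((\\mu_{n},\\mu_{e})\\). A hedged sketch follows, in which `number_density` is a hypothetical helper returning \\(\\rho_{i}\\) from \\(\\mu_{i}\\) via Eqs. (12) and (13) with the meson mean fields solved self-consistently; it is not part of the model definition above.

```python
from scipy.optimize import fsolve

# (name, baryon charge B_i, electric charge Q_i) for the octet and leptons
species = [("p", 1, +1), ("n", 1, 0), ("Lambda", 1, 0),
           ("Sigma+", 1, +1), ("Sigma0", 1, 0), ("Sigma-", 1, -1),
           ("Xi0", 1, 0), ("Xi-", 1, -1), ("e-", 0, -1), ("mu-", 0, -1)]

def constraints(x, rho_target, number_density):
    """Charge neutrality and fixed total baryon density at given (mu_n, mu_e).
    number_density(name, mu_i) must return rho_i via Eqs. (12)-(13),
    with the meson mean fields solved self-consistently (not shown here)."""
    mu_n, mu_e = x
    rho = {n: number_density(n, B * mu_n - Q * mu_e) for n, B, Q in species}
    return [sum(Q * rho[n] for n, B, Q in species),          # net charge = 0
            sum(B * rho[n] for n, B, Q in species) - rho_target]

# mu_n, mu_e = fsolve(constraints, [1000.0, 150.0], args=(0.32, number_density))
```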
## III QMC model
Like QHD, QMC is a relativistic quantum field theory formulated in terms of the exchange of scalar and vector mesons. However, in contrast with QHD these mesons couple not to structureless baryons but to clusters of confined quarks. As the density of the medium grows and the mean scalar and vector fields grow, the structure of the clusters adjusts self-consistently in response to the mean-field coupling. While such a model would be extremely complicated to solve in general, it has been shown by Guichon _et al._[27] that in finite nuclei one should expect the Born-Oppenheimer approximation to be good at the 3% level. Of course, in nuclear matter it is exact at mean-field level.
Within the Born-Oppenheimer approximation, the major effect of including the structure of the baryon is that the internal quark wave functions respond in a way that opposes the applied scalar field. To a very good approximation this physics is described through the \"scalar polarizability,\" \\(d\\), which in analogy with the electric polarizability describes the term in the baryon effective mass quadratic in the applied scalar field [28; 29; 30; 31]. Recent explicit calculations of the equivalent energy functional for the QMC model have demonstrated the very natural link between the existence of the scalar polarizability and the many-body forces, or equivalently the density dependence, associated with successful, phenomenological forces of the Skyrme type [14; 15]. In nuclear matter the scalar polarizability is the _only_ effect of the internal structure in mean-field approximation. On the other hand, in finite nuclei the variation of the vector field across the hadronic volume also leads to a spin-orbit term in the nucleon energy [27].
Once one chooses a quark model for the baryons, and specifies the quark-level meson couplings, there are no new parameters associated with introducing any species of baryon into the nuclear matter. Given the well known lack of experimental constraints on the forces between nucleons and hyperons, let alone hyperons and hyperons, which will be of great practical importance as the nuclear density rises above (2-3)\\(\\rho_{0}\\), this is a particularly attractive feature of the QMC approach and it is crucial for our current investigation. Indeed, we point to the very exciting recent results of the QMC model, modified to include the effect of the scalar field on the hyperfine interaction, which led to \\(\\Lambda\\) hypernuclei being bound in quite good agreement with experiment and \\(\\Sigma\\) hypernuclei being unbound because of the modification of the hyperfine interaction [17] - thus yielding a very natural explanation of this observed fact. We note the success that this description has found for finite nuclei as noted in [15].
While we focus on the MIT bag model [32] as our approximation to baryon structure, we note that there has been a parallel development [33] based upon the covariant, chiral symmetric NJL model [34], with quark confinement modelled using the proper time regularization proposed by the Tubingen group [35; 36]. The latter model has many advantages for the computation of the medium modification of form factors and structure functions, with the results for spin structure functions [37; 38] offering a unique opportunity to test the fundamental idea of the QMC model experimentally. However, in both models it is the effect of quark confinement that leads to a positive polarizability and a natural saturation mechanism.
Although the underlying physics of QHD and QMC is rather different, at the hadronic level the equations to be solved are very similar. We therefore focus on the changes which are required.
**1.** Because of the scalar polarizability of the hadrons, which accounts for the self-consistent response of the internal quark structure of the baryon to the applied scalar field [15], the effective masses appearing in QMC are non-linear in the mean \\(\\sigma\\) field. We write them in the general form
\\[M_{B}^{*}=M_{B}-w_{B}^{\\sigma}\\;g_{\\sigma N}\\langle\\sigma\\rangle+\\frac{d}{2} \\bar{w}_{B}^{\\sigma}\\;(g_{\\sigma N}\\langle\\sigma\\rangle)^{2}\\,, \\tag{24}\\]
where the weightings, \\(w^{\\sigma}_{B},\\ \\tilde{w}^{\\sigma}_{B}\\), and the scalar polarizability of the nucleon, \\(d\\), must be calculated from the underlying quark model. Note now that only the coupling to the nucleons, \\(g_{\\sigma N}\\), is required to determine all the effective masses.
\\begin{table}
\\begin{tabular}{c c c} \\(g_{\\sigma N}\\) & \\(g_{\\omega N}\\) & \\(g_{\\rho}\\) \\\\
10.644 & 13.179 & 6.976 \\\\ \\end{tabular}
\\end{table}
Table 2: Couplings for QHD with the octet of baryons, fit to saturation of nuclear matter.
The most recent calculation of these effective masses, including the in-medium dependence of the spin dependent hyperfine interaction [17], yields the explicit expressions:
\\[\\begin{split} M_{N}(\\langle\\sigma\\rangle)&=\\ M_{N}-g_{ \\sigma N}\\langle\\sigma\\rangle\\\\ &\\ +\\left[0.0022+0.1055R_{N}^{\\rm free}-0.0178\\left(R_{N}^{\\rm free }\\right)^{2}\\right]\\left(g_{\\sigma N}\\langle\\sigma\\rangle\\right)^{2},\\\\ M_{\\Lambda}(\\langle\\sigma\\rangle)&=\\ M_{\\Lambda}- \\left[0.6672+0.0462R_{N}^{\\rm free}-0.0021\\left(R_{N}^{\\rm free}\\right)^{2} \\right]g_{\\sigma N}\\langle\\sigma\\rangle\\\\ &\\ +\\left[0.0016+0.0686R_{N}^{\\rm free}-0.0084\\left(R_{N}^{\\rm free }\\right)^{2}\\right]\\left(g_{\\sigma N}\\langle\\sigma\\rangle\\right)^{2},\\\\ M_{\\Sigma}(\\langle\\sigma\\rangle)&=\\ M_{\\Sigma}- \\left[0.6706-0.0638R_{N}^{\\rm free}-0.008\\left(R_{N}^{\\rm free}\\right)^{2} \\right]g_{\\sigma N}\\langle\\sigma\\rangle\\\\ &\\ +\\left[-0.0007+0.0786R_{N}^{\\rm free}-0.0181\\left(R_{N}^{\\rm free }\\right)^{2}\\right]\\left(g_{\\sigma N}\\langle\\sigma\\rangle\\right)^{2},\\\\ M_{\\Xi}(\\langle\\sigma\\rangle)&=\\ M_{\\Xi}-\\left[0.3395+0.02822R_{N}^{\\rm free}-0.0128\\left(R_{N}^{\\rm free}\\right)^{2}\\right]g_{ \\sigma N}\\langle\\sigma\\rangle\\\\ &\\ +\\left[-0.0014+0.0416R_{N}^{\\rm free}-0.0061\\left(R_{N}^{\\rm free }\\right)^{2}\\right]\\left(g_{\\sigma N}\\langle\\sigma\\rangle\\right)^{2}\\,.\\end{split} \\tag{25}\\]
We take \\(R_{N}^{\\rm free}=0.8\\) fm as the preferred value of the free nucleon radius, although in practice the numerical results depend only very weakly on this parameter [15].
Given the parameters in Eq. (25), all the effective masses for the baryon octet are entirely determined. They are plotted as functions of \\(\\langle\\sigma\\rangle\\) in Fig. 1 and we see clearly that they never become negative. (Note that the range of \\(\\langle\\sigma\\rangle\\) covered here corresponds to densities up to (6-8)\\(\\rho_{0}\\)).
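For reference, Eq. (25) translates directly into code. In this sketch the isospin-averaged vacuum masses of Table 1 are used for compactness, and the argument is \\(g_{\\sigma N}\\langle\\sigma\\rangle\\) in MeV.

```python
def qmc_effective_masses(gs_sigma, R=0.8):
    """Octet effective masses (MeV) from Eq. (25), with R = R_N^free in fm."""
    s = gs_sigma
    MN = 938.92 - s + (0.0022 + 0.1055 * R - 0.0178 * R ** 2) * s ** 2
    ML = (1115.68 - (0.6672 + 0.0462 * R - 0.0021 * R ** 2) * s
          + (0.0016 + 0.0686 * R - 0.0084 * R ** 2) * s ** 2)
    MS = (1193.15 - (0.6706 - 0.0638 * R - 0.0080 * R ** 2) * s
          + (-0.0007 + 0.0786 * R - 0.0181 * R ** 2) * s ** 2)
    MX = (1318.07 - (0.3395 + 0.02822 * R - 0.0128 * R ** 2) * s
          + (-0.0014 + 0.0416 * R - 0.0061 * R ** 2) * s ** 2)
    return {"N": MN, "Lambda": ML, "Sigma": MS, "Xi": MX}
```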
Figure 1: (Color online) Baryon effective masses within the QMC model, parameterized as a function of the mean scalar field, \\(\\langle\\sigma\\rangle\\). The values at \\(\\langle\\sigma\\rangle=0\\) are the vacuum masses as found in Table 1. We show the effective masses only up to \\(\\langle\\sigma\\rangle=100\\) MeV which corresponds to about 2 fm\\({}^{-3}\\) (6–8 \\(\\rho_{0}\\)), beyond which higher order terms not shown in Eq. (24) become significant.
**2.** Since the mean scalar field, \\(\\langle\\sigma\\rangle\\), is derived self-consistently by taking the derivative of the energy density with respect to \\(\\langle\\sigma\\rangle\\), the scalar field equation
\\[\\langle\\sigma\\rangle=\\sum_{B}\\frac{g_{\\sigma N}}{m_{\\sigma}^{2}}C(\\langle\\sigma \\rangle)\\frac{(2J_{B}+1)}{(2\\pi)^{3}}\\int\\frac{M_{B}^{*}\\;\\theta(k_{F_{B}}-| \\vec{k}|)}{\\sqrt{k^{2}+(M_{B}^{*})^{2}}}\\;d^{3}k, \\tag{26}\\]
has an extra factor, denoted by
\\[C(\\langle\\sigma\\rangle)=\\left[w_{B}^{\\sigma}-\\tilde{w}_{B}^{\\sigma}dg_{\\sigma N }\\langle\\sigma\\rangle\\right]. \\tag{27}\\]
Note that the \\(d\\) term (the scalar polarizability) in \\(C(\\langle\\sigma\\rangle)\\) does not have the factor of \\(\\frac{1}{2}\\) that is found in Eq. (24), because of the differentiation.
Given this new term in the equation for the mean scalar field, we can see that this allows feedback of the scalar field which is modelling the internal degrees of freedom of the baryons. This feedback prevents certain values of \\(\\langle\\sigma\\rangle\\) from being accessed.
**3.** The couplings to the proton are re-determined by the fit to saturation properties (minimum binding energy per baryon and saturation density) with the new effective masses for the proton and neutron. The couplings for QMC which provide a fit to saturated nuclear matter are shown in Table 3.
Given these changes alone, QHD is transformed into QMC. When we compare the results of Section VII with those of Ref. [16] minor differences arise because the QMC calculations in Ref. [16] are performed at Hartree-Fock level, whereas here they have been performed at Hartree level (mean-field) only.
## IV Deconfined quark matter
We consider two models for a deconfined quark matter phase, both of which model free quarks in \\(\\beta\\)-equilibrium. The first model, the MIT bag model [32], is commonly used to describe the quark matter phase because of its simplicity.
In this model we consider three quarks with fixed masses to possess chemical potentials related to the independent chemical potentials of Eq. (21) via
\\[\\mu_{u}=\\frac{1}{3}\\mu_{n}-\\frac{2}{3}\\mu_{e},\\qquad\\mu_{d}=\\frac{1}{3}\\mu_{n} +\\frac{1}{3}\\mu_{e},\\qquad\\mu_{s}=\\mu_{d}, \\tag{28}\\]
where quarks have a baryon charge of \\(\\frac{1}{3}\\) since baryons contain 3 quarks. Because the quarks are taken to be free, the chemical potential has no vector interaction terms, and thus
\\[\\mu_{q}=\\sqrt{k_{F_{q}}^{2}+m_{q}^{2}}\\;;\\quad q\\in\\{u,d,s\\}. \\tag{29}\\]
The EoS can therefore be solved under the conditions of Eq. (23).
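A sketch of the resulting quark-phase bulk properties: Eq. (29) fixes each Fermi momentum, and the free-gas analogues of Eqs. (16) and (17), with quark degeneracy \\(2N_{c}=6\\) per flavour, give \\({\\cal E}\\) and \\(P\\); any overall bag-constant shift is deliberately left out of this snippet.

```python
import numpy as np
from scipy.integrate import quad

def quark_eos(mu, m, g=6):
    """Energy density and pressure (MeV^4) of one free quark flavour."""
    kF = np.sqrt(max(mu ** 2 - m ** 2, 0.0))           # Eq. (29)
    E = lambda k: np.sqrt(k ** 2 + m ** 2)
    eps = g / (2.0 * np.pi ** 2) * quad(lambda k: k ** 2 * E(k), 0.0, kF)[0]
    P = g / (6.0 * np.pi ** 2) * quad(lambda k: k ** 4 / E(k), 0.0, kF)[0]
    return eps, P
```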
As an alternative model for deconfined quark matter, we consider a simplified Nambu-Jona-Lasinio (NJL) model [34], in which the quarks have dynamically generated masses, ranging from constituent quark masses at low densities to current quark masses at high densities. The equation for a quark condensate at a given density (and hence, \\(k_{F}\\)) in NJL is similar to that for the scalar field in QHD/QMC, and is written as
\\[\\langle\\bar{\\psi}\\psi\\rangle=-4\\;{\\cal N}_{c}\\int\\frac{1}{(2\\pi)^{3}}\\frac{M_{q}^{*}\\;\\theta(|\\vec{k}|-k_{F})\\;\\theta(\\Lambda-|\\vec{k}|)}{\\sqrt{k^{2}+(M_{q}^{*})^{2}}}\\;d^{3}k, \\tag{30}\\]
where \\(M_{q}^{*}\\) denotes the \\(k_{F}\\) dependent (hence, density dependent) quark mass; \\({\\cal N}_{c}\\) is the number of color degrees of freedom of quarks; and \\(\\Lambda\\) is the momentum cutoff. This is self-consistently calculated via
\\[M_{q}^{*}=M_{\\rm current}-G\\langle\\bar{\\psi}\\psi\\rangle, \\tag{31}\\]
where \\(G\\) is the coupling and \\(M_{\\rm current}\\) the current quark mass.
\\begin{table}
\\begin{tabular}{c c c} \\(g_{\\sigma N}\\) & \\(g_{\\omega N}\\) & \\(g_{\\rho}\\) \\\\
8.278 & 8.417 & 8.333 \\\\ \\end{tabular}
\\end{table}
Table 3: Couplings for QMC with the octet of baryons, fit to saturation of nuclear matter.
To solve for the quark mass at each density, we must first find the coupling, \\(G\\), which yields the required constituent quark mass in free space (\\(k_{F}=0\\)). The coupling is assumed to remain constant as the density rises. In free space, we can solve the above equations to find the coupling
\\[G=\\left.\\frac{(M_{q}^{*}-M_{\\rm current})}{4\\ {\\cal N}_{c}}\\left[\\int\\frac{1}{(2 \\pi)^{3}}\\frac{M_{q}^{*}\\ \\theta(|\\vec{k}|-k_{F})\\ \\theta(\\Lambda-k_{F})}{\\sqrt{k^{2}+(M_{q}^{*})^{2}}}\\ d^{3}k\\right]^{-1} \\right|_{k_{F}=0}. \\tag{32}\\]
We solve Eqs. (30-32) for \\({\\cal N}_{c}=3\\) to obtain constituent quark masses of \\(M_{u,d}=350\\) MeV using current quark masses of \\(M_{\\rm current}=10\\) MeV for the light quarks, and to obtain a constituent quark mass of \\(M_{s}=450\\) MeV using a current quark mass of \\(M_{\\rm current}=160\\) MeV for the strange quark, with a momentum cutoff of \\(\\Lambda=1\\) GeV. At \\(k_{F}=0\\) we find the couplings to be
\\[G_{u,d}=0.148\\ {\\rm fm}^{2},\\quad G_{s}=0.105\\ {\\rm fm}^{2}. \\tag{33}\\]
We can now use these parameters to evaluate the dynamic quark mass \(M_{q}^{*}\) for varying values of \(k_{F}\), by solving Eq. (30) and Eq. (31) self-consistently. The resulting density dependence of \(M_{q}^{*}\) is illustrated in Fig. 2, which shows that the quark masses eventually saturate, remaining approximately constant above a certain density. We can then construct the EoS in the same way as we did for the MIT bag model, but with density-dependent rather than fixed masses.
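A minimal sketch of this self-consistent solution is given below. With the quoted parameters (\(\Lambda=1\) GeV, \(G_{u,d}=0.148\) fm\({}^{2}\), \(M_{\rm current}=10\) MeV) it reproduces \(M^{*}_{u,d}\approx 350\) MeV at \(k_{F}=0\); the starting guess and iteration scheme are our own choices.

```python
import numpy as np
from scipy.integrate import quad

HBARC = 197.327  # MeV fm, converts G from fm^2 to MeV^-2

def condensate(M, kF, Lam=1000.0, Nc=3):
    # Eq. (30): Dirac-sea integral from kF up to the cutoff (MeV^3)
    f = lambda k: k**2 * M / np.sqrt(k**2 + M**2)
    val, _ = quad(f, kF, Lam)
    return -4.0 * Nc * val / (2.0 * np.pi**2)

def njl_mass(kF, M_current, G_fm2, Lam=1000.0):
    G = G_fm2 / HBARC**2
    M = M_current + 340.0            # starting guess near a constituent mass
    for _ in range(200):             # Eq. (31) by fixed-point iteration
        M_new = M_current - G * condensate(M, kF, Lam)
        if abs(M_new - M) < 1e-6:
            break
        M = M_new
    return M

print(njl_mass(0.0, 10.0, 0.148))    # ~350 MeV for the light quarks
```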
Figure 2: (Color online) Density dependent (dynamic) masses for quarks using NJL. The mass at \\(k_{F}=0\\) is the constituent quark mass, and the mass at the cutoff of \\(k_{F}=\\Lambda\\) is roughly the current quark mass. This model successfully reproduces the behaviour found within the Schwinger-Dyson formalism, and we consider the model to be more sophisticated than the constant quark mass MIT bag model.
## V Phase transitions
### Equilibrium Conditions
We now have a description of hadronic matter with quark degrees of freedom, but we are still faced with the issue that the baryons are very densely packed. We wish to know if it is more energetically favourable for deconfined quark matter to be the dominant phase at a certain density. To do this, we need to find a point (if it exists) at which stability is achieved between the hadronic phase and the quark phase.
The condition for stability is that chemical, thermal, and mechanical equilibrium between the hadronic (\\(H\\)) and quark (\\(Q\\)) phases is achieved, and thus that the independent quantities in each phase are separately equal. Thus the two independent chemical potentials, \\((\\mu_{n},\\mu_{e})\\), are each separately equal to their counterparts in the other phase, _i.e._\\((\\mu_{n})_{H}=(\\mu_{n})_{Q}\\), and \\((\\mu_{e})_{H}=(\\mu_{e})_{Q}\\) (chemical equilibrium); the temperatures are equal (\\(T_{H}=T_{Q}\\)) (thermal equilibrium); and the pressures are equal (\\(P_{H}=P_{Q}\\)) (mechanical equilibrium). For a discussion of this condition, see Ref. [39]. We consider both phases to be cold on the nuclear scale, and assume \\(T=0\\), so the temperatures are by construction equal. We must therefore find the point at which, for a given pair of independent chemical potentials, the pressures in both the hadronic phase and the quark phase are the same.
To find the partial pressure of any baryon, quark, or lepton species, \\(i\\), we use
\\[P_{i}=\\frac{(2J_{B}+1)\\,{\\cal N}_{c}}{3(2\\pi)^{3}}\\int\\frac{k^{2}\\;\\theta(k_{F _{i}}-|\\vec{k}|)}{\\sqrt{k^{2}+(M_{i}^{*})^{2}}}\\;d^{3}k, \\tag{34}\\]
where \\({\\cal N}_{c}=3\\) for quarks, and \\({\\cal N}_{c}=1\\) for baryons and leptons. To find the total pressure in each phase we use
\\[P_{H}=\\sum_{B}P_{B}+\\sum_{\\ell}P_{\\ell}+\\sum_{\\alpha=\\omega,\\rho}\\frac{1}{2}m_{ \\alpha}^{2}\\langle\\alpha\\rangle^{2}-\\frac{1}{2}m_{\\sigma}^{2}\\langle\\sigma \\rangle^{2}, \\tag{35}\\]
which is equivalent to Eq. (17), and
\\[P_{Q}=\\sum_{q}P_{q}+\\sum_{\\ell}P_{\\ell}-B, \\tag{36}\\]
where \\(B\\) in the quark pressure is the bag energy density. For the QMC model described in Section III, and a Fermi gas of quarks, both with interactions with leptons for charge neutrality, a point exists at which the condition of stability, as described above, is satisfied.
At this point, it is equally favourable that hadronic matter and quark matter are the dominant phase. Beyond this point, the quark pressure is greater than the hadronic pressure, and so the quark phase has a lower thermodynamic potential (through the relation \\(P=-\\Omega\\)) and the quark phase will be more energetically favourable. To determine the EoS beyond this point, we need to consider a mixed phase.
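Before turning to the mixed phase, note that locating this equal-pressure point is a one-dimensional root search in \(\mu_{n}\). A schematic bisection, with the two phase pressures supplied as black-box functions that impose \(\beta\)-equilibrium and charge neutrality internally:

```python
def find_transition(P_H, P_Q, mu_lo, mu_hi, tol=1e-6):
    """Bisect for the mu_n at which P_H(mu_n) = P_Q(mu_n)."""
    f = lambda mu: P_Q(mu) - P_H(mu)
    assert f(mu_lo) < 0.0 < f(mu_hi), "bracket must straddle the crossing"
    while mu_hi - mu_lo > tol:
        mid = 0.5 * (mu_lo + mu_hi)
        if f(mid) < 0.0:
            mu_lo = mid
        else:
            mu_hi = mid
    return 0.5 * (mu_lo + mu_hi)
```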
### Mixed Phase
Rather than modelling a simple direct phase transition between hadronic and quark matter (a Maxwell construction, which would produce a discontinuity in the density while retaining a constant pressure between the two phases), we model a mixed phase of the two using the method of Glendenning. A detailed description of this appears in Ref. [18].
We solve the hadronic EoS, using the independent chemical potentials as inputs for the quark matter EoS, while the order parameter \(\rho\) (the conserved baryon density) increases, until we find a point (if it exists) at which the pressure in the quark phase equals that in the hadronic phase. Once we have the density and pressure at which the phase transition occurs, we change the order parameter from the conserved baryon density to the quark fraction, \(\chi\). If we consider the mixed phase to be a fraction of the hadronic matter and a fraction of the quark matter, then the mixed phase (MP) of matter will have the following properties; the total density will be
\\[\\rho_{\\rm MP}=(1-\\chi)\\;\\rho_{\\rm HP}+\\chi\\;\\rho_{\\rm QP}, \\tag{37}\\]
where \\(\\rho_{\\rm HP}\\) and \\(\\rho_{\\rm QP}\\) are the densities in the hadronic and quark phases, respectively. The equivalent baryon density in the quark phase,
\\[\\rho_{\\rm QP}=\\sum_{q}\\rho_{q}=3(\\rho_{u}+\\rho_{d}+\\rho_{s}), \\tag{38}\\]arises because of the restriction that a bag must contain 3 quarks.
According to the condition of mechanical equilibrium, the pressure in the mixed phase will be
\\[P_{\\rm MP}=P_{\\rm HP}=P_{\\rm QP}. \\tag{39}\\]
We can step through values \\(0<\\chi<1\\) and find the density at which equilibrium is achieved, keeping the mechanical stability conditions as they were above. In the mixed phase we need to alter our definition of charge neutrality; it becomes possible now that one phase is (locally) charged, while the other phase carries the opposite charge, making the system globally charge neutral. This is achieved by enforcing
\\[0=(1-\\chi)\\;\\rho_{\\rm HP}^{c}+\\chi\\;\\rho_{\\rm QP}^{c}+\\rho_{\\ell}^{c}\\,, \\tag{40}\\]
where this time we are considering charge densities, which are simply charge proportions of density and \\(\\rho_{\\ell}^{c}\\) is the lepton charge density. For example, the charge density in the quark phase is given by
\\[\\rho_{\\rm QP}^{c}=\\sum_{q}Q_{q}\\rho_{q}=\\frac{2}{3}\\rho_{u}-\\frac{1}{3}\\rho_{d }-\\frac{1}{3}\\rho_{s}. \\tag{41}\\]
We continue to calculate the densities until we reach \\(\\chi=1\\), at which point the mixed phase is now entirely charge neutral quark matter. After this point, we continue with the EoS for pure charge neutral quark matter, using \\(\\rho\\) as the order parameter.
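The bookkeeping in Eqs. (37), (40) and (41) at a given quark fraction \(\chi\) is straightforward; a sketch:

```python
def quark_charge_density(rho_u, rho_d, rho_s):
    # Eq. (41): net charge density of the quark phase
    return (2.0 * rho_u - rho_d - rho_s) / 3.0

def mixed_phase(chi, rho_HP, rho_QP, q_HP, q_QP, q_lep):
    """Eq. (37) for the total baryon density and Eq. (40) for global
    charge neutrality; the q_* arguments are charge densities."""
    rho_MP = (1.0 - chi) * rho_HP + chi * rho_QP
    net_charge = (1.0 - chi) * q_HP + chi * q_QP + q_lep
    return rho_MP, net_charge   # net_charge must vanish in equilibrium
```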
Figure 3: (Color online) Illustrative locus of values for \(\mu_{e},\mu_{n},P\) for phases of hadronic matter and deconfined quark matter. Note that pressure increases with density, and that a projection onto the \(\mu_{n}\mu_{e}\) plane is a single line, as ensured by the chemical equilibrium condition.
## VI Stellar solutions
To test the predictions of these models, we find solutions of the Tolman-Oppenheimer-Volkoff (TOV) [40] equation
\\[\\frac{dP}{dR}=-\\frac{G\\left(P+\\mathcal{E}\\right)\\left(M(R)+4\\pi R^{3}P\\right)}{R (R-2GM(R))}, \\tag{42}\\]
where the mass, \\(M(R)\\), contained within a radius \\(R\\) is found by integrating the energy density
\\[M(R)=\\int_{0}^{R}4\\pi r^{2}\\mathcal{E}\\;dr, \\tag{43}\\]
and \\(\\mathcal{E}\\) and \\(P\\) are the energy density and pressure in the EoS, respectively.
Given an EoS and a choice for the central density of the star, this provides static, spherically symmetric, non-rotating, gravitationally stable stellar solutions for the total mass and radius of a star. For studies of the effect of rapid rotation in General Relativity we refer to Refs. [41; 42]. This becomes important for comparison to experimental data: since only data for stellar masses exist (with the single, disputed exception of Ref. [12]), we can use the model to predict the radii of the observed stars.
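For reference, a compact fourth-order Runge-Kutta integrator for Eqs. (42) and (43), in units with \(G=c=1\) and with the EoS supplied as a function \(\mathcal{E}(P)\); the step size, the clamp at small pressures, and the stopping criterion are our own choices:

```python
import numpy as np

def tov_rhs(r, y, eps_of_P):
    P, M = y
    eps = eps_of_P(max(P, 0.0))     # the EoS is only defined for P >= 0
    dP = -(P + eps) * (M + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * M))
    dM = 4.0 * np.pi * r**2 * eps
    return np.array([dP, dM])

def integrate_star(P_c, eps_of_P, dr=1e-4, r0=1e-6):
    """RK4 integration outward from the centre; returns (R, M(R)),
    with R the radius at which the pressure drops to zero."""
    r, y = r0, np.array([P_c, 0.0])
    while y[0] > 0.0:
        k1 = tov_rhs(r, y, eps_of_P)
        k2 = tov_rhs(r + dr / 2, y + dr / 2 * k1, eps_of_P)
        k3 = tov_rhs(r + dr / 2, y + dr / 2 * k2, eps_of_P)
        k4 = tov_rhs(r + dr, y + dr * k3, eps_of_P)
        y = y + dr / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        r += dr
    return r, y[1]
```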
## VII Results
To obtain numerical results, we solve the meson field equations, Eqs. (7-9), with the conditions of charge neutrality, fixed baryon density, and the equivalence of chemical potentials given by Eq. (23), for various models. Having found the EoS by evaluating the energy density, Eq. (16), and pressure, Eq. (17), we can solve the TOV equation to obtain stellar solutions. The radius of the star is defined as the radius at which the pressure is zero and is calculated using a fourth-order Runge-Kutta integration method.
The EoS for octet QMC hadronic matter is shown in Fig. 4 alongside the same model when including a phase transition to 3-flavor quark matter modelled with the MIT bag model, and the results do not appear to differ much at this scale. The theoretical causality limit of \\(P=\\mathcal{E}\\) is also shown (corresponding to the limit \\(v_{\\rm sound}=c\\)) and we can see that these models do not approach this limit at the scale displayed. This is because of the softening of the EoS that occurs with the introduction of hyperons, enlarging the Fermi sea to be filled and reducing the overall pressure.
The species fraction for each particle, \\(Y_{i}\\), is simply the density fraction of that particle, and is calculated via
\\[Y_{i}=\\frac{\\rho_{i}}{\\rho}\\;;\\quad i\\in\\left\\{p,n,\\Lambda,\\Sigma^{+},\\Sigma^ {0},\\Sigma^{-},\\Xi^{0},\\Xi^{-},\\ell,q\\right\\}, \\tag{44}\\]
where \\(\\rho\\) is the total baryon density. The species fractions for octet QMC when a phase transition is neglected are shown in Fig. 5, where we note that the \\(\\Lambda\\) species fraction is enhanced and the \\(\\Sigma\\) species fractions are suppressed with increasing density. From the investigations by Rikovska-Stone _et al._[16] we expect that the \\(\\Sigma\\) would disappear entirely if we were to include Fock terms.
The values of the compression modulus and the effective nucleon mass at saturation are frequently used for comparison with experimental evidence. Models which neglect quark-level interactions, such as QHD, typically predict much higher values for the compression modulus than experiments suggest. In the symmetric (nuclear) matter QHD model described in this paper, we find values of \((M^{*}/M)_{\rm sat}=0.56\) and \(K=525\) MeV, which are in agreement with [19] but, as stated in that reference, not with experiment. For QMC we find a significant improvement in the compression modulus; \(K=280\) MeV, which lies at the upper end of the experimental range. The nucleon effective mass at saturation for QMC is found to be \((M^{*})_{\rm sat}=735\) MeV, producing \((M^{*}/M)_{\rm sat}=0.78\).
When we calculate the EoS including a mixed phase and subsequent pure quark phase, we find that small changes in the parameters can sometimes lead to very significant changes. In particular, the bag energy density, \\(B\\), and the quark masses in the MIT bag model have the ability to both move the phase transition points, and to vary the constituents of the mixed phase. We have investigated the range of parameters which yield a transition to a mixed phase and these are summarised in Table 4. For illustrative purposes we show an example of species fractions for a reasonable set of parameters (\\(B^{1/4}=180\\) MeV and \\(m_{\\rm u,d,s}=3,7,95\\) MeV) in Fig. 6. Note that in this case the \\(\\Lambda\\) hyperon enters the mixed phase briefly (and at a low species fraction).
Note that the transition density of \\(\\rho_{\\rm MP}\\sim 0.12\\) fm\\({}^{-3}\\) produced by the combination of the octet QMC and MIT bag models (as shown in Fig. 6) is clearly not physical as it implies the presence of deconfined quarks at densities less than \\(\\rho_{0}\\).
With small changes to parameters, such as those used to produce Fig. 7, in which the bag energy density is given a slightly higher value than that used in Fig. 6 (\(B^{1/4}\) increased from 180 MeV to 195 MeV, with the quark masses remaining the same), it becomes possible for the \(\Xi\) hyperons to also enter the mixed phase, albeit in that case with small species fractions, \(Y_{\Sigma},Y_{\Xi}\leq 0.02\).
The TOV solutions for octet QMC with and without a phase transition to a mixed phase are shown in Fig. 8. The stellar masses produced using these methods are similar to observed neutron star masses. Once we have solved the TOV equations, we can examine individual solutions and determine the species content for specific stars. If we examine the solutions with a stellar mass of \\(M=1.2~{}M_{\\odot}\\), where \\(M_{\\odot}\\) is a solar mass, for the set of parameters used to produce Figs. 5 and 6, we can find the species fraction as a function of stellar radius to obtain a cross-section of the star. This is shown in Fig. 9 for the case of no phase transition, and Fig. 10 for the case where we allow a transition to a mixed phase, and subsequently to a quark matter phase.
If we now examine the stellar solution with mass \(M=1.2~{}M_{\odot}\) for the set of parameters (\(B^{1/4}=195\) MeV and \(m_{\rm u,d,s}=3,7,95\) MeV, used to produce Fig. 7), as shown in Fig. 11, we note that the quark content of this 10.5 km star reaches out to around 8 km, and that the core of the star contains roughly equal proportions of protons, neutrons and \(\Lambda\) hyperons, with \(Y_{i}\simeq 10\%\).
Within a mixed phase, we require that for a given pair of \\(\\mu_{n}\\) and \\(\\mu_{e}\\) at any value of the mixing parameter \\(\\chi\\), the quark density is greater than the hadronic density. This condition ensures that the total baryon density increases monotonically within the range \\(\\rho_{\\rm QP}>\\rho_{\\rm MP}>\\rho_{\\rm HP}\\), as can be seen in Eq. (37). An example of this is illustrated in Fig. 12 for a mixed phase of octet QMC and 3-flavor quark matter modelled with the MIT bag model.
The use of quark masses corresponding to the NJL model results in a quark density that is lower than the hadronic density, and as a result there are no solutions for a mixed phase in which the proportion of quarks increases with fraction \\(\\chi\\), while at the same time the total baryon density increases. It may be possible that with smaller constituent quark masses at low density, the Fermi momenta would provide sufficiently high quark densities, but we feel that it would be unphysical to use any smaller constituent quark masses. This result implies that, at least for the model we have investigated, dynamical chiral symmetry breaking (in the production of constituent quark masses at low density) prevents a phase transition from a hadronic phase to a mixed phase involving quarks.
Figure 4: (Color online) Equation of State for; nucleonic ‘N’ matter modelled with octet QMC but where hyperons are explicitly forbidden; nucleonic matter where a phase transition to NJL modelled quark matter is permitted; baryonic ‘N+Y’ matter modelled with octet QMC including hyperons; and baryonic matter where a phase transition to MIT bag modelled quark matter is permitted. The line \\(P={\\cal E}\\) represents the causal limit, \\(v_{\\rm sound}=c\\). The bends in these curves indicate a change in the composition of the EoS, such as the creation of hyperons or a transition to a mixed or quark phase. Note that at low energies (densities) the curves are identical, where only nucleonic matter in \\(\\beta\\)-equilibrium is present.
We do note, however, that if we restrict consideration to nucleons only within the QMC model (with the same parameters as octet QMC), and represent quark matter with the NJL model, we do in fact find a possible mixed phase. More surprisingly, the phase transition density for this combination is significantly larger than in the case where hyperons are present. An example of this is shown in Fig. 13 with parameters found in Table 4. This produces a mixed phase at about \(3\rho_{0}\) (\(\rho=0.47\) fm\({}^{-3}\)) and a pure quark matter phase above about \(10.5\rho_{0}\) (\(\rho=1.67\) fm\({}^{-3}\)). We note the coincidence of this phase transition density with the density corresponding to one nucleon per nucleon volume, with the aforementioned assumption of \(R_{N}^{\rm free}=0.8\) fm, though we do not draw any conclusions from this. Performing this calculation with quark matter modelled with the MIT bag model produces results similar to those of Fig. 6, except of course lacking the \(\Lambda\) hyperon contribution. Although this example does show a phase transition, the omission of hyperons is certainly unrealistic. It does, however, illustrate the importance and significance of including hyperons: their inclusion alters the chemical potentials which satisfy the equilibrium conditions in such a way that the mixed phase with NJL quark matter is no longer produced.
For each of the cases where we find a phase transition from baryonic matter to quark matter, the solution consists of negatively charged quark matter, positively charged hadronic matter, and a small proportion of leptons, to produce globally charge neutral matter. The proportions of hadronic, leptonic and quark matter throughout the mixed phase (for example, during a transition from octet QMC matter to 3-flavor quark matter modelled with the MIT bag model) are displayed in Fig. 14. A summary of the results of interest is given in Table 4.
Results for larger quark masses are not shown, as they require a much lower bag energy density to satisfy the equilibrium conditions. For constituent quark masses, we find that no phase transition is possible for any value of the bag energy density, as the quark pressure does not rise sufficiently fast to overcome the hadronic pressure. This is merely because the mass of the quarks does not allow a sufficiently large Fermi momentum at a given chemical potential, according to Eq. (29).
## VIII Conclusions
We have produced several EoS that simulate a phase transition from octet QMC modelled hadronic matter, via a continuous Glendenning-style mixed phase, to a pure, deconfined quark matter phase.
Figure 5: Species fractions, \\(Y_{i}\\), for octet QMC where a transition to a mixed phase is explicitly forbidden. Note that in this case, all of the octet baryons contribute at some density, and that with increasing density the species fractions of \\(\\Sigma\\) hyperons are suppressed while the \\(\\Lambda\\) species fraction is enhanced. Parameters used here are shown in Table 3.
This should correspond to a reasonable description of the relevant degrees of freedom in each density region. The models used here for quark matter provide a framework for exploring the way that this form of matter may behave, in particular under extreme conditions. The success of the QMC model in reproducing a broad range of experimental data gives us considerable confidence in this aspect of these calculations, and provides a reasonable hadronic sector and calculation framework, which then awaits improvement in the quark sector to produce realistic stellar solutions.
We have presented EoS and stellar solutions for octet QMC matter at Hartree level. We have explored several possible phase transitions from this hadronic sector to a mixed phase involving 3-flavor quark matter. The corresponding EoS demonstrate the complexity and intricacy of the solutions, as well as their dependence on small changes in parameters.
\\begin{table}
\begin{tabular}{l l l l l l l} Particles: & \(B^{1/4}\) (MeV) & \(\{m_{u},m_{d},m_{s}\}\) (MeV) & \(\rho_{\rm Y}\) (fm\({}^{-3}\)) & \(\rho_{\rm MP}\) (fm\({}^{-3}\)) & \(\rho_{\rm QP}\) (fm\({}^{-3}\)) & Figure: \\ \hline N, Y, \(\ell\) & — & — & 0.27 & — & — & Fig. 5 \\ N, Y, \(\ell\), q & 180 & \{3, 7, 95\} & 0.55 & 0.12 & 0.95 & Fig. 6 \\ N, Y, \(\ell\), q & 195 & \{3, 7, 95\} & 0.35 & 0.24 & 1.46 & Fig. 7 \\ N, Y, \(\ell\), q & 170 & \{30, 70, 150\} & 0.56 & 0.10 & 0.87 & — \\ N, Y, \(\ell\), q & 175 & \{100, 100, 150\} & 0.44 & 0.16 & 1.41 & — \\ N, \(\ell\), q & 180 & Dynamic (NJL) & — & 0.47 & 1.67 & Fig. 13 \\ \end{tabular}
\\end{table}
Table 4: Table of species content (\\(N\\) = nucleons, \\(Y\\) = hyperons, \\(\\ell\\) = leptons, \\(q\\) = quarks); inputs (\\(B^{1/4}\\), \\(m_{q}\\)); and results for octet QMC and quark models presented in this paper. \\(\\rho_{\\rm Y}\\), \\(\\rho_{\\rm MP}\\) and \\(\\rho_{\\rm QP}\\) represent the density at which hyperons first appear (\\(\\Lambda\\) is invariably the first to enter in these calculations); the density at which the mixed phase begins; and the density at which the quark phase begins, respectively. Figures for selected parameter sets are referenced in the final column. Dynamic NJL quark masses are determined by Eqs. (30–32).
Figure 6: Species fractions, \\(Y_{i}\\), for octet QMC (the same as in Fig. 5) but where now we allow the phase transition to a mixed phase involving quark matter modelled with the MIT bag model, and subsequently to a pure deconfined quark matter phase. Parameters used here are summarised in Table 4. Note that with these parameters, the \\(\\Lambda\\) is the only hyperon to appear in the mixed phase, and does so at a much higher density than the case where the transition to a mixed phase is forbidden. We also note that with these parameters, the transition to a mixed phase occurs below saturation density, \\(\\rho_{0}\\).
The stellar solutions provide overlap with the lower end of the experimentally acceptable range.
Several investigations were made of the response of the model to a more sophisticated treatment of the quark masses in-medium, namely the NJL model. In that model the quark masses arise from dynamical chiral symmetry breaking and thus take values typical of constituent quarks at low density and drop to current quark masses at higher densities. The result is that no transition to a mixed phase is possible in this case.
The omission of hyperons in the QMC model yields a transition to a mixed phase of either NJL or MIT bag model quark matter, as the hadronic EoS is no longer as soft. This observation makes it clear that hyperons can play a significant role in the EoS. However, we acknowledge that their presence in neutron stars remains speculative.
The models considered here reveal some important things about the possible nature of the dense nuclear matter in a neutron star. It seems that if dynamical chiral symmetry breaking does indeed result in typical constituent quark masses in low density quark matter, then a phase transition from hadronic matter to quark matter is unlikely. This result invites further investigation.
The results presented in Fig. 8 indicate that the model in its current form is unable to reproduce sufficiently massive neutron stars to account for all observations, notably the observed stellar masses of 1.45 \\(M_{\\odot}\\) and larger. This is a direct result of the softness of the EoS. This issue will be explored in a future publication via the inclusion of Fock terms, which have been shown to have an effect on the scalar and vector potentials [43].
Many open questions remain to be investigated in further work, including the effects of Fock terms, and the density dependence of the bag energy density in the quark phase, which can be calculated explicitly within the NJL model. The quark matter models used here are still not the most sophisticated models available, and further work may involve an investigation of the effects of color-superconducting quark matter [44; 45].
###### Acknowledgements.
This research was supported in part by DOE contract DE-AC05-06OR23177, (under which Jefferson Science Associates, LLC, operates Jefferson Lab) and in part by the Australian Research Council. JDC would like to thank
Figure 7: Species fractions, \\(Y_{i}\\), for octet QMC (the same as in Fig. 6 but now where the bag energy density has been increased to \\(B^{1/4}\\) = 195 MeV). Note that now the appearance of hyperons occurs at a smaller density than in the case of Fig. 6, the transition to a mixed phase occurs at a slightly larger density, and that now \\(\\Xi\\) hyperons are present in the mixed phase.
Jefferson Lab for their hospitality and Ping Wang for helpful discussions.
## References
* Podsiadlowski et al. (2005) P. Podsiadlowski et al., Mon. Not. Roy. Astron. Soc. **361**, 1243 (2005), eprint astro-ph/0506566.
* Grigorian et al. (2006) H. Grigorian, D. Blaschke, and T. Klahn (2006), eprint astro-ph/0611595.
* Klahn et al. (2007) T. Klahn et al., Phys. Lett. **B654**, 170 (2007), eprint nucl-th/0609067.
* Lattimer and Prakash (2001) J. M. Lattimer and M. Prakash, Astrophys. J. **550**, 426 (2001), eprint astro-ph/0002232.
* Heiselberg and Hjorth-Jensen (2000) H. Heiselberg and M. Hjorth-Jensen, Phys. Rept. **328**, 237 (2000), eprint nucl-th/9902033.
* Weber (2005) F. Weber, Prog. Part. Nucl. Phys. **54**, 193 (2005), eprint astro-ph/0407155.
* Schaffner-Bielich (2005) J. Schaffner-Bielich, J. Phys. **G31**, S651 (2005), eprint astro-ph/0412215.
* Weber and Weigel (1989) F. Weber and M. K. Weigel, Nucl. Phys. **A493**, 549 (1989).
* Chin and Walecka (1974) S. A. Chin and J. D. Walecka, Phys. Lett. **B52**, 24 (1974).
* Danielewicz et al. (2002) P. Danielewicz, R. Lacey, and W. G. Lynch, Science **298**, 1592 (2002), eprint nucl-th/0208016.
* Worley et al. (2008) A. Worley, P. G. Krastev, and B.-A. Li (2008), eprint 0801.1653.
* Ozel (2006) F. Ozel, Nature **441**, 1115 (2006).
* Alford et al. (2007) M. Alford et al., Nature **445**, E7 (2007), eprint astro-ph/0606524.
* Guichon and Thomas (2004) P. A. M. Guichon and A. W. Thomas, Phys. Rev. Lett. **93**, 132502 (2004), eprint nucl-th/0402064.
* Guichon et al. (2006) P. A. M. Guichon, H. H. Matevosyan, N. Sandulescu, and A. W. Thomas, Nucl. Phys. **A772**, 1 (2006), eprint nucl-th/0603044.
* Rikovska-Stone et al. (2007) J. Rikovska-Stone, P. A. M. Guichon, H. H. Matevosyan, and A. W. Thomas, Nucl. Phys. **A792**, 341 (2007), eprint nucl-th/0611030.
* Guichon et al. (2007) P. A. M. Guichon, A. W. Thomas, and K. Tsushima (2007), eprint 0712.1925.
* Glendenning (2001) N. K. Glendenning, Phys. Rept. **342**, 393 (2001).
* Serot and Walecka (1986) B. D. Serot and J. D. Walecka, Adv. Nucl. Phys. **16**, 1 (1986).
* Furnstahl and Serot (2000) R. J. Furnstahl and B. D. Serot, Comments Nucl. Part. Phys. **2**, A23 (2000), eprint nucl-th/0005072.
* Guichon (1988) P. A. M. Guichon, Phys. Lett. **B200**, 235 (1988).
Figure 8: (Color online) Solutions of the TOV equations for the total stellar mass and radius for octet QMC, where a phase transition to mixed phase is explicitly forbidden (as shown in Fig. 5) and the same model with an allowed phase transition to 3-flavor quark matter modelled with the MIT bag model (as shown in Fig. 6). Also shown is the data point from [12]. The points on the vertical axis are the maximum masses in the respective models. The causal and general relativistic limits on mass and radius are also shown.
* Saito et al. (1997) K. Saito, K. Tsushima, and A. W. Thomas, Phys. Rev. **C55**, 2637 (1997), eprint nucl-th/9612001.
* Saito et al. (2007) K. Saito, K. Tsushima, and A. W. Thomas, Prog. Part. Nucl. Phys. **58**, 1 (2007), eprint hep-ph/0506314.
* Muller and Jennings (1997) H. Muller and B. K. Jennings, Nucl. Phys. **A626**, 966 (1997), eprint nucl-th/9706049.
* Yao et al. (2006) W. M. Yao et al. (Particle Data Group), J. Phys. **G33**, 1 (2006).
* Rijken et al. (1999) T. A. Rijken, V. G. J. Stoks, and Y. Yamamoto, Phys. Rev. **C59**, 21 (1999), eprint nucl-th/9807082.
* Guichon et al. (1996) P. A. M. Guichon, K. Saito, E. N. Rodionov, and A. W. Thomas, Nucl. Phys. **A601**, 349 (1996), eprint nucl-th/9509034.
* Thomas et al. (2004) A. W. Thomas, P. A. M. Guichon, D. B. Leinweber, and R. D. Young, Prog. Theor. Phys. Suppl. **156**, 124 (2004), eprint nucl-th/0411014.
* Ericson and Chanfray (2008) M. Ericson and G. Chanfray, AIP Conf. Proc. **1030**, 13 (2008), eprint 0804.1683.
* Massot and Chanfray (2008) E. Massot and G. Chanfray (2008), eprint 0803.1719.
* Chanfray et al. (2003) G. Chanfray, M. Ericson, and P. A. M. Guichon, Phys. Rev. **C68**, 035209 (2003), eprint nucl-th/0305058.
* Chodos et al. (1974) A. Chodos, R. L. Jaffe, K. Johnson, C. B. Thorn, and V. F. Weisskopf, Phys. Rev. **D9**, 3471 (1974).
* Bentz and Thomas (2001) W. Bentz and A. W. Thomas, Nucl. Phys. **A696**, 138 (2001), eprint nucl-th/0105022.
* Nambu and Jona-Lasinio (1961) Y. Nambu and G. Jona-Lasinio, Phys. Rev. **122**, 345 (1961).
* Hellstern et al. (1997) G. Hellstern, R. Alkofer, and H. Reinhardt, Nucl. Phys. **A625**, 697 (1997), eprint hep-ph/9706551.
* Ebert et al. (1996) D. Ebert, T. Feldmann, and H. Reinhardt, Phys. Lett. **B388**, 154 (1996), eprint hep-ph/9608223.
* Cloet et al. (2005) I. C. Cloet, W. Bentz, and A. W. Thomas, Phys. Rev. Lett. **95**, 052302 (2005), eprint nucl-th/0504019.
* Cloet et al. (2006) I. C. Cloet, W. Bentz, and A. W. Thomas, Phys. Lett. **B642**, 210 (2006), eprint nucl-th/0605061.
* Reif (1965) F. Reif, _Fundamentals of Statistical and Thermal Physics (McGraw-Hill Series in Fundamentals of Physics)_ (McGraw-Hill Science/Engineering/Math, 1965), ISBN 0070518009.
* Oppenheimer and Volkoff (1939) J. R. Oppenheimer and G. M. Volkoff, Phys. Rev. **55**, 374 (1939).
* Lattimer and Schutz (2005) J. M. Lattimer and B. F. Schutz, Astrophys. J. **629**, 979 (2005), eprint astro-ph/0411470.
* Owen (2005) B. J. Owen, Phys. Rev. Lett. **95**, 211101 (2005), eprint astro-ph/0503399.
* Krein et al. (1999) G. Krein, A. W. Thomas, and K. Tsushima, Nucl. Phys. **A650**, 313 (1999), eprint nucl-th/9810023.
* Alford et al. (2007) M. G. Alford, A. Schmitt, K. Rajagopal, and T. Schafer (2007), eprint 0709.4635.
* Lawley et al. (2006) S. Lawley, W. Bentz, and A. W. Thomas, J. Phys. **G32**, 667 (2006), eprint nucl-th/0602014.
Figure 9: Species fractions for octet QMC in \\(\\beta\\)-equilibrium, where the phase transition to a mixed phase is explicitly forbidden, as a function of stellar radius for a stellar solution with a total mass of 1.2 \\(M_{\\odot}\\). The parameters used here are the same as those used to produce Fig. 5.
Figure 10: Species fractions for octet QMC with a phase transition to 3-flavor quark matter modelled with the MIT bag model, as a function of stellar radius for a stellar solution with a total mass of 1.2 \\(M_{\\odot}\\). The parameters used here are the same as those used to produce Fig. 6. Note that in this case one finds pure deconfined 3-flavor quark matter at the core (all of some 3.5 km) of this star, and still a small proportion of \\(\\Lambda\\) in the mixed phase.
Figure 11: Example of the interior of a star of total stellar mass \\(M=1.2M_{\\odot}\\) where the bag energy density is given a slightly higher value from that used in Fig. 10 (increased from \\(B^{1/4}=180\\) MeV to 195 MeV), but the quark masses remain the same. This illustrates that with relatively minor adjustments to the parameters, large changes can be introduced to the final solution. In this case \\(\\Xi\\) hyperons can provide a nonzero contribution to the composition of a star. Note that in this case, quark matter appears at 8 km, and at the core there exists a mixed phase containing nucleons, quarks, as well as \\(\\Lambda\\), \\(\\Xi^{0}\\), and \\(\\Xi^{-}\\) hyperons.
Figure 12: (Color online) Densities in the mixed phase for octet QMC mixed with 3-flavor quark matter modelled with the MIT bag model. Note that at all values of \\(\\chi\\) (the mixing parameter according to Eq. (37)), the equivalent quark baryon density is greater than the hadronic baryon density, allowing the total baryon density to increase monotonically. The total density is found via Eq. (37).
Figure 13: Species fractions for a phase transition from QMC nuclear matter to 3-flavor quark matter modelled with NJL. Note that in this unphysical case, a phase transition is possible, and occurs at a value of about \\(\\rho=0.47\\) fm\\({}^{-3}\\). We note the coincidence with the density of one baryon per baryon volume, but draw no conclusions from this. A similar transition from QMC nuclear matter to 3-flavor quark matter modelled with the MIT bag model produces results almost identical to those of Fig. 6 except that in that case there is no contribution from the \\(\\Lambda\\) hyperon.
Figure 14: (Color online) Charge densities (in units of the proton charge per cubic fm) in the mixed phase for a transition from octet QMC to 3-flavor quark matter modelled with the MIT bag model. Note that following the mixed phase, the quarks are able to satisfy charge neutrality with no leptons. \(\chi\) is the mixing parameter within the mixed phase according to Eq. (37).

We investigate the possibility and consequences of phase transitions from an equation of state (EoS) describing nucleons and hyperons interacting via mean-fields of \(\sigma\), \(\omega\), and \(\rho\) mesons in the recently improved Quark-Meson Coupling (QMC) model to an EoS describing a Fermi gas of quarks in an MIT bag. The transition to a mixed phase of baryons and deconfined quarks, and subsequently to a pure deconfined quark phase, is described using the method of Glendenning. The overall EoS for the three phases is calculated for various scenarios and these are used to calculate stellar solutions using the Tolman-Oppenheimer-Volkoff equations. The results are compared to recent experimental data, and the validity of each case is discussed with consequences for determining the species content of the interior of neutron stars.
pacs: 26.60.Kp, 21.65.Qr, 12.39.-x
Florian Hartig
[email protected]
Martin Drechsler
[email protected] UFZ - Helmholtz Centre for Environmental Research, Department of Ecological Modelling, Permoserstr. 15, 04318 Leipzig, Germany
## 1 Introduction
Market-based instruments such as payments (Wunder, 2007; Drechsler et al., 2007), auctions (Latacz-Lohmann and Van der Hamsvoort, 1998) or biodiversity offset trading (Panayotou, 1994; Chomitz, 2004) have been suggested as a means to complement existing reserves by inducing biodiversity protection on private lands. Market-based instruments are currently being used or tested in many countries around the world. Some examples are conservation and wetland mitigation banking in the US (Salzman and Ruhl, 2000; Wilcove and Lee, 2004; Fox and Nino-Murcia, 2005) or market schemes in Australia (Coggan and Whitten, 2005; Latacz-Lohmann and Schilizzi, 2005). One of the reasons for the increasing popularity of these instruments is the realization that markets may achieve a more targeted and therefore more cost-efficient correction of a conservation problem, in particular because landowners have more information about their local costs and can choose the allocation of conservation measures accordingly (Jack et al., 2008). Another reason is that market-based instruments are well suited for targeting multiple ecosystem services (e.g. conservation and carbon sequestration (Nelson et al., 2008)), a point which has been highlighted in a recent statement of the European Union (EU-Commission, 2007).
At the same time, however, there has been considerable concern over whether current implementations of conservation markets target the right entities. At present, market-based policies for conservation tend to use simple and indirect incentives, such as payments for certain farming practices (Ferraro and Kiss, 2002). But are those incentives efficient in protecting threatened species, or are we paying \"money for nothing\" (Ferraro and Pattanayak, 2006)? Examining the structure of the given incentives for landowners is the key to answering these questions. What defines a unit of conservation? What are we paying landowners for?
The overall goal of global conservation efforts is to ensure the persistence of biodiversity in our landscapes (Margules and Pressey, 2000). Therefore, it would be ideal to assess the market value of a conservation measure directly by assessing its effect on species survival (Williams and Araujo, 2000; Bruggeman and Jones, 2008). Unfortunately, applying this method to real-world situations is often not feasible because direct monitoring or detailed population models are too expensive or not available (Jack et al., 2008). Moreover, the efficiency of markets crucially depends on the information available to landowners. If landowners do not understand the evaluation criteria for their land, they may choose suboptimal land configurations, or they may decide not to participate in the market at all. Therefore, practically all existing market schemes use a metric, given by a number of indices, that relates measurable quantities of a site (e.g. size) to the site's market value.
Most of these existing schemes (e.g. habitat banking in the US) base their evaluation solely on the quality and size of the local site without considering its surroundings. This raises some concern because in many cases, the ecological value of a typical private property (e.g. arable field, forest lot) does in fact depend on neighboring properties. Populations or ecosystems may exhibit thresholds for the effectiveness of conservation measures, which implies that a local measure may be ineffective when it is not accompanied by other measures. (Hanski et al., 1996; Scheffer et al., 2001). Furthermore, for many endangered species, not only the absolute loss of habitat area, but also habitat fragmentation is a major cause of population decline (compare e.g. Saunders et al., 1991; Fahrig, 2002). Therefore, metrics that only evaluate sites locally may set the wrong incentives because they do not correspond to the real conservational value of a site.
Spatial metrics that consider the surroundings of a site are available and are widely used for systematic reserve site selection (e.g. Moilanen, 2005; van Teeffelen et al., 2006). Yet, simply transferring spatial metrics from conservation planning into connectivity-dependent incentives for landowners (in the following, we refer to such incentives as "spatial incentives" for short) would be short-sighted. Conservation planning metrics have been developed for assessing and optimizing the ecological value of a habitat network from the viewpoint of a planner who considers the whole landscape. Landowners in conservation markets, on the other hand, react to the given incentives independently and with limited knowledge, striving for maximization of their individual utility rather than maximizing global welfare. The fact that the value of a site depends on neighboring sites implies that land use decisions may create costs or benefits for neighboring landowners. In economics, such costs or benefits are referred to as externalities. It is well known that markets may fail to deliver an optimal allocation of land use in the presence of such externalities (Mills, 1980). Another problem is that, unless we assume perfect information and unlimited intellectual capacities, we must take into account that landowners may fail to find the optimal adaptation of their land use in the presence of complicated spatial evaluation rules (Hartig and Drechsler, submitted). Thus, the need to consider human behavior in metrics for market-based instruments is characterized by a trade-off: ecological accuracy calls for a metric that is complex enough to capture all details of the relevant ecological processes, but socio-economic reality may suggest compromises towards more practical and robust metrics.
In this paper, we combine a spatially explicit population model with an agent-based simulation model to assess the effect of connectivity-dependent incentives in a virtual conservation market. One key assumption is that landowners do not react optimally to the given incentives, but base their decisions only on the present land configuration and their estimated costs and benefits for the next period. Thus, we seek to optimize for ecological parameters such as dispersal as well as for economic parameters such as behavior of landowners. To simulate the reactions of landowners towards a given spatial metric, we use the conservation market model introduced in (Hartig and Drechsler, submitted). A spatially explicit metapopulation model is placed on top of the emerging landscape structure to evaluate the conservation success for different species in terms of survival probability at a fixed time horizon.
## 2 Methods
### Overview and purpose
The aim of this study is to design spatial incentives that result in cost-effective conservation when there are many landowners and the conservation outcome depends on the combination of decisions by landowners. Here, cost-effective means that we maximize the conservation effect at a given budget. The model used contains two submodels: An economic submodel that simulates the trading of conservation credits and an ecological submodel to assess the viability of several species in the dynamic landscape that emerges from the trading activity. The driver for trading and the subsequent change of the landscape configuration is economic change in the region, reflected by heterogeneously changing costs of maintaining a local site in a conserved state. We first describe the state variables of the model, followed by the economic and the ecological submodel and the coupling of the submodels. The coupled model is then used to find the cost-effective metric by comparing the forecasted species persistence across a range of different parameterizations of the metric. Fig. 1 shows a graphical representation of our model approach.
### State variables and scales
The simulation is conducted on a rectangular \\(30\\times 30\\) grid with periodic boundary conditions (i.e. the grid has the topology of a torus). The \\(n=30^{2}\\) grid cells represent both the economic (property) units and the ecological (habitat) units. Although the model may be applied to any spatial and temporal scale, we think of grid cells as being of the size of an average agricultural field in Europe (around 10 ha), and time steps being a year. Grid cells \\(x_{i}\\) occur in two states: They can be conserved at a cost \\(c_{i}\\) and thus provide habitat for the species, or they are used for other economic purposes, resulting in no costs. The conservation state of a grid cell is labelled with \\(\\sigma_{i}\\), \\(\\sigma_{i}=1\\) being a conserved cell and \\(\\sigma_{i}=0\\) being an unconserved cell. Conserved grid cells may be either occupied (populated) \\(p_{i}=1\\) or unoccupied \\(p_{i}=0\\) by the species under consideration. Unconserved grid cells can never be occupied. A list of the state variables and parameters of the two submodels is given in Table 1.
### Economic model
The economic model describes the decisions of landowners to establish, maintain, or quit a conservation measure on their land (grid cell) in each period. Landowners decisions are based on whether conservation or alternative land use generates a higher return. The returns on the two land use types are influenced by dynamic, spatially heterogeneous costs for conserving a grid cell and by the metric of the conservation market, which decides on the amount of conservation credits to be earned with a particular site, and by the current market price for conservation credits. The model is designed as a spatially explicit, agent-based partial equilibrium model (compare Drechsler and Watzold, in press; Hartig and Drechsler, submitted).
A conserved grid cell \\(x_{i}\\) produces a certain amount of conservation credits \\(\\xi_{i}\\) depending on the number of conserved grid cells in its neighborhood. We use the following metric to determine \\(\\xi_{i}\\):
\\[\\xi_{i}=(1-m)+m\\cdot\\zeta_{i}(l)\\;. \\tag{1}\\]
The first term \\(1-m\\) is independent of the connectivity and may be seen as a base reward for the conserved area. The parameter \\(m\\) is a weighting factor that determines the importance of connectivity compared to area. The second term \\(m\\cdot\\zeta(l)\\) includes the connectivity of the site, measured by the proportion of conserved sites within a circle of radius \\(l\\):
\\[\\zeta_{i}(l)=\\left(\\sum_{d_{ij}<l}\\sigma_{j}\\right)\\cdot\\left(\\sum_{d_{ij}<l}1 \\right)^{-1}\\;. \\tag{2}\\]
Here, \\(d_{ij}\\) refers to the distance between the focal cell \\(x_{i}\\) and another cell \\(x_{j}\\). Fig. 2 shows a graphical illustration of this connectivity measure. The total amount of credits in the market is given by the sum of \\(\\xi_{i}\\) over all conserved grid cells:
\\[U=\\sum_{i=1}^{n}\\sigma_{i}\\xi_{i}\\;. \\tag{3}\\]
The conservation of a site results in costs that differ among grid cells. Conservation costs may vary over space and time (Ando et al., 1998; Polasky et al., 2008). We use three different algorithms to generate patterns of random dynamic costs \(c_{i}(t)\). All algorithms create average costs of 1, but they differ in the spatial and temporal distribution of costs.
\\begin{table}
\\begin{tabular}{l l l} \\hline \\hline Symbol & Connotation & Range \\\\ \\hline \\multicolumn{3}{l}{State variables:} \\\\ \\(x_{i}\\) & Position of the i-th cell on the grid \\\\ \\(\\sigma_{i}(t)\\) & Conservation state of the i-th cell & \\(\\{0,1\\}\\) \\\\ \\(p_{i}(t)\\) & Population state of the i-th cell & \\(\\{0,1\\}\\) \\\\ \\(c_{i}(t)\\) & Opportunity costs of \\(\\sigma_{i}\\) = 1 at t & around 1 \\\\ \\multicolumn{3}{l}{Parameters economic model:} \\\\ \\(\\Delta\\) & Cost heterogeneity & \\([0..1]\\) \\\\ \\(\\omega\\) & Cost correlation & \\([0..\\infty]\\) \\\\ \\(m\\) & Connectivity weight & \\([0..1]\\) \\\\ \\(l\\) & Connectivity length & \\([0..\\infty]\\) \\\\ \\(\\lambda\\) & Budget constraint & \\([0..\\infty]\\) \\\\ \\multicolumn{3}{l}{Parameters ecological model:} \\\\ \\(e\\) & Local extinction risk & \\([0..1]\\) \\\\ \\(r\\) & Emigration rate & \\([0..\\infty]\\) \\\\ \\(r_{d}\\) & Emigration rate after destruction & \\([0..\\infty]\\) \\\\ \\(\\alpha^{-1}\\) & Dispersal distance & \\([0..\\infty]\\) \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: List of state variables (top), parameters of the economic model (middle) and parameters of the ecological model (bottom). Note that although we omit to denote the time dependence (\\(t\\)) explicitly throughout the main text, all state variables and expressions derived from state variables are time dependent.
Figure 1: Modelling approach: Drivers are spatially heterogeneous, dynamic costs for each site. On the basis of these costs and the spatial incentives, conservation measures are allocated by the economic submodel. The resulting dynamic landscape is used as an input for the ecological model, which estimates species survival probabilities on this landscape.
Algorithm 1 generates spatially and temporally uncorrelated random costs by drawing from a uniform distribution of width \(2\Delta\) at each time step. Algorithm 2 creates spatially uncorrelated, but temporally correlated costs by applying on each grid cell a random walk of maximum step length \(\Delta\) together with a small rebounding effect that pushes costs towards \(1\) with strength \(\omega\). Algorithm 3 creates spatio-temporally correlated costs, using a random walk of maximum step length \(\Delta\) combined with a spatial correlation term that pushes costs with strength \(\omega\) towards the average costs in the neighborhood. A mathematical description of the three algorithms is given in Appendix A, together with figures of the created cost distributions (Fig. 7 and Fig. 8).
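The exact update rules are given in Appendix A; the sketch below is one plausible implementation consistent with the verbal description above, and the precise form of the rebound and correlation terms is our own reading.

```python
import numpy as np

def costs_random(shape, delta):
    # Algorithm 1: i.i.d. uniform costs of mean 1 and width 2*delta
    return 1.0 + np.random.uniform(-delta, delta, size=shape)

def costs_random_walk(c, delta, omega):
    # Algorithm 2: random walk with a rebound of strength omega towards 1
    step = np.random.uniform(-delta, delta, size=c.shape)
    return c + step + omega * (1.0 - c)

def costs_correlated_walk(c, delta, omega):
    # Algorithm 3: random walk pushed towards the mean cost of the
    # 8-cell neighborhood (torus boundaries) with strength omega
    neigh = sum(np.roll(np.roll(c, dx, 0), dy, 1)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)) / 8.0
    step = np.random.uniform(-delta, delta, size=c.shape)
    return c + step + omega * (neigh - c)
```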
To simulate trading, we introduced a market price \\(P\\) for credits. The benefits to be earned by a site are given by \\(P\\cdot\\xi\\) where \\(\\xi\\) is the amount of credits to be earned by a site (eq. 1). Based on his costs and the potential benefits, each landowner decides whether to conserve his land or not. The model has two options for determining the equilibrium price of the market: Either the price is adjusted until a certain target level for the total amount of produced conservation credits \\(U\\) (eq. 3) is met, or the price is adjusted until a certain level of aggregated costs for the conservation is reached. By aggregated costs, we mean the sum of the costs of all conserved sites:
\\[C=\\sum_{i=1}^{n}\\sigma_{i}c_{i}\\;. \\tag{4}\\]
Fixing the target reflects a situation where the quantity of conservation credits is fixed. This is, for example, the case in a tradable permit scheme. Fixing the costs, on the other hand, could correspond to a payment scheme where a conservation agency buys credits until a budget constraint is reached. The two options differ when global properties of the cost distribution, such as the mean, change over time. In our simulation, however, costs are in a steady state that is normalized to a mean of \\(1\\). Thus, both options are approximately identical except for finite size effects, which would disappear in the limit of an infinitely large landscape. We chose the second option of fixing the budget for the analysis because it allows an easier comparison between different metrics. Appendix B.1 gives a detailed description of the scheduling of the economic model.
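Appendix B.1 specifies the actual trading schedule; as a simplified sketch, the equilibrium price can be found by bisection, with landowners myopically conserving whenever the credit revenue covers their cost (the best-response loop and iteration limits are our own simplifications).

```python
import numpy as np

def market_equilibrium(c, credits_fn, budget, P_hi=10.0, n_bisect=40):
    """Bisect the credit price P until the aggregated cost of conserved
    cells (Eq. 4) meets the budget; credits_fn(sigma) returns the credit
    array xi of Eqs. (1)-(2) for a landscape sigma."""
    P_lo = 0.0
    sigma = np.zeros_like(c)
    for _ in range(n_bisect):
        P = 0.5 * (P_lo + P_hi)
        sigma = np.zeros_like(c)
        for _ in range(50):                   # myopic best responses
            new = (P * credits_fn(sigma) >= c).astype(float)
            if np.array_equal(new, sigma):
                break
            sigma = new
        if np.sum(sigma * c) < budget:        # aggregated costs, Eq. (4)
            P_lo = P
        else:
            P_hi = P
    return P, sigma
```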
### Ecological model
To evaluate conservation success in the emerging dynamic landscapes, we use a stochastic metapopulation model (Hanski, 1998, 1999). Each conserved grid cell is treated as a habitat patch, meaning that each grid cell may hold a local population of the species. Local populations produce emigrants which may disperse and establish a new local population on an unoccupied cell. At the same time, local populations are subject to local extinction, which may be caused e.g. by demographic or environmental stochasticity. The population as a whole can persist on the landscape if the average recolonization rate is higher than the average local extinction risk; yet stochastic fluctuations of the number of occupied patches may eventually cause extinction of the whole metapopulation. The better the connectivity among patches, and the more patches in the network, the lower the probability of such a global extinction.
Local extinctions are modelled by a constant chance \\(e\\) of each local population to go extinct per time step. The amount of dispersers arriving from a source patch \\(x_{j}\\) at a target patch \\(x_{i}\\) is given by the following dispersal kernel:
\\[p_{ij}=r\\cdot\\frac{1}{\\sum_{i}\\sigma_{i}-1}\\cdot e^{-\\alpha\\cdot d_{ij}}\\;, \\tag{5}\\]
where \\(r\\) is the emigration rate, the term \\((\\sum_{i}\\sigma_{i}-1)^{-1}\\) divides the number of dispersing individuals by the available habitat patches, and the exponential term describes mortality risk during dispersal as a function of distance between \\(x_{i}\\) and \\(x_{j}\\). If a patch has been destroyed at the current time step, we set the emigration rate to \\(r_{d}\\), assuming that a proportion of \\(r_{d}\\) of the population will be able to disperse before destruction. The sum of all arriving immigrants according to eq. 5 (truncated to 1) is taken as the probability that this patch is colonized at the current time step. Appendix B.2 gives a detailed description of the scheduling of the ecological model.
### Parametrization and analysis of the model
Different species have different connectivity requirements depending on their dispersal abilities. Therefore, we expect an optimized spatial metric to reflect this by values of the connectivity weight \\(m\\) and the connectivity length \\(l\\) that are related to the species characteristics \\(r,r_{d}\\) and \\(\\alpha\\). Additionally, optimal values for \\(m\\) and \\(l\\) may be affected by economic conditions, i.e. the distribution of conservation costs. To analyze the effect of species characteristics and the cost distribution on the optimal spatial incentive, we varied both the connectivity weight \\(m\\) and the connectivity length \\(l\\) of the metric eq. 1 for three different cost scenarios and for three different species types.
Figure 2: Illustration of the connectivity measure: The connectivity \(\zeta_{i}(l)\) is the fraction of conserved sites within a circle of radius \(l\) around the focal site.
The three cost scenarios were generated by Algorithm 1 at \\(\\Delta=0.2\\), Algorithm 2 at \\(\\Delta=5\\cdot 10^{-5}\\) and \\(\\omega=0.0065\\), and Algorithm 3 at \\(\\Delta=0.015\\) and \\(\\omega=0.006\\). Table 2 displays a summary of the three scenarios. Remember that the first scenario creates uncorrelated costs, the second creates temporally correlated costs and the third scenario creates spatio-temporally correlated costs. Figs. 7 and 8 in Appendix A show the spatial and temporal cost distribution generated by the chosen parameters.
For the species, we consider three functional types: short-range, intermediate, and global dispersers. The parametrization of the three species is displayed in Table 3. To assess the extinction risk for each species, we ran the simulation between 300 and 1000 times with different random economic starting conditions and calculated the probability of a metapopulation extinction after 1000 time steps.
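The extinction probability is thus estimated by simple Monte Carlo sampling over replicate runs; schematically (the binomial standard error is our addition, as a guide to the sampling accuracy):

```python
import numpy as np

def survival_probability(run_once, n_runs=300, horizon=1000):
    """run_once(horizon) -> True if the metapopulation is still extant
    at the time horizon for one random economic starting condition."""
    extant = np.array([run_once(horizon) for _ in range(n_runs)], float)
    p = extant.mean()
    se = np.sqrt(p * (1.0 - p) / n_runs)   # binomial standard error
    return p, se
```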
The budget constraint \(\lambda\) for the aggregated costs (eq. 4) was fixed at 0.03 times the number of grid cells \(n\) for scenarios with cost dynamics generated by the random walk algorithms (economic scenarios 2 and 3) and at 0.05 times the number of grid cells \(n\) for the scenarios created with the random algorithm (economic scenario 1). Exceptions are the combinations of economic scenario 3 with species 3, where aggregated costs were set at 0.1 times \(n\), and of economic scenario 1 with species 3, where aggregated costs were set at 0.18 times \(n\). The adjustment to different budgets was done to create similar survival probabilities across the nine scenarios formed by systematic combination of the three cost scenarios and the three species types.
## 3 Results
### Emerging landscapes
For all cost scenarios and all connectivity lengths, an increase in connectivity weight results in more aggregated landscape structures. The density of the clustering is controlled by the connectivity length \\(l\\), which determines how close patches have to be to be counted as connected. Smaller connectivity lengths (\\(l\\sim 1.5\\), corresponding to the direct 8-cell neighborhood) result in very dense clusters at full connectivity weight, while larger connectivity lengths lead to more loose agglomerations of conserved sites. Due to the spatial cost heterogeneity, there is a trade-off between clustering and area: At a fixed budget, a higher connectivity weight results in lower total area, but with higher clustering. Typical landscapes are displayed in Fig. 3.
### Optimal incentive
To find the most effective spatial metric \\((m,l)\\), we varied connectivity weight between 0 and 1 and connectivity length between 1.5 and 9.5 in 11 linear steps. Note that a conservation market with no spatial trading rules corresponds to a value of \\(m=0\\). The resulting survival probabilities after 1000 years for the three cost scenarios and the three species types are shown in Fig. 4. The results show that a short disperser such as species I may gain substantially from a very high connectivity weight and short to medium connectivity lengths, while globally dispersing species such as species III benefit from a low connectivity
\begin{table}
\begin{tabular}{l l l} \hline \hline Cost Scenario & Parameters & Characteristics \\ \hline
1 - Random & \(\Delta=0.2\) & uncorrelated \\
2 - Random walk & \(\Delta=5\cdot 10^{-5}\), \(\omega=0.0065\) & time correlated \\
3 - Correlated walk & \(\Delta=0.015\), \(\omega=0.006\) & space and time correlated \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overview of the cost scenarios created by the three algorithms.
Figure 3: Effect of the connectivity weight \\(m\\) and the connectivity length \\(l\\). The pictures show typical landscape structures emerging from trading with costs being sampled by Algorithm 2 at \\(\\Delta=5\\cdot 10^{-5},\\omega=0.0065\\). Conserved sites are colored black, other sites are colored white. The top row is created with a long connectivity length (\\(l=10\\)), the bottom row with a short connectivity length (\\(l=1.5\\)). The pictures in the left column are taken at \\(m=0\\), which means that no weight is put on connectivity. Consequently, the landscape structure is dominated by the sites of lowest costs. Increasing connectivity weight (\\(m=0.5\\) middle, \\(m=1\\) right) results in increasing clustering of conserved sites, but in a smaller total area. At a connectivity weight of \\(m=1\\), meaning that all weight is put on connectivity, \\(l=1.5\\) results in a very dense cluster, while the larger connectivity length \\(l=10\\) results in a more spread out configuration.
\\begin{table}
\\begin{tabular}{l l l l l} \\hline \\hline Species type & **e** & **r** & \\(\\mathbf{r_{d}}\\) & \\(\\alpha^{-1}\\) \\\\ \\hline I - short dispersal & 0.29 & 3 & 1 & 5 \\\\ II - intermediate dispersal & 0.51 & 3 & 1 & 25 \\\\ III - global dispersal & 0.66 & 3 & 1 & 1000 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 3: Parameter values for the three species types considered. \(\alpha^{-1}\) is the typical dispersal distance, measured in units of the grid cell length. With cell lengths of 100 m, this translates to typical dispersal distances of 0.5 km, 2.5 km and 100 km, respectively.
weight and are relatively insensitive to the connectivity length. For intermediate dispersers such as species II, the optimum depends on the cost scenario. An exception is cost scenario 1 with random costs, which favors a very high connectivity weight and short connectivity lengths for all species. We will discuss the reasons for this in the next subsection.
### Interpretation of the results
The observed influence of the cost scenarios on the effectiveness of the applied metric \\((m,l)\\) suggests that the emerging landscapes differ among the different cost scenarios. To analyze this difference, we plotted landscape connectivity as well as turnover (the fraction of conserved sites that are destroyed and recreated elsewhere per time step) as a function of the metric parameters \\(m\\) and \\(l\\) for the three considered cost scenarios (Fig. 5).
The results show that greater temporal randomness in the costs causes higher turnover, in that landowners switch rapidly between conserving and not conserving. This increase in turnover effectively increases the local extinction risk, because local populations go extinct at the destroyed sites, while the remaining subpopulations cannot immediately recolonize the new sites. Creating connected patches leads to more stability, as neighborhood benefits may outweigh the individual variation in cost for a cell. Thus we are not only facing a trade-off between area and connectivity, but a trade-off between area, connectivity and turnover. The latter explains why different economic scenarios lead to different optimal metrics: for random costs as in scenario 1, turnover rates are very sensitive to the chosen spatial metric. Consequently, turnover dominates species survival, and a high connectivity weight is favored for all species because it reduces the turnover rate. In contrast, the spatial metric hardly affects turnover for scenarios 2 and 3. Here, the optimization results (Fig. 4) only reflect the trade-off between connectivity and area: short-range dispersers require high connectivity weights, while global dispersers prefer larger areas.
### Multiple-species optimization
Assuming that all three species defined in Table 3 share the same habitat, but do not interact, we can also use our model to generate recommendations on how to support all three species at the same time. There are several options available to combine the survival of multiple species into one index (Nicholson and Possingham, 2006; Hartig and Drechsler, 2008). Here, we use two common indices. The first is the expected number of surviving species, which is
Figure 4: Survival probability as a function of connectivity weight (x-axis) and connectivity length (y-axis) for the three species types (columns 1-3) and for three cost scenarios (rows 1-3). Dark values represent high survival probabilities. The gray circles mark the seven combinations of \\(m,l\\) that yielded the highest survival probabilities, with larger circle size indicating a better ranking within these seven combinations. For most of the scenarios, these optimal points cluster in one small area of the parameter range. The uncertainty of the survival probability can be estimated from a binomial error model. Typical values of the absolute standard error are in the order of \\(0.01\\). This explains why there is some remaining spread of the best combinations of \\(m,l\\) when \\(m\\simeq 0\\) is favored (meaning that \\(l\\) has little influence on the model) or when survival probabilities are very similar within a larger area of \\((m,l)\\).
Figure 5: Resulting mean connectivity and turnover for the three cost scenarios as a function of connectivity weight \\(m\\) and connectivity length \\(l\\). Connectivity is measured as the mean of \\(\\zeta(1.5)\\) of all conserved sites. Turnover, the fraction of conserved sites that are destroyed and recreated elsewhere per time step, serves as an estimate for the intensity of landscape dynamics. Dark values represent low turnover and high connectivity, respectively.
given by the sum of the survival probabilities of all species. The second index is the probability of all species surviving, given by the product of the survival probabilities of all species. As for the single species case, both indices were calculated for a time horizon of 1000 years. Fig. 6 shows the resulting scores for the spatio-temporally correlated cost scenario. Both objectives suggest a moderately strong connectivity weight around \\(m=0.8\\) and a small connectivity length around 3.
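Both indices are trivial to compute once the single-species survival probabilities are known; a minimal sketch (the function name and the example numbers are ours, purely hypothetical):

```python
import numpy as np

def multi_species_scores(survival_probs):
    """Combine single-species survival probabilities into the two indices
    used above: the expected number of surviving species (their sum) and
    the probability of all species surviving (their product)."""
    p = np.asarray(survival_probs, dtype=float)
    return p.sum(), p.prod()

# e.g. hypothetical survival probabilities of species I-III under one metric (m, l):
print(multi_species_scores([0.9, 0.8, 0.95]))  # -> (2.65, 0.684)
```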
## 4 Discussion
### Main findings
We presented a coupled ecological-economic model to optimize spatial incentives in a market for conservation credits. The model shows that conservation markets that consider connectivity lead to considerably better conservation results than markets without spatial incentives (represented by \(m=0\) in Fig. 4). Generally, we find that short dispersing species do best with a high weight on connectivity and small-scale connectivity measures. Global dispersers, being largely insensitive to the spatial arrangement of conservation measures, do better with a low weight on connectivity, because this allows the creation of more conserved sites within the given budget. When conserving all species together, a relatively high weight on connectivity robustly yields the highest joint survival probability (Fig. 6). This shows once more that, if connectivity is relevant for the species of concern, spatial evaluation rules may considerably improve the cost-effectiveness of market-based instruments.
Besides species characteristics, the economic scenarios had an additional, and in some cases large, influence on the optimal spatial metric. The reason is that in the presence of dynamic conservation costs, the spatial incentive influences not only landscape connectivity, but also landscape dynamics (Fig. 5). Landscape dynamics, measured by the rate of turnover (the fraction of conserved sites that are destroyed and recreated elsewhere per time step), negatively affects species survival because the reallocation of a conserved site effectively increases the local extinction risk of the species. In most cases, turnover was negatively correlated with connectivity weight and clustering (Fig. 5). The latter explains why under cost scenario 1 (uncorrelated random costs) a stronger connectivity weight is favored for all species: the spatio-temporally uncorrelated costs of this scenario lead to very high turnover rates under a low connectivity weight. Consequently, a high connectivity weight that limits turnover is favored for all species.
### Generality of the results and future research
The ecological model used for this study neglects a number of factors frequently studied in population models: the landscape is ecologically homogeneous, we included neither local population dynamics nor a possible dependence of local extinction risk and dispersal on local population size, and we considered neither correlated environmental stochasticity nor catastrophic events. Analyzing the consequences of these factors for the cost-effectiveness of metrics for market-based instruments is a matter of future research. If required, all these factors could easily be included without changing the rest of the model, including the analysis method. Furthermore, more sophisticated policies and economic models could be introduced without changing the ecological model.
The main finding of this paper, however, namely the positive effect of relatively simple spatial incentives as opposed to no spatial incentives, will qualitatively hold for most realistic scenarios in which dispersal is a limiting factor for species. We recommend testing these ideas more often in real-world market schemes such as the examples discussed by Chomitz et al. (2006) or Drechsler et al. (2007).
The most apparent shortcoming of the model at this point is its simplified treatment of the time dimension, in particular the omission of temporal incentives such as minimum durations of conservation measures on the economic side, and of time lags for the recreation of habitat due to succession on the ecological side. It seems promising for future research to study the control of landscape dynamics through temporal incentives, either independently or in connection with spatial incentives.
### Consequences for conservation policy
We believe that our results contain three important messages for conservation policy. The first is that the inclusion of spatial incentives may provide a substantial efficiency gain for conservation markets when fragmentation is a crucial factor for the populations under consideration. Our simulations show that it is possible to account for complicated spatial ecological and economic interactions with
Figure 6: Expected number of surviving species (right) and probabilities of all species surviving (left) for the spatio-temporally correlated costs of scenario 3. The gray circles mark the seven combinations of \\(m,l\\) which yielded the highest score of the applied measure, with larger circle size indicating a better ranking within these seven combinations.
relatively simple spatial incentives. Given that most existing market-based conservation schemes worldwide do not explicitly account for spatial processes, it seems promising to examine the potential efficiency gains that could be realized by applying spatially explicit metrics for market-based conservation.
The second message is that market-based instruments are likely to produce dynamic landscapes, because a voluntary market is based on the possibility that landowners withdraw from conservation measures while others step in for them. This is not a problem in itself. A moderate amount of landscape dynamics may sometimes even benefit the conservation objective. Yet, landscape dynamics must be considered in the design of market-based instruments and in the underlying ecological models. Neglecting dynamics may lead to severe problems for the ecological effectiveness of a market scheme.
The third message is that optimal spatial incentives are not context-free. The effectiveness of a spatial metric may be sensitive to the economic situation to which it is applied. Thus, a thorough examination of both the ecological and the economic and social background is required before deciding on spatial incentives for market-based instruments.
## 5 Acknowledgements
The authors would like to thank Silvia Wissel and Karin Johst for helpful comments during the preparation of the manuscript and Anne Carney for proofreading the text. We are very grateful for the comments which were raised by Doug Bruggeman and two other anonymous reviewers. Their ideas and suggestions greatly contributed to the final manuscript.
## Appendix A Cost algorithms
Alg. 1 creates random, spatially and temporally uncorrelated costs by drawing the cost of each cell at each time step from a uniform distribution of width \(2\Delta\) centered at 1. The scheduling within one time step is as follows:
```
1:for all cells do
2:\\(c_{i}(t)=random[1-\\Delta\\cdots 1+\\Delta]\\)
3:endfor
```
**Algorithm 1** Random Costs
Alg. 2 applies a random walk to each grid cell, with no interaction between grid cells. As a result, we get a temporal correlation of the costs of each grid cell (Fig. 8), but a spatially random pattern (Fig. 7). To constrain the random walk around 1, a rebounding term \(\omega\cdot sign(1-c_{i}(t-1))\cdot\sqrt{|1-c_{i}(t-1)|}\) was added to the random walk. The scheduling within one time step is as follows:
```
1:for all cells do
2:\\(c_{i}(t)=c_{i}(t-1)+\\Delta\\cdot random[-1\\cdots 1]\\) + \\(\\omega\\cdot sign(1-c_{i}(t-1))\\cdot\\sqrt{|1-c_{i}(t-1)|}\\)
3:endfor
```
**Algorithm 2** Random Walk
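For concreteness, Algorithms 1 and 2 translate directly into a few lines of Python; this is a sketch (array and function names are ours, not from the original model code):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_costs(n_cells, delta):
    """Algorithm 1: spatially and temporally uncorrelated costs, redrawn
    each time step from the uniform interval [1 - delta, 1 + delta]."""
    return rng.uniform(1.0 - delta, 1.0 + delta, size=n_cells)

def random_walk_step(c_prev, delta, omega):
    """Algorithm 2: independent per-cell random walk, with the rebounding
    term omega * sign(1 - c) * sqrt(|1 - c|) constraining costs near 1."""
    step = delta * rng.uniform(-1.0, 1.0, size=c_prev.shape)
    rebound = omega * np.sign(1.0 - c_prev) * np.sqrt(np.abs(1.0 - c_prev))
    return c_prev + step + rebound

# cost scenario 2 on a 30 x 30 grid, equilibrated over 10000 steps as in Appendix B:
c = random_costs(900, 0.2)
for _ in range(10000):
    c = random_walk_step(c, 5e-5, 0.0065)
```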
Alg. 3 applies a random walk with an additional spatial interaction to each grid cell. It produces spatio-temporally correlated costs (Fig. 8 and Fig. 7). The scheduling within one time step is as follows:
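(The pseudocode block for Alg. 3 is missing from this copy of the manuscript. As a stand-in, the sketch below keeps the Alg. 2 update and adds a relaxation of each cell towards the mean cost of its four nearest neighbors; the form and strength of this spatial coupling are our assumptions, not the published algorithm.)

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_walk_step(costs, delta, omega, kappa=0.1):
    """Sketch of Algorithm 3 on a 2-d cost grid.  ASSUMPTION: the spatial
    interaction is a relaxation towards the 4-neighbor mean with strength
    kappa; the published pseudocode specifies the exact form."""
    step = delta * rng.uniform(-1.0, 1.0, size=costs.shape)
    rebound = omega * np.sign(1.0 - costs) * np.sqrt(np.abs(1.0 - costs))
    nbr_mean = 0.25 * (np.roll(costs, 1, 0) + np.roll(costs, -1, 0)
                       + np.roll(costs, 1, 1) + np.roll(costs, -1, 1))
    return costs + step + rebound + kappa * (nbr_mean - costs)
```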
## Appendix B Model Scheduling
### Economic Model
The economic model is initialized with a random configuration which is at the desired cost level. To ensure that the random walks are in a steady state, we ran the simulation 10000 time steps before the ecological model was initialized. Each time step, the scheduling was as follows:
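(The corresponding pseudocode, Algorithm 4, is likewise missing here. The sketch below shows one plausible time step; the greedy site selection and the assumed site score combining cost and connectivity are our reconstruction, not the published market mechanism.)

```python
import numpy as np

def economic_time_step(costs, zeta, m, budget, cost_update):
    """Hypothetical sketch of one economic time step.  costs, zeta: flat
    arrays over grid cells, zeta holding each site's connectivity value
    under the metric (m, l); budget is the aggregated-cost constraint."""
    costs = cost_update(costs)                   # 1. update costs (Alg. 1-3)
    score = (1.0 - m) + m * zeta                 # 2. ASSUMED site value under the metric
    order = np.argsort(costs / np.maximum(score, 1e-12))
    conserved = np.zeros(costs.size, dtype=bool)
    spent = 0.0
    for i in order:                              # 3. trade until the budget is exhausted
        if spent + costs[i] <= budget:
            conserved[i] = True
            spent += costs[i]
    return costs, conserved
```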
Figure 7: Spatial cost distributions generated by the random walk algorithms (Alg. 2 and 3). The two figures show the \(30\times 30\) grid cells, with high-cost cells in light and low-cost cells in dark colors. The left figure was created by the random walk (Algorithm 2) at \(\Delta=5\cdot 10^{-5},\omega=0.0065\), the right figure by the correlated random walk (Algorithm 3) at \(\Delta=0.015,\omega=0.006\). Note that low- and high-cost areas are clustered for the correlated random walk.
### Ecological Model
The ecological model was started by randomly choosing 60% of the patches as occupied. We checked that populations were in a steady state after initialization and thus the measurements were not affected by the initialization (see Grimm and Wissel, 2004). The scheduling of the ecological model within one time step is as follows:
```
1:for all populated cells do
2: Local extinction with rate \\(e\\)
3:endfor
4:for all populated cells do
5:if Patch destroyed then
6: Disperse with emigration rate \\(r_{d}\\)
7:else
8: Disperse with emigration rate \\(r\\)
9:endif
10:endfor
11:for all unpopulated cells do
12: Check if immigration successful
13:endfor
```
**Algorithm 5** Scheduling Metapopulation Model
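A compact Python rendering of Algorithm 5 (a sketch: the exponential dispersal kernel with typical distance \(\alpha^{-1}\) follows Table 3, but the precise colonization probability is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def metapopulation_step(occupied, destroyed, coords, e, r, r_d, alpha):
    """One time step of the metapopulation model (cf. Algorithm 5).
    occupied, destroyed: boolean arrays over cells; coords: (n, 2) cell
    positions.  ASSUMPTION: an empty cell is colonized with probability
    1 - exp(-pressure), where pressure sums emigrants weighted by
    exp(-alpha * distance)."""
    n = occupied.size
    occupied = occupied & (rng.random(n) >= e)            # local extinction, rate e
    emigrants = np.where(destroyed, r_d, r) * occupied    # emigration: r_d if destroyed
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    pressure = (emigrants[None, :] * np.exp(-alpha * dist)).sum(axis=1)
    colonized = ~occupied & (rng.random(n) < 1.0 - np.exp(-pressure))
    return occupied | colonized
```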
## References
* Ando et al. (1998) Ando, A., Camm, J., Polasky, S., Solow, A., 1998. Species distributions, land values, and efficient conservation. Science 279 (5359), 2126-2128.
* Bruggeman and Jones (2008) Bruggeman, D., Jones, M., 2008. Should habitat trading be based on mitigation ratios derived from landscape indices? A model-based analysis of compensatory restoration options for the red-cockaded woodpecker. Environmental Management 42 (4), 591-602.
* Chomitz (2004) Chomitz, K. M., 2004. Transferable development rights and forest protection: An exploratory analysis. International Regional Science Review 27 (3), 348-373.
* Chomitz et al. (2006) Chomitz, K. M., da Fonseca, G. A. B., Alger, K., Stoms, D. M., Honczak, M., Landau, E. C., Thomas, T. S., Thomas, W. W., Davis, F., 2006. Viable reserve networks arise from individual landholder responses to conservation incentives. Ecology and Society 11 (2), 40.
* Coggan and Whitten (2005) Coggan, A., Whitten, S. M., 2005. Market Based Instruments (MBIs) in Australia: What are they, important issues to consider and some applications to date. Tech. rep., CSIRO, accessed 29.11.2008.
* Drechsler and Wätzold (2006) Drechsler, M., Wätzold, F., in press. Applying tradable permits to biodiversity conservation: Effects of space-dependent conservation benefits and cost heterogeneity on habitat allocation. Ecological Economics.
* Drechsler et al. (2007) Drechsler, M., Wätzold, F., Johst, K., Bergmann, H., Settele, J., 2007. A model-based approach for designing cost-effective compensation payments for conservation of endangered species in real landscapes. Biological Conservation 140 (1-2), 174-186.
* EU-Commission (2007) EU-Commission, 2007. Green paper on market-based instruments for environment and related policy purposes. Tech. rep., Commission of the European Communities, accessed 29.11.2008.
* Fahrig (2002) Fahrig, L., 2002. Effect of habitat fragmentation on the extinction threshold: A synthesis. Ecological Applications 12 (2), 346-353.
* Ferraro and Kiss (2002) Ferraro, P. J., Kiss, A., 2002. Direct payments to conserve biodiversity. Science 298 (5599), 1718-1719.
* Ferraro and Pattanayak (2006) Ferraro, P. J., Pattanayak, S. K., 2006. Money for nothing? A call for empirical evaluation of biodiversity conservation investments. PLoS Biology 4 (4), E105.
* Fox and Nino-Murcia (2005) Fox, J., Nino-Murcia, A., 2005. Status of species conservation banking in the United States. Conservation Biology 19 (4), 996-1007.
* Grimm and Wissel (2004) Grimm, V., Wissel, C., 2004. The intrinsic mean time to extinction: A unifying approach to analysing persistence and viability of populations. Oikos 105 (3), 501-511.
* Hanski (1998) Hanski, I., 1998. Metapopulation dynamics. Nature 396, 41-49.
* Hanski (1999) Hanski, I., 1999. Metapopulation Ecology. Oxford University Press.
* Hanski et al. (1996) Hanski, I., Moilanen, A., Gyllenberg, M., 1996. Minimum viable metapopulation size. American Naturalist 147 (4), 527-541.
* Hartig and Drechsler (2008) Hartig, F., Drechsler, M., 2008. The time horizon and its role in multiple species conservation planning. Biological Conservation 141 (10), 2625-2631, arXiv:0807.4040.
Figure 8: Time series of the costs of a grid cell over time. Algorithm 1 which changes costs randomly at each time step creates a strongly fluctuating time series. The two random walk algorithms lead to a time-correlated series.
Hartig, F., Drechsler, M., submitted. Stay by thy neighbor? Structure formation, coordination and costs in tradable permit markets with spatial trading rules, arXiv:0808.0111.
* Jack et al. (2008) Jack, B. K., Kousky, C., Sims, K. R. E., 2008. Designing payments for ecosystem services: Lessons from previous experience with incentive-based mechanisms. Proceedings of the National Academy of Sciences 105 (28), 9465-9470.
* Latacz-Lohmann and Schilizzi (2005) Latacz-Lohmann, U., Schilizzi, S., 2005. Auctions for conservation contracts: A review of the theoretical and empirical literature. Tech. rep., Report to the Scottish Executive Environment and Rural Affairs Department.
* Latacz-Lohmann et al. (1998) Latacz-Lohmann, U., Van der Hamsvoort, C. P. C. M., 1998. Auctions as a means of creating a market for public goods from agriculture. Journal of Agricultural Economics 49 (3), 334-345.
* Margules and Pressey (2000) Margules, C. R., Pressey, R. L., 2000. Systematic conservation planning. Nature 405 (6783), 243-253.
* Mills (1980) Mills, D. E., 1980. Transferable development rights markets. Journal of Urban Economics 7 (1), 63-74.
* Moilanen (2005) Moilanen, A., 2005. Reserve selection using nonlinear species distribution models. American Naturalist 165 (6), 695-706.
* Nelson et al. (2008) Nelson, E., Polasky, S., Lewis, D. J., Plantinga, A. J., Lonsdorf, E., White, D., Bael, D., Lawler, J. J., 2008. Efficiency of incentives to jointly increase carbon sequestration and species conservation on a landscape. Proceedings of the National Academy of Sciences of the United States of America 105 (28), 9471-9476.
* Nicholson and Possingham (2006) Nicholson, E., Possingham, H. P., 2006. Objectives for multiple-species conservation planning. Conservation Biology 20 (3), 871-881.
* Panayotou (1994) Panayotou, T., 1994. Conservation of biodiversity and economic development: The concept of transferable development rights. Environmental & Resource Economics 4 (1), 91-110.
* Polasky et al. (2008) Polasky, S., Nelson, E., Camm, J., Csuti, B., Fackler, P., Lonsdorf, E., Montgomery, C., White, D., Arthur, J., Garber-Yonts, B., Haight, R., Kagan, J., Starfield, A., Tobalske, C., 2008. Where to put things? Spatial land management to sustain biodiversity and economic returns. Biological Conservation 141 (6), 1505-1524.
* Salzman and Ruhl (2000) Salzman, J., Ruhl, J. B., 2000. Currencies and the commodification of environmental law. Stanford Law Review 53 (3), 607-694.
* Saunders et al. (1991) Saunders, D. A., Hobbs, R. J., Margules, C. R., 1991. Biological consequences of ecosystem fragmentation: A review. Conservation Biology 5 (1), 18-32.
* Scheffer et al. (2001) Scheffer, M., Carpenter, S., Foley, J. A., Folke, C., Walker, B., 2001. Catastrophic shifts in ecosystems. Nature 413 (6856), 591-596.
* van Teeffelen et al. (2006) van Teeffelen, A., Cabeza, M., Moilanen, A., 2006. Connectivity, probabilities and persistence: Comparing reserve selection strategies. Biodiversity and Conservation 15 (3), 899-919.
* Wilcove and Lee (2004) Wilcove, D. S., Lee, J., 2004. Using economic and regulatory incentives to restore endangered species: Lessons learned from three new programs. Conservation Biology 18 (3), 639-645.
* Williams and Araujo (2000) Williams, P. H., Araujo, M. B., 2000. Using probability of persistence to identify important areas for biodiversity conservation. Proceedings of the Royal Society of London Series B-Biological Sciences 267 (1456), 1959-1966.
* Wunder (2007) Wunder, S., 2007. The efficiency of payments for environmental services in tropical conservation. Conservation Biology 21 (1), 48-58.
B. T. Draine
[email protected] Princeton University Observatory
Piotr J. Flatau
Scripps Institution of Oceanography, University of California, San Diego
November 3, 2021
## 1 Introduction
Electromagnetic scattering is used to study isolated particles, but increasingly also to characterize extended targets ranging from nanostructure arrays in laboratories to planetary and asteroidal regoliths. To model the absorption and scattering, Maxwell's equations must be solved for the target geometry.
For scattering by isolated particles with complex geometry, a number of different theoretical approaches have been used, including the discrete dipole approximation (DDA) [1, 2, 3, 4], also known as the coupled dipole approximation or coupled dipole method. The DDA can treat inhomogeneous targets and anisotropic materials, and has been extended to treat targets near substrates [5, 6]. Other techniques have also been employed, including the finite difference time domain (FDTD) method [7, 8].
For illumination by monochromatic plane waves, the DDA can be extended to targets that are spatially periodic and (formally) infinite in extent. This could apply, for example, to a periodic array of nanostructures in a laboratory setting, or it might be used to approximate a regolith by a periodic array of \"target unit cells\" with complex structure within each unit cell.
Generalization of the DDA (= coupled dipole method) to periodic structures was first presented by Markel [9] for a 1-dimensional chain of dipoles, and more generally by Chaumet et al. [10], who calculated the electric field near a 2-dimensional array of parallelepipeds illuminated by a plane wave. Chaumet and Sentenac [11] further extended the DDA to treat periodic structures with a finite number of defects.
From a computational standpoint, solving Maxwell's equations for periodic structures is only slightly more difficult than calculating the scattering properties of a single "unit cell" from the structure. Since it is now feasible to treat targets with \(N\gtrsim 10^{6}\) dipoles (target volume \(\gtrsim 200\lambda^{3}\), where \(\lambda\) is the wavelength), it becomes possible to treat extended objects with complex substructure.
The objective of the present paper is to present the theory of the DDA applied to scattering and absorption by structures that are periodic in one or two spatial dimensions. We also generalize the standard formalism for describing the far-field scattering properties of finite targets (the \\(2\\times 2\\) scattering amplitude matrix, and \\(4\\times 4\\) Mueller matrix) to describe scattering by periodic targets. We show how to calculate the Mueller matrix to describe scattering of arbitrarily-polarized radiation. The theoretical treatment developed here has been implemented in the open-source code DDSCAT 7 (see Appendix A).
The theory of the DDA for periodic targets is reviewed in §2, and the formalism for describing the far-field scattering properties of periodic targets is presented in §§3-5. Transmission and reflection coefficients for targets that are periodic in two dimensions are obtained in §6.
The applicability and accuracy of the DDA method are discussed in §§7 and 8, where we show scattering properties calculated using DDSCAT 7 for two geometries for which exact solutions are available for comparison: (1) an infinite cylinder and (2) an infinite slab of finite thickness. The numerical comparisons demonstrate that, for given \(\lambda\), the DDA converges to the exact solution as the interdipole spacing \(d\to 0\).
## 2 DDA for Periodic Targets
The discrete-dipole approximation (DDA) is a general technique for calculating scattering and absorption of electromagnetic radiation by targets with arbitrary geometry. The basic theory of the DDA has been presented elsewhere [3]. Conceptually, the DDA consists of approximating the target of interest by an array of polarizable points, with specified polarizabilities. Once the polarizabilities are specified, Maxwell's equations can be solved accurately for the dipole array. When applied to finite targets, the DDA is limited by the number of dipoles \\(N\\) for which computations are feasible - the limitations may arise from large memory requirements, or the large amount of computing that may be required to find a solution when \\(N\\) is large. In practice, modern desktop computers are capable of solving the DDA equations, as implemented in DDSCAT [3, 12], for \\(N\\) as large as \\(\\sim 10^{6}\\).
Developed originally to study scattering from isolated, finite structures such as dust grains [1], the DDA can be extended to treat singly- or doubly-periodic structures. Consider a collection of \\(N\\) polarizable points, defining a \"target unit cell\" (TUC). Now consider a target consisting of a 1-dimensional or 2-dimensional periodic array of identical TUCs, as illustrated in Figs. 1 and 2; we will refer to these as 1-d or 2-d targets, although the constituent TUC may have an arbitrary 3-dimensional shape. For a monochromatic incident plane wave
\\[{\\bf E}_{\\rm inc}({\\bf r},t)={\\bf E}_{0}\\exp\\left(i{\\bf k}_{0}\\cdot{\\bf r}-i \\omega t\\right)\\quad, \\tag{1}\\]
the polarizations of the dipoles in the target will oscillate coherently. Each dipole will be affected by the incident wave plus the electric field generated by _all_ of the other point dipoles.
Let index \\(j=1, ,N\\) run over the dipoles in a single TUC, and let indices \\(m\\), \\(n\\) run over replicas of the TUC. The \\((m,n)\\) replica of dipole \\(j\\) is located at
\\[{\\bf r}_{jmn}={\\bf r}_{j00}+m{\\bf L}_{u}+n{\\bf L}_{v}\\, \\tag{2}\\]
where \\({\\bf L}_{u}\\) and \\({\\bf L}_{v}\\) are the lattice vectors for the array. For 1-d targets we let \\(m\\) vary, but set \\(n=0\\). For 2-d targets, the area per TUC is
\\[A_{\\rm TUC}\\equiv|{\\bf L}_{u}\\times{\\bf L}_{v}|=L_{u}L_{v}\\sin\\theta_{uv} \\tag{3}\\]where \\(\\theta_{uv}\\) is the angle between \\({\\bf L}_{u}\\) and \\({\\bf L}_{v}\\).
The replica dipole polarization \\({\\bf P}_{jmn}(t)\\) is phase-shifted relative to \\({\\bf P}_{j00}(t)\\):
\\[{\\bf P}_{jmn}(t)\\;=\\;{\\bf P}_{j00}(t)\\exp\\left[i(m{\\bf k}_{0}\\cdot{\\bf L}_{u}+n{ \\bf k}_{0}\\cdot{\\bf L}_{v})\\right] \\tag{4}\\]
Define a matrix \\({\\bf A}\\) such that \\(-{\\bf A}_{j,kmn}{\\bf P}_{kmn}\\) gives the electric field \\({\\bf E}\\) at \\({\\bf r}_{j00}\\) produced by an oscillating dipole \\({\\bf P}_{kmn}\\) located at \\({\\bf r}_{kmn}\\). Expressions for the 3\\(\\times\\)3 tensor elements of \\({\\bf A}\\) have been presented elsewhere [e.g., 3]; \\({\\bf A}\\) depends on the target geometry and wavelength of the incident radiation, but not on the target composition or on the direction or polarization state of the incident wave.
Using eq. (2) we may construct a matrix \\({\\bf\\tilde{A}}\\) such that, for \\(j\
eq k\\), \\(-{\\bf\\tilde{A}}_{j,k}{\\bf P}_{k00}\\) gives the electric field at \\({\\bf r}_{j00}\\) produced by a dipole \\({\\bf P}_{k00}\\) and _all of its replica dipoles_\\({\\bf P}_{kmn}\\), and for \\(j=k\\) it gives the electric field at \\({\\bf r}_{j00}\\) produced only by the replica dipoles:
\\[{\\bf\\tilde{A}}_{j,k}\\,=\\,\\sum_{m=-\\infty}^{\\infty}\\sum_{n=-n_{\\rm max}}^{n_{ \\rm max}}\\left(1-\\delta_{jk}\\delta_{m0}\\delta_{n0}\\right){\\bf A}_{j,kmn}\\exp \\left[i(m{\\bf k}_{0}\\cdot{\\bf L}_{u}+n{\\bf k}_{0}\\cdot{\\bf L}_{v})\\right]\\;. \\tag{5}\\]
where \\(n_{\\rm max}=0\\) for 1-d targets and \\(n_{\\rm max}=\\infty\\) for 2-d targets, and \\(\\delta_{ij}\\) is the Kronecker delta. For \\(|m|,|n|\\to\\infty\\), location \\(j00\\) is in the radiation zone of dipole \\(kmn\\), and the electric field falls off in magnitude only as \\(1/r\\). The sums in (5) would be divergent were it not for the oscillating phases of the terms, which ensure convergence. Evaluation of these sums can be computationally-demanding when \\(k_{0}L_{y}\\) or \\(k_{0}L_{z}\\) are small. Chaumet et al. [10] have discussed methods for efficient evaluation of these sums.
We evaluate (5) numerically by introducing a factor \\(\\exp[-(\\gamma k_{0}r)^{4}]\\) to smoothly suppress the
Figure 1: (a) Target consisting of a 1-d array of TUCs and (b) showing how an infinite cylinder can be constructed from disk-like TUCs (lower). The \\(M=0\\) scattering cone, with \\(\\alpha_{s}=\\alpha_{0}\\), is illustrated.
contributions from large \\(r\\), and truncating the sums:
\\[\\mathbf{\\tilde{A}}_{j,k}\\approx{\\sum_{m,n}}^{\\prime}\\mathbf{A}_{j,kmn}\\exp\\left[i (m\\mathbf{k}_{0}\\cdot\\mathbf{L}_{u}+n\\mathbf{k}_{0}\\cdot\\mathbf{L}_{v})-(\\gamma k _{0}r_{j,kmn})^{4}\\right] \\tag{6}\\]
where \\(r_{j,kmn}\\equiv|\\mathbf{r}_{kmn}-\\mathbf{r}_{j00}|\\) and the summation is over \\((m,n)\\) with \\(r_{j,kmn}\\leq 2/\\gamma k_{0}\\), i.e., out to distances where the suppression factor \\(\\exp\\left[-(\\gamma k_{0}r)^{4}\\right]\\approx e^{-16}\\). For given \\(\\mathbf{k}_{0}\\), \\(\\mathbf{L}_{u}\\), \\(\\mathbf{L}_{v}\\), the \\(\\mathbf{\\tilde{A}}_{j,k}\\) depend only on \\(\\mathbf{r}_{j00}-\\mathbf{r}_{k00}\\), and therefore only \\(O(8N)\\) distinct \\(\\mathbf{\\tilde{A}}_{j,k}\\) require evaluation.
Ideally, one would use a very small value for the interaction cutoff parameter \\(\\gamma\\), but the number of terms [\\(\\propto\\gamma^{-1}\\) for 1-d, or \\(\\propto\\gamma^{-2}\\) for 2-d] in eq. (6) diverges as \\(\\gamma\\to 0\\). We show that \\(\\gamma\\approx 0.001\\) ensures accurate results for the cases studied here.
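To make the truncation explicit, the following scalar sketch evaluates the replica sum for a 1-d array (our own illustration: the full calculation replaces the simplified propagator \(e^{ik_{0}r}/r\) with the \(3\times 3\) tensor \({\bf A}_{j,kmn}\)):

```python
import numpy as np

def truncated_lattice_sum(k0, Ly, k0y, gamma):
    """Scalar analogue of eq. (6) for the j = k self-term of a 1-d target
    with period Ly.  Each replica m contributes a Bloch phase
    exp(i m k0y Ly), the suppression factor exp[-(gamma k0 r)^4], and a
    simplified propagator exp(i k0 r)/r; the sum is truncated at
    r <= 2/(gamma k0), where the suppression factor is ~ exp(-16)."""
    m_max = int(2.0 / (gamma * k0 * Ly))
    total = 0.0 + 0.0j
    for m in range(-m_max, m_max + 1):
        if m == 0:
            continue                   # the (j, m = n = 0) term is excluded
        r = abs(m) * Ly
        total += (np.exp(1j * m * k0y * Ly)
                  * np.exp(-(gamma * k0 * r) ** 4)
                  * np.exp(1j * k0 * r) / r)
    return total
```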
The polarizations \\(\\mathbf{P}_{j00}\\) of the dipoles in the TUC must satisfy the system of equations
\[\mathbf{P}_{j00}=\mathbf{\alpha}_{j}\left[\mathbf{E}_{\rm inc}(\mathbf{r}_{j})-\sum_{k\neq j}\mathbf{\tilde{A}}_{j,k}\mathbf{P}_{k00}\right]. \tag{7}\]
If there are \\(N\\) dipoles in one TUC, then (7) is a system of \\(3N\\) linear equations where the polarizability tensors \\(\\mathbf{\\alpha}_{j}\\) are obtained from lattice dispersion relation theory [13, 14]. After \\(\\mathbf{\\tilde{A}}\\) has been calculated, equations (7) can be solved for \\(\\mathbf{P}_{j00}\\) using iterative techniques when \\(N\\gg 1\\).
Figure 2: (a) Target consisting of a 2-d array of TUCs and (b) showing how an infinite slab is created from TUCs consisting of a single “line” of dipoles.
## 3 In the Radiation Zone
In the radiation zone \\(kr\\gg 1\\), the electric field due to dipole \\(jmn\\) is
\\[{\\bf E}_{jmn}=\\frac{k_{0}^{2}\\exp\\left(ik_{0}|{\\bf r}-{\\bf r}_{jmn}|\\right)}{|{ \\bf r}-{\\bf r}_{jmn}|}\\left[1-\\frac{({\\bf r}-{\\bf r}_{jmn})({\\bf r}-{\\bf r}_{jmn })}{|{\\bf r}-{\\bf r}_{jmn}|^{2}}\\right]{\\bf P}_{jmn}\\, \\tag{8}\\]
\\[|{\\bf r}-{\\bf r}_{jmn}| = \\left[r^{2}-2{\\bf r}\\cdot{\\bf r}_{jmn}+r_{jmn}^{2}\\right]^{1/2}\\] \\[\\approx r\\left\\{1-\\frac{{\\bf r}\\cdot{\\bf r}_{jmn}}{r^{2}}+\\frac{1}{2r^{ 2}}\\left[r_{jmn}^{2}-\\left(\\frac{{\\bf r}\\cdot{\\bf r}_{jmn}}{r}\\right)^{2} \\right]+ \\right\\}\\.\\]
Define the unit vector \\(\\hat{\\bf k}_{s}\\equiv{\\bf k}_{s}/k_{0}\\). We seek to sum the contribution of all the dipoles to the electric field propagating in direction \\(\\hat{\\bf k}_{s}\\). At location \\({\\bf r}=r\\hat{\\bf k}_{s}\\), the dominant contribution will be from dipoles located within the Fresnel zone [see, e.g., ref. 15], which will have a transverse radius \\(R_{F}\\approx(r/k_{0})^{1/2}\\). For dipoles within the Fresnel zone,
\\[\\frac{1}{|{\\bf r}-{\\bf r}_{jmn}|}\\left[1-\\frac{({\\bf r}-{\\bf r}_{jmn})({\\bf r }-{\\bf r}_{jmn})}{|{\\bf r}-{\\bf r}_{jmn}|^{2}}\\right]{\\bf P}_{jmn}\\approx \\frac{1}{r}\\left[1-\\hat{\\bf k}_{s}\\hat{\\bf k}_{s}\\right]{\\bf P}_{jmn}\\quad. \\tag{10}\\]
Thus, in the radiation zone,
\\[{\\bf E}({\\bf r})=\\frac{k_{0}^{3}}{k_{0}r}\\exp\\left(ik_{0}r\\right)\\left[1-\\hat {\\bf k}_{s}\\hat{\\bf k}_{s}\\right]\\sum_{j}{\\bf P}_{j00}\\sum_{m,n}\\exp\\left(i \\Psi_{jmn}\\right) \\tag{11}\\]
\\[\\Psi_{jmn} \\equiv m{\\bf k}_{0}\\cdot{\\bf L}_{u}+n{\\bf k}_{0}\\cdot{\\bf L}_{v}-{\\bf k }_{s}\\cdot{\\bf r}_{jmn}+\\frac{k_{0}}{2r}\\left[r_{jmn}^{2}-\\left(\\hat{\\bf k}_{ s}\\cdot{\\bf r}_{jmn}\\right)^{2}\\right]\\] \\[\\approx -{\\bf k}_{s}\\cdot{\\bf r}_{j00}+m({\\bf k}_{0}-{\\bf k}_{s})\\cdot{ \\bf L}_{u}+n({\\bf k}_{0}-{\\bf k}_{s})\\cdot{\\bf L}_{v}\\] \\[+\\frac{1}{2k_{0}r}\\bigg{[}m^{2}(k_{0}^{2}-k_{su}^{2})L_{u}^{2}+n^ {2}(k_{0}^{2}-k_{sv}^{2})L_{v}^{2}+2mn(k_{0}^{2}{\\bf L}_{u}\\cdot{\\bf L}_{v}-k_ {su}k_{sv}L_{u}L_{v})\\bigg{]}+O\\left(\\frac{mL}{r}\\right), \\tag{13}\\]
where \\(k_{su}\\equiv{\\bf k}_{s}\\cdot{\\bf L}_{u}/L_{u}\\), \\(k_{sv}\\equiv{\\bf k}_{s}\\cdot{\\bf L}_{v}/L_{v}\\), and terms of order \\((mL/r)\\) may be neglected because \\(mL/r\\sim R_{F}/r\\propto r^{-1/2}\\) as \\(r\\rightarrow\\infty\\). Thus, for \\(r\\rightarrow\\infty\\), the electric field produced by the oscillating dipoles is
\\[{\\bf E}_{s}=\\left\\{\\frac{k_{0}^{2}}{r}\\exp\\left(ik_{0}r\\right)\\left[1-\\hat{\\bf k }_{s}\\hat{\\bf k}_{s}\\right]\\sum_{j}{\\bf P}_{j00}\\exp\\left(-i{\\bf k}_{s}\\cdot{ \\bf r}_{j00}\\right)\\right\\}G(r,{\\bf k}_{s}) \\tag{14}\\]
\\[G(r,{\\bf k}_{s})\\equiv\\sum_{m,n}\\exp\\left(i\\Phi_{mn}\\right) \\tag{15}\\]
\\[\\Phi_{mn} \\equiv m({\\bf k}_{0}-{\\bf k}_{s})\\cdot{\\bf L}_{u}+n({\\bf k}_{0}-{\\bf k }_{s})\\cdot{\\bf L}_{v} \\tag{16}\\] \\[+\\frac{1}{2k_{0}r}\\left[m^{2}(k_{0}^{2}-k_{su}^{2})L_{u}^{2}+n^{ 2}(k_{0}^{2}-k_{sv}^{2})L_{v}^{2}+2mn(k_{0}^{2}{\\bf L}_{u}\\cdot{\\bf L}_{v}-k_ {su}k_{sv}L_{u}L_{v})\\right]\\.\\]
It is convenient to define
\\[{\\bf F}_{\\rm TUC}(\\hat{\\bf k}_{s})\\equiv k_{0}^{3}\\left[1-\\hat{\\bf k}_{s}\\hat{ \\bf k}_{s}\\right]\\sum_{j=1}^{N}{\\bf P}_{j00}\\exp\\left(i\\omega t-i{\\bf k}_{s} \\cdot{\\bf r}_{j00}\\right)\\, \\tag{17}\\]
so that the electric field produced by the dipoles is
\\[{\\bf E}_{s}=\\frac{\\exp\\left(i{\\bf k}_{s}\\cdot{\\bf r}-i\\omega t\\right)}{k_{0}r}{ \\bf F}_{\\rm TUC}(\\hat{\\bf k}_{s})G(r,{\\bf k}_{s})\\quad. \\tag{18}\\]
\\({\\bf F}_{\\rm TUC}\\) depends upon the scattering direction \\(\\hat{\\bf k}_{s}\\), and also upon the direction of incidence \\(\\hat{\\bf k}_{0}\\) and polarization \\({\\bf E}_{0}\\) of the incident wave.
### Isolated Finite Target: \(\nu=0\)
We will refer to finite isolated targets - consisting of only the dipoles in a single TUC - as targets that are periodic in \(\nu=0\) dimensions. For this case, we simply set \(G=1\) in eq. (14); the scattered electric field in the radiation zone is
\\[{\\bf E}_{s}=\\frac{\\exp{(ik_{0}r-i\\omega t)}}{k_{0}r}{\\bf F}_{\\rm TUC}(\\hat{\\bf k }_{s})\\quad. \\tag{19}\\]
The time-averaged scattered intensity is
\\[I_{s}=\\frac{c|{\\bf E}_{s}|^{2}}{8\\pi}=\\frac{c}{8\\pi k_{0}^{2}r^{2}}|{\\bf F}_{ \\rm TUC}|^{2}\\quad, \\tag{20}\\]
and the differential scattering cross section is
\\[\\frac{dC_{\\rm sca}}{d\\Omega}=\\frac{1}{k_{0}^{2}}\\frac{|{\\bf F}_{\\rm TUC}|^{2} }{|{\\bf E}_{0}|^{2}}\\quad. \\tag{21}\\]
### Target Periodic in One Dimension: \(\nu=1\)
Without loss of generality, we may assume that targets with 1-d periodicity repeat in the \\(\\hat{\\bf y}\\) direction. It is easy to see from eq.(15) and (16) that \\(G=0\\) except for scattering directions satisfying
\[k_{sy}=k_{0y}+M\frac{2\pi}{L_{y}}\quad,\quad M=0,\pm 1,\pm 2,\ldots\,, \tag{22}\]
where energy conservation (\\(k_{s}^{2}=k_{0}^{2}\\)) limits the allowed values of the integer \\(M\\):
\\[(-k_{0y}-k_{0})\\frac{L_{y}}{2\\pi}\\leq M\\leq(-k_{0y}+k_{0})\\frac{L_{y}}{2\\pi}\\quad. \\tag{23}\\]
If \\((k_{0}+|k_{0y}|)L_{y}<2\\pi\\), then only \\(M=0\\) scattering is allowed. Define polar angles \\(\\alpha_{0}\\) and \\(\\alpha_{s}\\) for the incident and scattered radiation, so that
\\[k_{0y} = k_{0}\\cos\\alpha_{0} \\tag{24}\\] \\[k_{sy} = k_{0}\\cos\\alpha_{s}\\quad. \\tag{25}\\]
For each allowed value of \\(k_{sy}\\), the scattering directions define a cone:
\[{\bf k}_{s} = k_{sy}\hat{\bf y}+\frac{(k_{0}^{2}-k_{sy}^{2})^{1/2}}{\sin\alpha_{0}}\left[(\hat{\bf k}_{0}-\hat{\bf y}\cos\alpha_{0})\cos\zeta+\hat{\bf y}\times\hat{\bf k}_{0}\sin\zeta\right]\, \tag{26}\]
where \\(\\zeta\\) is an azimuthal angle measured around the target axis \\(\\hat{\\bf y}\\). The sum for \\(G(r,{\\bf k}_{s})\\) is [since \\(M\\) must be an integer - see eq. (22)]
\\[G{=}\\!\\!\\sum_{m=-\\infty}^{\\infty}\\exp{(i\\Phi_{m0})} = \\!\\!\\sum_{m=-\\infty}^{\\infty}\\exp{\\left[-2\\pi iMm+i\\frac{m^{2}}{2 k_{0}r}(k_{0}^{2}-k_{sy}^{2})L_{y}^{2}\\right]} \\tag{27}\\] \\[\\to \\!\\!\\lim_{\\epsilon\\to 0^{+}}\\int_{-\\infty}^{\\infty}dm\\exp{\\!\\! \\left[\\frac{i(1+i\\epsilon)}{2k_{0}r}m^{2}(k_{0}^{2}-k_{sy}^{2})L_{y}^{2}\\right]}\\] (28) \\[= \\!\\!\\frac{(2\\pi ik_{0}r)^{1/2}}{(k_{0}^{2}-k_{sy}^{2})^{1/2}L_{y} }=\\frac{(2\\pi ik_{0}r)^{1/2}}{k_{0}L_{y}\\sin\\alpha_{s}}\\quad, \\tag{29}\\]
and the scattered electric field
\\[{\\bf E}_{s}=\\left(\\frac{2\\pi i}{k_{0}r}\\right)^{1/2}\\frac{\\exp{(ik_{0}r-i \\omega t)}}{k_{0}L_{y}\\sin\\alpha_{s}}{\\bf F}_{\\rm TUC}(\\hat{\\bf k}_{s}) \\tag{30}\\]shows the expected \\(r^{-1/2}\\) behavior far from the scatterer (the distance from the cylinder axis is \\(R=r\\sin\\alpha_{s}\\)).
For each allowed value of \\(\\hat{\\bf k}_{s}\\), the total time-averaged scattered power \\(\\bar{P}_{\\rm sca}\\), per unit length of the target, per unit azimuthal angle \\(\\zeta\\), may be written
\\[\\frac{d^{2}\\bar{P}_{\\rm sca}}{dLd\\zeta}=\\frac{|{\\bf E}_{0}|^{2}}{8\\pi}c\\,\\frac{ d^{2}C_{\\rm sca}}{dLd\\zeta}\\ \\ \\, \\tag{31}\\]
where the differential scattering cross section is
\\[\\frac{d^{2}C_{\\rm sca}}{dLd\\zeta} = \\frac{8\\pi}{|{\\bf E}_{0}|^{2}}\\frac{1}{c}\\frac{d^{2}\\bar{P}_{\\rm sca }}{dLd\\zeta}=\\frac{8\\pi}{|{\\bf E}_{0}|^{2}c}\\frac{|{\\bf E}|^{2}c}{8\\pi}R\\sin \\alpha_{s} \\tag{32}\\] \\[= \\frac{2\\pi}{k_{0}^{3}L_{y}^{2}}\\frac{|{\\bf F}_{\\rm TUC}|^{2}}{|{ \\bf E}_{0}|^{2}}\\ \\ . \\tag{33}\\]
### Target Periodic in Two Dimensions: \(\nu=2\)
For targets that are periodic in two dimensions, it is apparent from eq. (15) and (16) that \\(G=0\\) unless
\\[({\\bf k}_{s}-{\\bf k}_{0})\\cdot{\\bf L}_{u} = 2\\pi M\\ \\,\\ \\ M=0,\\pm 1,\\pm 2, , \\tag{34}\\] \\[({\\bf k}_{s}-{\\bf k}_{0})\\cdot{\\bf L}_{v} = 2\\pi N\\ \\,\\ \\ N=0,\\pm 1,\\pm 2, \\tag{35}\\]
The 2-d target constitutes a diffraction grating, with scattering allowed only in directions given by (34,35). It is convenient to define the reciprocal lattice vectors
\\[{\\bf u}\\equiv\\frac{2\\pi\\hat{\\bf x}\\times{\\bf L}_{v}}{\\hat{\\bf x}\\cdot({\\bf L}_ {u}\\times{\\bf L}_{v})}\\ \\ \\,\\ \\ \\ {\\bf v}\\equiv\\frac{2\\pi\\hat{\\bf x}\\times{\\bf L}_{u}}{\\hat{\\bf x}\\cdot({\\bf L}_ {v}\\times{\\bf L}_{u})} \\tag{36}\\]
The wave vector transverse to the surface normal is
\\[{\\bf k}_{s\\perp}\\equiv{\\bf k}_{0\\perp}+M{\\bf u}+N{\\bf v}\\ . \\tag{37}\\]
Energy conservation requires that
\\[k_{sx}^{2}=k_{0}^{2}-|{\\bf k}_{0\\perp}+M{\\bf u}+N{\\bf v}|^{2}>0. \\tag{38}\\]
For any \\((M,N)\\) allowed by eq. (38), there are two allowed values of \\(k_{sx}\\), differing by a sign; one (with \\(k_{sx}k_{0x}>0\\)) corresponds to the \\((M,N)\\) component of the transmitted wave, and the other (with \\(k_{sx}k_{0x}<0\\)) to the \\((M,N)\\) component of the reflected wave. Define
\\[\\sin\\alpha_{0} \\equiv \\frac{|k_{0x}|}{k_{0}}\\ \\ \\, \\tag{39}\\] \\[\\sin\\alpha_{s} \\equiv \\frac{|k_{sx}|}{k_{0}}\\ \\ . \\tag{40}\\]
Note that \\(\\alpha_{0}=\\pi/2\\) for normal incidence, and \\(\\alpha_{0}\\to 0\\) for grazing incidence. For \\({\\bf k}_{s}\\) satisfying eq. (34,35), we have
\\[G = \\sum_{m,n}\\exp\\left(i\\Phi_{mn}\\right) \\tag{41}\\] \\[= \\sum_{m,n}\\exp\\left\\{\\frac{i}{2k_{0}r}\\left[\\left(k_{0}^{2}-k_{su }^{2}\\right)L_{u}^{2}m^{2}\\right.\\left.+\\left(k_{0}^{2}-k_{sv}^{2}\\right)L_{v}^ {2}n^{2}+2\\left(k_{0}^{2}{\\bf L}_{u}\\cdot{\\bf L}_{v}-k_{su}k_{sv}L_{u}L_{v} \\right)mn\\right]\\right\\}\\]\\[\\rightarrow \\lim_{\\epsilon\\to 0^{+}}\\int_{-\\infty}^{\\infty}dm\\int_{-\\infty}^{ \\infty}dn \\tag{41}\\] \\[\\times\\exp\\left\\{\\frac{i(1+i\\epsilon)}{2k_{0}r}\\Big{[}(k_{0}^{2}-k_ {su}^{2})L_{u}^{2}m^{2}+k_{0}^{2}-k_{sv}^{2})L_{v}^{2}n^{2}+2(k_{0}^{2}{\\bf L}_ {u}\\cdot{\\bf L}_{v}\\!-\\!k_{su}k_{sv}L_{u}L_{v})mn\\Big{]}\\right\\}\\] \\[= \\lim_{\\epsilon\\to 0^{+}}\\frac{1}{A_{\\rm TUC}}\\int_{-\\infty}^{ \\infty}dy\\int_{-\\infty}^{\\infty}dz\\exp\\left\\{\\frac{i(1\\!+\\!i\\epsilon)}{2k_{0}r }\\Big{[}k_{0}^{2}(y^{2}\\!+\\!z^{2})-(k_{sy}y+k_{sz}z)^{2}\\Big{]}\\right\\}\\] \\[= \\frac{2\\pi ir}{k_{0}A_{\\rm TUC}\\sin\\alpha_{s}}\\;.\\]
The scattered electric field is
\\[{\\bf E}_{s}=\\frac{2\\pi i\\exp\\left(i{\\bf k}_{s}\\cdot{\\bf r}-i\\omega t\\right)}{k _{0}^{2}A_{\\rm TUC}\\sin\\alpha_{s}}{\\bf F}_{\\rm TUC}(\\hat{\\bf k}_{s})\\quad. \\tag{42}\\]
Note that \\(|{\\bf E}_{s}|\\) is independent of distance \\(x=r\\sin\\alpha_{s}\\) from the target, as expected for a target that is infinite in two directions. For pure forward scattering, \\({\\bf k}_{s}={\\bf k}_{0}\\), we must sum the incident wave \\({\\bf E}_{\\rm inc}\\) and the radiated wave \\({\\bf E}_{s}\\):
\\[{\\bf E}=\\exp\\left(i{\\bf k}_{0}\\cdot{\\bf r}-i\\omega t\\right)\\left[{\\bf E}_{0}+ \\frac{2\\pi i\\,{\\bf F}_{\\rm TUC}(\\hat{\\bf k}_{s}\\!=\\!\\hat{\\bf k}_{0})}{k_{0}^{2 }A_{\\rm TUC}\\sin\\alpha_{s}}\\right]\\;. \\tag{43}\\]
The cross section per unit target area \(A\) for scattering into direction \((M,N)\) is (for \({\bf k}_{s}\neq{\bf k}_{0}\)):
\\[\\frac{dC_{\\rm sca}(M,N)}{dA} = \\frac{|{\\bf E}|^{2}\\sin\\alpha_{s}}{|{\\bf E}_{0}|^{2}\\sin\\alpha_{0}} \\tag{44}\\] \\[= \\frac{4\\pi^{2}}{k_{0}^{4}A_{\\rm TUC}^{2}\\sin\\alpha_{0}\\sin\\alpha _{s}}\\frac{\\left|{\\bf F}_{\\rm TUC}(\\hat{\\bf k}_{s})\\right|^{2}}{|{\\bf E}_{0}| ^{2}}\\quad, \\tag{45}\\]
where \\(C_{\\rm sca}\\) can be evaluated for either transmitted or reflected waves. For the special case \\({\\bf k}_{s}={\\bf k}_{0}\\), the transmission coefficient \\(T(M,N)\\) is obtained from the total forward-propagating wave (43):
\\[T(0,0)=\\frac{1}{|{\\bf E}_{0}|^{2}}\\left|{\\bf E}_{0}+\\frac{2\\pi i\\,\\,{\\bf F}_{ \\rm TUC}(\\hat{\\bf k}_{s}\\!=\\!\\hat{\\bf k}_{0})}{k_{0}^{2}A_{\\rm TUC}\\sin\\alpha _{0}}\\right|^{2}\\quad. \\tag{46}\\]
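For a given incident wave and lattice, the allowed \((M,N)\) are easily enumerated from eqs. (36)-(40); a sketch:

```python
import numpy as np

def allowed_orders(k0, k0_perp, u, v, max_order=20):
    """Diffraction orders (M, N) satisfying eq. (38), with the reciprocal
    vectors u, v of eq. (36); k0_perp, u, v are 2-vectors in the target
    plane.  Returns {(M, N): sin(alpha_s)} via eq. (40); each allowed
    order yields one transmitted and one reflected wave."""
    k0_perp, u, v = map(np.asarray, (k0_perp, u, v))
    orders = {}
    for M in range(-max_order, max_order + 1):
        for N in range(-max_order, max_order + 1):
            ks_perp = k0_perp + M * u + N * v
            ksx2 = k0**2 - ks_perp @ ks_perp
            if ksx2 > 0.0:                           # energy conservation, eq. (38)
                orders[(M, N)] = np.sqrt(ksx2) / k0  # sin(alpha_s)
    return orders
```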
## 4 Scattering Amplitude Matrices \(S_{i}^{(\nu d)}\)
### Isolated Finite Targets: \(\nu=0\)
In the radiation zone, the scattered electric field is related to the incident electric field via the \\(2\\times 2\\) scattering amplitude matrix [16], defined so that
\\[\\left(\\begin{array}{c}{\\bf E}_{s}\\cdot\\hat{\\bf e}_{s\\parallel}\\\\ {\\bf E}_{s}\\cdot\\hat{\\bf e}_{s\\perp}\\end{array}\\right) = \\frac{i\\exp(i{\\bf k}_{s}\\cdot{\\bf r}-i\\omega t)}{k_{0}r}\\left( \\begin{array}{cc}S_{2}^{(0d)}&S_{3}^{(0d)}\\\\ S_{4}^{(0d)}&S_{1}^{(0d)}\\end{array}\\right)\\left(\\begin{array}{c}{\\bf E}_{0} \\cdot\\hat{\\bf e}_{i\\parallel}\\\\ {\\bf E}_{0}\\cdot\\hat{\\bf e}_{i\\perp}\\end{array}\\right)\\;, \\tag{47}\\]
where
\\[\\hat{\\bf e}_{i\\perp}=\\hat{\\bf e}_{s\\perp} \\equiv \\frac{\\hat{\\bf k}_{s}\\times\\hat{\\bf k}_{0}}{|\\hat{\\bf k}_{s}\\times \\hat{\\bf k}_{0}|}=\\frac{\\hat{\\bf k}_{s}\\times\\hat{\\bf k}_{0}}{1-(\\hat{\\bf k}_{ s}\\cdot\\hat{\\bf k}_{0})^{2}}=-\\hat{\\phi}_{s} \\tag{48}\\] \\[\\hat{\\bf e}_{i\\parallel} \\equiv \\hat{\\bf k}_{0}\\times\\hat{\\bf e}_{i\\perp}=\\frac{\\hat{\\bf k}_{s}-( \\hat{\\bf k}_{s}\\cdot\\hat{\\bf k}_{0})\\hat{\\bf k}_{0}}{1-(\\hat{\\bf k}_{s}\\cdot \\hat{\\bf k}_{0})^{2}}\\] (49) \\[\\hat{\\bf e}_{s\\parallel} \\equiv \\hat{\\bf k}_{s}\\times\\hat{\\bf e}_{s\\perp}=\\frac{-\\hat{\\bf k}_{0}+( \\hat{\\bf k}_{s}\\cdot\\hat{\\bf k}_{0})\\hat{\\bf k}_{s}}{1-(\\hat{\\bf k}_{s}\\cdot\\hat {\\bf k}_{0})^{2}}=\\hat{\\theta}_{s} \\tag{50}\\]
are the usual conventions for the incident and scattered polarization vectors parallel and perpendicular to the scattering plane [see, e.g., §3.2 of ref. 16].
### Target Periodic in One Dimension: \(\nu=1\)
For targets with 1-d periodicity, it is natural to generalize the scattering amplitude matrix, so that - for directions \\(\\hat{\\mathbf{k}}_{s}\\) for which scattering is allowed - the scattered electric field at a distance \\(R=r\\sin\\alpha_{s}\\) from the target is
\\[\\left(\\begin{array}{c}\\mathbf{E}_{s}\\cdot\\hat{\\mathbf{e}}_{s\\parallel}\\\\ \\mathbf{E}_{s}\\cdot\\hat{\\mathbf{e}}_{s\\perp}\\end{array}\\right)\\;=\\;\\frac{i\\exp (i\\mathbf{k}_{s}\\cdot\\mathbf{r}-i\\omega t)}{(k_{0}R)^{1/2}}\\left(\\begin{array}[ ]{cc}S_{2}^{(1d)}&S_{3}^{(1d)}\\\\ S_{4}^{(1d)}&S_{1}^{(1d)}\\end{array}\\right)\\left(\\begin{array}{c}\\mathbf{E}_ {0}\\cdot\\hat{\\mathbf{e}}_{i\\parallel}\\\\ \\mathbf{E}_{0}\\cdot\\hat{\\mathbf{e}}_{i\\perp}\\end{array}\\right) \\tag{51}\\]
for \\(\\mathbf{k}_{s}\\) satisfying eq. (22-26).
### Target Periodic in Two Dimensions: \(\nu=2\)
For targets with 2-d periodicity, it is natural to generalize the scattering amplitude matrix so that, for directions \(\mathbf{k}_{s}\neq\hat{\mathbf{k}}_{0}\) for which scattering is allowed, we write
\\[\\left(\\begin{array}{c}\\mathbf{E}_{s}\\cdot\\hat{\\mathbf{e}}_{s\\parallel}\\\\ \\mathbf{E}_{s}\\cdot\\hat{\\mathbf{e}}_{s\\perp}\\end{array}\\right)\\;=\\;i\\exp(i \\mathbf{k}_{s}\\cdot\\mathbf{r}-i\\omega t)\\left(\\begin{array}{cc}S_{2}^{(2d)} &S_{3}^{(2d)}\\\\ S_{4}^{(2d)}&S_{1}^{(2d)}\\end{array}\\right)\\left(\\begin{array}{c}\\mathbf{E} _{0}\\cdot\\hat{\\mathbf{e}}_{i\\parallel}\\\\ \\mathbf{E}_{0}\\cdot\\hat{\\mathbf{e}}_{i\\perp}\\end{array}\\right) \\tag{52}\\]
for \\(\\mathbf{k}_{s}\\) satisfying eq. (37-38).
For the special case of forward scattering (\\(M=N=0\\) and \\(\\mathbf{k}_{s}=\\mathbf{k}_{0}\\)), where the scattering plane is not simply defined by \\(\\mathbf{k}_{0}\\) and \\(\\mathbf{k}_{s}\\), it is natural to use \\(\\mathbf{k}_{0}\\) and the target normal \\(\\hat{\\mathbf{x}}\\) to define the scattering plane. Thus
\\[\\hat{\\mathbf{e}}_{i\\perp}=\\hat{\\mathbf{e}}_{s\\perp}\\;\\equiv\\;\\frac{\\mathbf{k }_{0}\\times\\mathbf{k}_{s\\perp}}{|\\mathbf{k}_{0}\\times\\mathbf{k}_{s\\perp}|} \\tag{53}\\]
with \\(\\hat{\\mathbf{e}}_{i\\parallel}\\) and \\(\\hat{\\mathbf{e}}_{s\\parallel}\\) defined by (49,50). For \\(r\\rightarrow\\infty\\)
\\[\\left(\\begin{array}{c}\\mathbf{E}\\cdot\\hat{\\mathbf{e}}_{s\\parallel}\\\\ \\mathbf{E}\\cdot\\hat{\\mathbf{e}}_{s\\perp}\\end{array}\\right)\\;=\\;i\\exp(i \\mathbf{k}_{0}\\cdot\\mathbf{r}-i\\omega t)\\!\\!\\left(\\begin{array}{cc}(S_{2}^{(2 d)}\\!-\\!i)&0\\\\ 0&(S_{1}^{(2d)}\\!-\\!i)\\end{array}\\right)\\!\\!\\left(\\begin{array}{c}\\mathbf{E}_ {0}\\cdot\\hat{\\mathbf{e}}_{i\\parallel}\\\\ \\mathbf{E}_{0}\\cdot\\hat{\\mathbf{e}}_{i\\perp}\\end{array}\\right). \\tag{54}\\]
### Far-Field Scattering Amplitude Matrices
The scattering amplitude matrices \\(S_{i}^{(\
u d)}(\\mathbf{k}_{s})\\) are directly related to the \\(\\mathbf{F}_{\\mathrm{TUC}}\\) for the three cases, \\(\
u=0,1,2\\):
\\[S_{1}^{(\
u d)} = C_{\
u}\\hat{\\mathbf{e}}_{s\\perp}\\cdot\\mathbf{F}_{\\mathrm{TUC}}( \\hat{\\mathbf{k}}_{s},\\mathbf{E}_{0}=\\hat{\\mathbf{e}}_{i\\perp}) \\tag{55}\\] \\[S_{2}^{(\
u d)} = C_{\
u}\\hat{\\mathbf{e}}_{s\\parallel}\\cdot\\mathbf{F}_{\\mathrm{ TUC}}(\\hat{\\mathbf{k}}_{s},\\mathbf{E}_{0}=\\hat{\\mathbf{e}}_{i\\parallel})\\] (56) \\[S_{3}^{(\
u d)} = C_{\
u}\\hat{\\mathbf{e}}_{s\\parallel}\\cdot\\mathbf{F}_{\\mathrm{ TUC}}(\\hat{\\mathbf{k}}_{s},\\mathbf{E}_{0}=\\hat{\\mathbf{e}}_{i\\perp})\\] (57) \\[S_{4}^{(\
u d)} = C_{\
u}\\hat{\\mathbf{e}}_{s\\perp}\\cdot\\mathbf{F}_{\\mathrm{TUC}}( \\hat{\\mathbf{k}}_{s},\\mathbf{E}_{0}=\\hat{\\mathbf{e}}_{i\\parallel}) \\tag{58}\\]
\[C_{0} = -i \tag{59}\] \[C_{1} = -\left(\frac{2\pi i}{\sin\alpha_{s}}\right)^{1/2}\frac{i}{k_{0}L_{y}} \tag{60}\] \[C_{2} = \frac{2\pi}{k_{0}^{2}A_{\mathrm{TUC}}\sin\alpha_{s}} \tag{61}\]
for finite targets (\\(C_{0}\\)), and targets that are periodic in one or two dimensions (\\(C_{1}\\) or \\(C_{2}\\)).
## 5 Far-Field Scattering Matrix for Stokes Vectors
For a given scattering direction \(\hat{\bf k}_{s}\), the \(2\times 2\) complex amplitude matrix \(S_{i}^{(\nu d)}({\bf k}_{s})\) fully characterizes the far-field scattering properties of the target. The far-field scattering properties of an isolated finite target are characterized by the \(4\times 4\) dimensionless Mueller matrix \(S_{\alpha\beta}^{(0d)}\), with the Stokes vector of radiation scattered into direction \(\hat{\bf k}_{s}\) at a distance \(r\) from the target given by
\\[I_{\\rm sca,\\alpha}\\equiv\\frac{1}{(k_{0}r)^{2}}\\sum_{\\beta=1}^{4}S_{\\alpha\\beta }^{(0d)}I_{\\rm inc,\\beta}\\quad, \\tag{62}\\]
where \\(I_{\\rm inc,\\beta}=(I,Q,U,V)_{\\rm inc}\\) is the Stokes vector for the radiation incident on the target. For 1-d targets, we define the dimensionless scattering matrix \\(S_{\\alpha\\beta}^{(1d)}\\) by
\\[I_{\\rm sca,\\alpha}\\equiv\\frac{1}{k_{0}R}\\sum_{\\beta=1}^{4}S_{\\alpha\\beta}^{(1 d)}I_{\\rm inc,\\beta}\\quad, \\tag{63}\\]
where \\(R\\) is the distance from the one-dimensional target. For 2-d targets, we define \\(S_{\\alpha\\beta}^{(2d)}\\) by
\\[I_{\\rm sca,\\alpha}\\equiv\\sum_{\\beta=1}^{4}S_{\\alpha\\beta}^{(2d)}I_{\\rm inc, \\beta}\\quad. \\tag{64}\\]
The \\(4\\times 4\\) scattering intensity matrix \\(S_{\\alpha\\beta}^{(\
u d)}\\) is obtained from the scattering amplitude matrix elements \\(S_{i}^{(\
u d)}\\). Except for the special case of forward scattering (\\({\\bf k}_{s}={\\bf k}_{0}\\)) for 2-d targets, the equations are the same as eq. (3.16) of Bohren and Huffman [16]. For example
\\[S_{11}^{(\
u d)} = \\frac{1}{2}\\left(|S_{1}^{(\
u d)}|^{2}+|S_{2}^{(\
u d)}|^{2}+|S_{ 3}^{(\
u d)}|^{2}+|S_{4}^{(\
u d)}|^{2}\\right) \\tag{65}\\] \\[S_{21}^{(\
u d)} = \\frac{1}{2}\\left(|S_{2}^{(\
u d)}|^{2}-|S_{1}^{(\
u d)}|^{2}-|S_{ 4}^{(\
u d)}|^{2}+|S_{3}^{(\
u d)}|^{2}\\right)\\] (66) \\[S_{14}^{(\
u d)} = {\\rm Im}\\left(S_{2}^{(\
u d)}S_{3}^{(\
u d)*}-S_{1}^{(\
u d)}S_{ 4}^{(\
u d)*}\\right). \\tag{67}\\]
For the special case of forward scattering (\(\hat{\bf k}_{s}=\hat{\bf k}_{0}\)) for 2-d targets, it is necessary to replace \(S_{1}^{(2d)}\) and \(S_{2}^{(2d)}\) with \((S_{1}^{(2d)}-i)\) and \((S_{2}^{(2d)}-i)\) [cf. Eq. (54)]. Thus, for example,
\\[S_{11}^{(2d)}({\\bf k}_{s}\\!=\\!{\\bf k}_{0})=\\frac{1}{2}\\left(|S_{1}^{(2d)}\\!-\\! i|^{2}+|S_{2}^{(2d)}\\!-\\!i|^{2}+|S_{3}^{(2d)}|^{2}+|S_{4}^{(2d)}|^{2}\\right). \\tag{68}\\]
## 6 Transmission and Reflection Coefficients for 2-D Targets
For targets with 2-d periodicity, it is natural to define generalized transmission and reflection coefficients for the Stokes vectors: for scattering order \\((M,N)\\), \\(I_{\\rm sca,\\alpha}=\\sum_{\\beta}T_{\\alpha\\beta}(M,N)I_{\\rm inc,\\beta}\\) is the Stokes vector component \\(\\alpha\\) for radiation with \\(k_{sx}k_{\\rm inc,x}>0\\), and \\(R_{\\alpha\\beta}(M,N)\\) is the fraction of the incident Stokes vector component \\(\\beta\\) that emerges in Stokes vector component \\(\\alpha\\) with \\(k_{sx}k_{\\rm inc,x}<0\\). These can be related to the \\(S_{\\alpha\\beta}^{(2d)}\\):
\\[R_{\\alpha\\beta}(M,N) = \\frac{\\sin\\alpha_{s}}{\\sin\\alpha_{0}}S_{\\alpha\\beta}^{(2d)}\\quad \\mbox{for }k_{sx}k_{0x}<0\\quad, \\tag{69}\\] \\[T_{\\alpha\\beta}(M,N) = \\frac{\\sin\\alpha_{s}}{\\sin\\alpha_{0}}S_{\\alpha\\beta}^{(2d)}\\quad \\mbox{for }k_{sx}k_{0x}>0\\quad, \\tag{70}\\]Figure 4: Scattering by an infinite cylinder with diameter \\(D\\) and \\(m=1.33+0.01i\\), for radiation with \\(\\pi D/\\lambda=50\\), and incidence angle \\(\\alpha_{0}=60^{\\circ}\\). (a) Exact solution (solid curve) and DDA results for \\(D/d=512\\) and various values of the interaction cutoff parameter \\(\\gamma\\); (b) fractional error in \\(S_{11}^{(ld)}\\); (c,d) same as (a,b), but expanding the region \\(0<\\zeta<20^{\\circ}\\). For this case, results computed with \\(\\gamma=0.002\\) and \\(0.001\\) are nearly indistinguishable.
Figure 3: Scattering by an infinite cylinder with diameter \(D\) and \(m=1.33+0.01i\), for radiation with \(x=\pi D/\lambda=50\) and incidence angle \(\alpha_{0}=60^{\circ}\). (a) \(S_{11}^{(1d)}\). Solid curve: exact solution. Broken curves: DDA results for \(D/d=256\), \(360\), and \(512\) (\(N=51676\), \(102036\), \(206300\) dipoles per TUC); (b) fractional error in \(S_{11}^{(1d)}\) (DDA). (c) \(S_{21}^{(1d)}\); (d) error in \(S_{21}^{(1d)}\).
The fraction of the incident power that is absorbed by the target is
\\[\\frac{P_{\\rm abs}/{\\rm Area}}{|{\\bf E}_{0}|^{2}c\\sin\\alpha_{0}/8\\pi}=1-\\sum_{M,N} \\sum_{\\beta=1}^{4}\\left[R_{1\\beta}(M,N)+T_{1\\beta}(M,N)\\right]\\frac{I_{\\rm inc, \\beta}}{I_{\\rm inc,1}}\\quad, \\tag{71}\\]
where \\(I_{\\rm inc,\\beta}\\) is the Stokes vector of the incident radiation.
For unpolarized incident radiation, \\(R_{11}(M,N)\\) is the fraction of the incident power that is reflected in diffraction component \\((M,N)\\), \\(T_{11}(M,N)\\) is the fraction that is transmitted in component \\((M,N)\\), and \\(1-\\sum_{M,N}[R_{11}(M,N)+T_{11}(M,N)]\\) is the fraction of the incident power that is absorbed.
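A short sketch of this bookkeeping (with hypothetical arrays of coefficients over the allowed orders) reads:

```python
import numpy as np

def power_budget(R11, T11):
    """Power budget for unpolarized radiation incident on a 2-d periodic
    target. R11, T11: arrays of R_11(M,N) and T_11(M,N) over all allowed
    diffraction orders. Returns (reflected, transmitted, absorbed)
    fractions, the last following eq. (71)."""
    refl = float(np.sum(R11))
    tran = float(np.sum(T11))
    return refl, tran, 1.0 - refl - tran
```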
## 7 Example: Infinite Cylinder
DDSCAT 7 has been used to calculate scattering and absorption by an infinite cylinder consisting of a periodic array of disks of thickness \\(d\\) and period \\(L_{y}=d\\) (where \\(d\\) is the interdipole spacing). Fig. 3a shows \\(S_{11}^{(1d)}\\) for refractive index \\(m=1.33+0.01i\\) and \\(x=\\pi D/\\lambda=50\\) (\\(D\\) is the cylinder diameter and \\(\\lambda\\) the wavelength of the incident radiation), and incidence angle \\(\\alpha_{0}=60^{\\circ}\\). Because \\(k_{0}(1+|\\cos\\alpha_{0}|)d<2\\pi\\), equations (22, 23) allow only \\(M=0\\) scattering, with \\(\\alpha_{s}=\\alpha_{0}\\). Also shown is the exact solution, calculated using a code written by D. Mackowski (private communication). Light scattering by cylinders is generally described by scattering amplitudes \\(T_{i}\\); in Appendix B we provide expressions relating these \\(T_{i}\\) to the \\(S_{i}\\) used here. Fig. 3b shows the fractional error in \\(S_{11}^{(1d)}\\) calculated using DDSCAT. As \\(d\\) is decreased, the errors decrease. Excellent accuracy is obtained when the validity criterion [3] \\(|m|kd\\lesssim 0.5\\) is satisfied: the fractional error in \\(S_{11}^{(1d)}\\) is typically less than a few %, except near deep minima in \\(S_{11}^{(1d)}\\).

Figure 5: Light scattered by an infinite cylinder with \\(m=2+i\\), for radiation with \\(x=2\\pi R/\\lambda=25\\) and incidence angle \\(\\alpha_{0}=60^{\\circ}\\). (a) \\(S_{11}^{(1d)}\\). Solid curve: exact solution. Broken curves: DDA results for \\(D/d=128\\), \\(180\\), and \\(256\\) (\\(N=12972\\), \\(25600\\), \\(51676\\) dipoles per TUC). (b) Fractional error in \\(S_{11}^{(1d)}\\).
Fig. 3c shows \\(S_{21}^{(1d)}\\), characterizing scattering of unpolarized light into the Stokes parameter \\(Q\\) (\\(S_{21}<0\\) corresponds to linear polarization perpendicular to the scattering plane). DDSCAT 7 and the exact solution are in very good agreement when \\(|m|kd\\lesssim 0.5\\). Note that although the error in \\(S_{21}^{(1d)}(\\theta=0)\\approx 2\\) is large compared to \\(S_{21}(0)=-6\\), this is small compared to \\(S_{11}^{(1d)}(0)\\approx 1500\\): the scattered radiation is only slightly polarized.
The results in Fig. 3 were obtained using \\(\\gamma=0.001\\) to truncate the integrations. To see how the results depend on \\(\\gamma\\), Figure 4 shows \\(S_{11}^{(1d)}\\) computed for the problem of Fig. 3 but using different values of \\(\\gamma\\). For azimuthal angles \\(\\zeta>20^{\\circ}\\), the results for \\(\\gamma=0.005\\) and \\(0.001\\) are nearly indistinguishable; the difference between the computed result and the exact solution is evidently due to the finite number of dipoles used, rather than the choice of cutoff parameter \\(\\gamma\\). However, the results for forward scattering are more sensitive to the choice of \\(\\gamma\\), as is seen in Fig. 4c,d: it is necessary to reduce \\(\\gamma\\) to \\(0.001\\) to attain high accuracy in the forward scattering directions.
Table 1 gives the CPU times to calculate \\(\\mathbf{\\tilde{A}}\\), to then iteratively solve the scattering problem to a fractional error \\(<10^{-5}\\) (using double-precision arithmetic), and finally to evaluate the scattering intensities, for several of the cases shown in Figs. 3 and 4. For most cases the CPU time is dominated by the iterative solution using the conjugate gradient algorithm. While the time required to evaluate \\(\\mathbf{\\tilde{A}}\\) might be reduced using the strategies suggested by [10], this step is generally a subdominant part of the computation for targets with \\(kd\\gtrsim 0.1\\).
The above results have been for a weakly-absorbing cylinder. To confirm that the DDA can be applied to strongly-absorbing material, Fig. 5 shows scattering calculated for a cylinder with \\(m=2+i\\) and \\(x=\\pi D/\\lambda=25\\). Once again, the accuracy is very good, with small fractional errors provided \\(|m|kd\\lesssim 0.5\\).
## 8 Example: Plane-Parallel Slab
Consider a homogeneous plane-parallel slab with thickness \\(h\\) and refractive index \\(m\\). Radiation incident on it at angle of incidence \\(\\theta_{i}\\) will either be specularly reflected or transmitted. The reflection and transmission coefficients \\(R\\) and \\(T\\) can be calculated analytically, taking into account
multiple reflections within the slab [15]. With an exact solution in hand, we can evaluate the accuracy of the DDA applied to this problem. Figure 6 shows results for two cases: a dielectric slab with \\(m=1.50\\), and an absorbing slab, with \\(m=1.50+0.02i\\).

\\begin{table}
\\begin{tabular}{c c c c c c c}
\\(\\pi D/\\lambda\\) & \\(N\\) & \\(\\gamma\\) & calc. \\(\\mathbf{\\tilde{A}}\\) (min) & solution (min) & scat. (min) & Total (min) \\\\
\\hline
25 & 51676 & 0.005 & 3.29 & 17.6 & 0.65 & 22.2 \\\\
25 & 51676 & 0.001 & 16.2 & 17.8 & 0.65 & 35.3 \\\\
50 & 102036 & 0.005 & 4.58 & 59.7 & 1.27 & 66.8 \\\\
50 & 102036 & 0.001 & 22.8 & & & \\\\
\\end{tabular}
Table 1: CPU times to calculate \\(\\mathbf{\\tilde{A}}\\), to iteratively solve the scattering problem, and to evaluate the scattering intensities, for cases shown in Figs. 3 and 4.
\\end{table}
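A minimal sketch of that analytic solution (the standard Airy formulae for a slab in vacuum, summing all internal reflections; our own illustration rather than the code actually used for Fig. 6) is:

```python
import numpy as np

def slab_RT(m, h_over_lam, theta_i, pol='s'):
    """Power reflection and transmission coefficients of a homogeneous slab
    of complex refractive index m and thickness h in vacuum, including all
    multiple internal reflections (Airy formulae; cf. Born & Wolf [15]).
    theta_i: angle of incidence (radians); pol: 's' (perp) or 'p' (parallel)."""
    m = complex(m)
    cos1 = np.cos(theta_i)
    sin2 = np.sin(theta_i) / m            # Snell's law, complex angle inside
    cos2 = np.sqrt(1.0 - sin2**2)
    if pol == 's':
        r12 = (cos1 - m * cos2) / (cos1 + m * cos2)   # Fresnel, vacuum -> slab
    else:
        r12 = (m * cos1 - cos2) / (m * cos1 + cos2)
    beta = 2.0 * np.pi * m * h_over_lam * cos2        # complex phase thickness
    e2 = np.exp(2j * beta)
    r = r12 * (1.0 - e2) / (1.0 - r12**2 * e2)
    t = (1.0 - r12**2) * np.exp(1j * beta) / (1.0 - r12**2 * e2)
    return abs(r)**2, abs(t)**2           # same medium (vacuum) on both sides

# e.g. the absorbing slab of Fig. 6b at theta_i = 40 deg and h = 0.2*lambda:
R_p, T_p = slab_RT(1.5 + 0.02j, 0.2, np.radians(40.0), pol='p')
```

Scanning `h_over_lam` should then trace out curves like the solid ones in Fig. 6.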
DDSCAT 7 was used to calculate reflection, transmission, and absorption by an infinite slab, generated from a TUC consisting of a single line of dipoles extending in the \\(x\\) direction, with \\(L_{y}=L_{z}=d\\). The selection rules (34,35,38) allow only \\(M=N=0\\): transmission or specular reflection. The reflection and transmission coefficients for radiation polarized parallel or perpendicular to the plane containing \\(\\hat{\\bf k}\\) and the surface normal are
\\[R_{\\parallel} = S_{11}^{(2d)}(k_{sx}=-k_{0x})+S_{12}^{(2d)}(k_{sx}=-k_{0x}) \\tag{72}\\]
\\[R_{\\perp} = S_{11}^{(2d)}(k_{sx}=-k_{0x})-S_{12}^{(2d)}(k_{sx}=-k_{0x}) \\tag{73}\\]
\\[T_{\\parallel} = S_{11}^{(2d)}(k_{sx}=k_{0x})+S_{12}^{(2d)}(k_{sx}=k_{0x}) \\tag{74}\\]
\\[T_{\\perp} = S_{11}^{(2d)}(k_{sx}=k_{0x})-S_{12}^{(2d)}(k_{sx}=k_{0x})\\quad. \\tag{75}\\]
The DDA results are in excellent agreement with the exact results when the validity condition \\(|m|k_{0}d<0.5\\) is satisfied, but results with moderate accuracy are obtained even when \\(|m|k_{0}d\\approx 1\\).
## 9 Near-Field Evaluation
The polarizations \\({\\bf P}_{j00}\\) can be used to calculate the electric and magnetic fields at any point, including within or near the target, using the exact expression for \\({\\bf E}\\) and \\({\\bf B}\\) from a point dipole, modified by a function \\(\\phi\\):
\\[{\\bf E}({\\bf r},t) = e^{-i\\omega t}\\sum_{j}{\\sum_{m,n}}^{\\prime}\\,\\frac{\\exp(ik_{0}R_{jmn})}{R_{jmn}^{3}}\\,\\phi(R_{jmn})\\biggl\\{k_{0}^{2}{\\bf R}_{jmn}\\times({\\bf P}_{jmn}\\times{\\bf R}_{jmn})+\\frac{(1-ik_{0}R_{jmn})}{R_{jmn}^{2}}\\left[3{\\bf R}_{jmn}({\\bf R}_{jmn}\\cdot{\\bf P}_{jmn})-R_{jmn}^{2}{\\bf P}_{jmn}\\right]\\biggr\\}+{\\bf E}_{0}\\exp(i{\\bf k}_{0}\\cdot{\\bf r}-i\\omega t) \\tag{76}\\]
\\[{\\bf B}({\\bf r},t) = e^{-i\\omega t}\\sum_{j}{\\sum_{m,n}}^{\\prime}\\,k_{0}^{2}\\frac{\\exp(ik_{0}R_{jmn})}{R_{jmn}^{2}}\\,\\phi(R_{jmn})\\left({\\bf R}_{jmn}\\times{\\bf P}_{jmn}\\right)\\left(1-\\frac{1}{ik_{0}R_{jmn}}\\right)+\\hat{\\bf k}_{0}\\times{\\bf E}_{0}\\exp(i{\\bf k}_{0}\\cdot{\\bf r}-i\\omega t) \\tag{77}\\]
\\[{\\bf R}_{jmn} \\equiv {\\bf r}-{\\bf r}_{jmn} \\tag{78}\\]
\\[\\phi(R) \\equiv \\exp\\left[-(\\gamma k_{0}R)^{4}\\right]\\times\\left\\{\\begin{array}{ll}1&\\mbox{for }R\\geq d\\\\ (R/d)^{4}&\\mbox{for }R<d\\end{array}\\right. \\tag{79}\\]

Figure 6: Transmission and reflection coefficients for radiation with wavelength \\(\\lambda\\) incident at angle \\(\\theta_{i}=(\\pi/2-\\alpha_{0})=40^{\\circ}\\) relative to the normal on a slab with thickness \\(h\\), incident \\({\\bf E}\\parallel\\) and \\(\\perp\\) to the scattering plane, as a function of \\({\\rm Re}(m)h/\\lambda\\). (a) Nonabsorbing slab with \\(m=1.5\\). (b) Absorbing slab with \\(m=1.5+0.02i\\). Solid curve: exact solution. Symbols: results calculated with the DDA using dipole spacing \\(d=h/10\\), \\(h/20\\), and \\(h/40\\).
The function \\(\\phi(R)\\) smoothly suppresses the (oscillating) contribution from distant dipoles in order to allow the summations to be truncated, just as in eq. (6) for evaluation of \\(\\bf\\tilde{A}_{j,k}\\). If \\({\\bf r}\\) is within the target or near the target surface, the summations over \\((m,n)\\) are limited to \\(|R_{jmn}|\\leq 2/\\gamma k_{0}\\). The \\((R/d)^{4}\\) factor suppresses the \\(R^{-3}\\) divergence of \\({\\bf E}\\) as \\({\\bf r}\\) approaches the locations of individual dipoles, and at the dipole locations results in \\({\\bf E}\\) that is exactly equal to the field that is polarizing the dipoles in the DDA formulation. Evaluation of eq. (76, 77) is computationally-intensive, because the summations \\(\\Sigma_{j}\\Sigma_{m,n}^{\\prime}\\) typically have many terms.
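A one-line transcription of eq. (79) (our own sketch) makes the two regimes explicit:

```python
import numpy as np

def phi(R, d, gamma, k0):
    """Suppression factor of eq. (79): the exponential damps the oscillating
    contribution of distant dipoles so the (m,n) sums can be truncated, and
    the (R/d)**4 factor softens the R**-3 divergence within one lattice
    spacing d of a dipole."""
    R = np.asarray(R, dtype=float)
    damp = np.exp(-(gamma * k0 * R)**4)
    return np.where(R >= d, damp, damp * (R / d)**4)
```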
To illustrate the accuracy, we consider the infinite slab of Fig. 6b, with refractive index \\(m=1.5+0.02i\\) and radiation incident at an angle \\(\\theta_{i}=40^{\\circ}\\). Figure 7 shows the time-averaged \\(|{\\bf E}|^{2}/|{\\bf E}_{0}|^{2}\\) for slab thickness \\(h=0.2\\lambda\\), near a minimum in transmission and a maximum in reflection (see Fig. 6b). The program DDfield (see Appendix A) was used to evaluate \\({\\bf E}\\) along two lines normal to
Figure 7: \\(|{\\bf E}|^{2}/|{\\bf E}_{0}|^{2}\\) along two tracks normal to the dielectric slab of Fig. 6b, for slab thickness \\(h=0.2\\lambda\\), incidence angle \\(\\theta_{i}=40^{\\circ}\\), and incident polarizations \\(\\parallel\\) and \\(\\perp\\) to the scattering plane (see text). Results were calculated using eq. (76) with the slab represented by \\(N_{x}=10\\) and \\(N_{x}=20\\) dipole layers (i.e., dipole spacing \\(d=0.1h\\) and \\(0.05h\\)). The circles along track 1 are at points where dipoles are located.
the slab: track 1 passes directly through dipole sites, and track 2 passes midway between the four nearest dipoles as it crosses each dipole layer. The E fields calculated along tracks 1 and 2 are very similar, although of course not identical. Within the slab, \\(|\\mathbf{E}|\\) along track 2 tends to be slightly smaller than along track 1, but for this example the difference is typically less than \\(\\sim\\)1%. Figure 7 shows results for the slab represented by \\(h/d=N_{x}=10\\) and 20 dipole layers (with \\(|m|kd=0.19\\) and \\(0.094\\), respectively).
Even for \\(N_{x}=10\\), the electric field at points more than a distance \\(d\\) from the edge is obtained to within \\(\\sim 2\\%\\) accuracy at worst, which is perhaps not surprising because, as seen in Figure 6, the calculated transmission and reflection coefficients are very accurate. The discontinuity in \\(|E|^{2}\\) at the boundary is spread out over a distance \\(\\sim d\\). The DDA obviously cannot reproduce field structure near the target surface on scales smaller than the dipole separation \\(d\\), but fields on scales larger than \\(d\\) appear to be quite accurate. DDSCAT and DDfield should be useful tools for studying electromagnetic fields around arrays of nanostructures, such as gold nanodisks [17, 18].
## 10 Summary
The principal results of this study are as follows:
1. The DDA is generalized to treat targets that are periodic in one or two spatial dimensions. Scattering and absorption of monochromatic plane waves can be calculated using algorithms that parallel those used for finite targets.
2. A general formalism is presented for description of far-field scattering by targets that are periodic in one or two dimensions using scattering amplitude matrices and Mueller matrices that are similar in form to those for finite targets.
3. The accuracy of the DDA for periodic targets is tested for two examples: infinite cylinders and infinite slabs. The DDA, as implemented in DDSCAT 7, is accurate provided the validity criterion \\(|m|kd\\lesssim 0.5\\) is satisfied.
4. We show how the DDA solution can be used to evaluate \\(\\mathbf{E}\\) and \\(\\mathbf{B}\\) within and near the target, with calculations for an infinite slab used to illustrate the accuracy of near-field calculations.
## Acknowledgments
This research was supported in part by NSF grant AST-0406883, and by the Office of Naval Research. We thank Dan Mackowski for providing his code for light scattering by infinite cylinders, H. A. Yousif for discussions concerning scattering by infinite cylinders, and the anonymous referees for helpful comments.
## Appendix A DDSCAT 7 and DDfield
The theoretical developments reported here have been implemented in a new version of the open-source code DDSCAT (http://www.astro.princeton.edu/~draine/DDSCAT.html). DDSCAT 7 is written in Fortran 90, with dynamic memory allocation and the option to use either single- or double-precision arithmetic. DDSCAT 7 includes options for various target geometries, including a number of periodic structures. A program DDfield for near-field calculations is also provided.
DDSCAT 7 offers the option of using an implementation of BiCGstab with enhancement to maintain convergence in finite precision arithmetic [19]. The matrix-vector multiplications \\(\\mathbf{\\tilde{A}P}\\) are accomplished efficiently using FFTs [20]. Documentation for DDSCAT is available from arXiv [21], with additional information available from http://ddscat.wikidot.com.
In addition to differential scattering cross sections, DDSCAT reports dimensionless "efficiency factors" \\(Q_{x}\\equiv C_{x}({\\rm TUC})/\\pi a_{\\rm eff}^{2}\\) for scattering and absorption, where \\(C_{x}({\\rm TUC})\\) is the total cross section for scattering or absorption per TUC and \\(a_{\\rm eff}\\equiv(3V_{\\rm TUC}/4\\pi)^{1/3}\\) is the radius of a sphere with volume equal to the solid volume \\(V_{\\rm TUC}\\) in one TUC.
In the case of one-dimensional targets, with periodicity \\(L_{y}\\) in the \\(y\\) direction, the absorption, scattering, and extinction cross sections per unit target length are
\\[\\frac{dC_{x}}{dL}=\\frac{1}{L_{y}}Q_{x}\\pi a_{\\rm eff}^{2} \\tag{10}\\]
for \\(x={\\rm ext}\\), \\({\\rm sca}\\), and \\({\\rm abs}\\), where \\(Q_{x}\\) are the efficiency factors calculated by DDSCAT.
In the case of two-dimensional targets, with periodicities \\(L_{u}\\) and \\(L_{v}\\), the absorption, scattering, and extinction cross sections per unit target area are
\\[\\frac{dC_{x}}{dA}=\\frac{Q_{x}\\pi a_{\\rm eff}^{2}}{L_{u}L_{v}\\sin\\theta_{uv}}\\ \\ . \\tag{11}\\]
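In practice, converting the reported \\(Q_{x}\\) into cross sections per unit length or area is a one-liner; a sketch (assuming consistent units for \\(V_{\\rm TUC}\\) and the periods):

```python
import numpy as np

def a_eff(V_tuc):
    """Radius of a sphere with volume equal to the solid volume in one TUC."""
    return (3.0 * V_tuc / (4.0 * np.pi)) ** (1.0 / 3.0)

def dC_dL(Q, V_tuc, L_y):
    """Cross section per unit length of a 1-d periodic target, eq. (10)."""
    return Q * np.pi * a_eff(V_tuc)**2 / L_y

def dC_dA(Q, V_tuc, L_u, L_v, theta_uv):
    """Cross section per unit area of a 2-d periodic target, eq. (11)."""
    return Q * np.pi * a_eff(V_tuc)**2 / (L_u * L_v * np.sin(theta_uv))
```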
## Appendix B Relation Between \\(S_{i}\\) and \\(T_{i}\\) for Infinite Cylinders
The analytic solution for infinite cylinders decomposes the incident and scattered radiation into components polarized parallel and perpendicular to planes containing the cylinder axis and the propagation vector \\(\\hat{\\bf k}_{0}\\) or \\(\\hat{\\bf k}_{s}\\). These polarization basis states differ from the choice that is usual for scattering by finite particles, where it is customary to decompose the incident and scattered waves into components polarized parallel and perpendicular to the _scattering plane_ - the plane containing \\(\\hat{\\bf k}_{0}\\) and \\(\\hat{\\bf k}_{s}\\).
In the notation of Bohren and Huffman [16], the radiation scattered by an infinite cylinder can be written
\\[\\left(\\begin{array}{c}{\\bf E}_{s}\\cdot\\hat{\\bf e}_{s\\parallel}^{(ck)}\\\\ {\\bf E}_{s}\\cdot\\hat{\\bf e}_{s\\perp}^{(ck)}\\end{array}\\right)\\ =\\ i\\exp\\left(i{\\bf k}_{s}\\cdot{\\bf r}-i \\omega t\\right)\\left(\\frac{2i}{\\pi k_{0}R\\sin\\alpha}\\right)^{1/2}\\left( \\begin{array}{cc}T_{1}&-T_{3}\\\\ T_{3}&T_{2}\\end{array}\\right)\\left(\\begin{array}{c}{\\bf E}_{0}\\cdot\\hat{\\bf e }_{i\\parallel}^{(ck)}\\\\ {\\bf E}_{0}\\cdot\\hat{\\bf e}_{i\\perp}^{(ck)}\\end{array}\\right) \\tag{12}\\]
where \\(R\\) is the distance from the cylinder axis, \\(\\alpha\\) is the angle between \\({\\bf k}_{0}\\) and the cylinder axis \\(c\\), and superscript \\((ck)\\) denotes polarization vectors parallel or perpendicular to planes containing the cylinder axis \\(\\hat{\\bf c}\\) and either \\({\\bf k}_{0}\\) or \\({\\bf k}_{s}\\). The azimuthal angle \\(\\zeta\\) is measured around the cylinder axis \\(\\hat{\\bf c}\\), with \\(\\zeta=0\\) for forward scattering. The scattering angle \\(\\theta=\\arccos[\\hat{\\bf k}_{0}\\cdot\\hat{\\bf k}_{s}]\\) is
\\[\\theta=\\arccos\\left[1-(1-\\cos\\zeta)\\sin^{2}\\alpha\\right]\\ \\ . \\tag{13}\\]
The scattering amplitude matrix elements \\(T_{i}\\) appearing in (12) can be related to the matrix elements \\(S_{i}\\) appearing in eq. (51):
\\[{\\bf S}=\\left(\\frac{2i}{\\pi\\sin\\alpha}\\right)^{1/2}{\\bf A}{\\bf T}{\\bf B}^{-1}\\, \\tag{14}\\]
\\[{\\bf S} \\equiv \\left(\\begin{array}{cc}S_{2}^{(1d)}&S_{3}^{(1d)}\\\\ S_{4}^{(1d)}&S_{1}^{(1d)}\\end{array}\\right)\\ \\ \\ \\ \\ \\,\\ \\ \\ \\ \\ \\ {\\bf T}\\equiv\\left(\\begin{array}{cc}T_{1}&-T_{3}\\\\ T_{3}&T_{2}\\end{array}\\right)\\, \\tag{15}\\] \\[{\\bf A} \\equiv \\left(\\begin{array}{cc}\\hat{\\bf e}_{s\\parallel}\\cdot\\hat{\\bf e }_{s\\parallel}^{(ck)}&\\hat{\\bf e}_{s\\parallel}\\cdot\\hat{\\bf e}_{s\\perp}^{(ck)} \\\\ \\hat{\\bf e}_{s\\perp}\\cdot\\hat{\\bf e}_{s\\parallel}^{(ck)}&\\hat{\\bf e}_{s\\perp} \\cdot\\hat{\\bf e}_{s\\perp}^{(ck)}\\end{array}\\right)=\\frac{1}{\\sin\\theta}\\left( \\begin{array}{cc}-\\cot\\alpha(1\\!-\\!\\cos\\theta)&\\sin\\alpha\\sin\\zeta\\\\ -\\sin\\alpha\\sin\\zeta&-\\cot\\alpha(1\\!-\\!\\cos\\theta)\\end{array}\\right)\\,\\] (16) \\[{\\bf B} \\equiv \\left(\\begin{array}{cc}\\hat{\\bf e}_{i\\parallel}\\cdot\\hat{\\bf e }_{i\\parallel}^{(ck)}&\\hat{\\bf e}_{i\\parallel}\\cdot\\hat{\\bf e}_{i\\perp}^{(ck)} \\\\ \\hat{\\bf e}_{i\\perp}\\cdot\\hat{\\bf e}_{i\\parallel}^{(ck)}&\\hat{\\bf e}_{i\\perp} \\cdot\\hat{\\bf e}_{i\\perp}^{(ck)}\\end{array}\\right)=\\frac{1}{\\sin\\theta} \\left(\\begin{array}{cc}\\cot\\alpha(1\\!-\\!\\cos\\theta)&\\sin\\alpha\\sin\\zeta\\\\ -\\sin\\alpha\\sin\\zeta&\\cot\\alpha(1\\!-\\!\\cos\\theta)\\end{array}\\right)\\,\\] (17) \\[{\\bf B}^{-1} = \\frac{\\sin\\theta}{\\cot^{2}\\alpha(1-\\cos\\theta)^{2}+\\sin^{2}\\alpha \\sin^{2}\\zeta}\\left(\\begin{array}{cc}\\cot\\alpha(1-\\cos\\theta)&-\\sin\\alpha \\sin\\zeta\\\\ \\sin\\alpha\\sin\\zeta&\\cot\\alpha(1-\\cos\\theta)\\end{array}\\right). \\tag{18}\\]
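For reference, a direct transcription of eqs. (13)-(18) (our own sketch; valid away from exact forward scattering, where \\(\\sin\\theta\\to 0\\)):

```python
import numpy as np

def S_from_T(T1, T2, T3, alpha, zeta):
    """Convert the cylinder amplitudes T_i (polarization basis tied to the
    cylinder axis) into the S_i of eq. (51) (basis tied to the scattering
    plane), following eqs. (13)-(18)."""
    theta = np.arccos(1.0 - (1.0 - np.cos(zeta)) * np.sin(alpha)**2)   # eq. (13)
    c = (1.0 - np.cos(theta)) / np.tan(alpha)
    s = np.sin(alpha) * np.sin(zeta)
    A = np.array([[-c, s], [-s, -c]]) / np.sin(theta)                  # eq. (16)
    Binv = (np.sin(theta) / (c**2 + s**2)) * np.array([[c, -s], [s, c]])  # eq. (18)
    T = np.array([[T1, -T3], [T3, T2]], dtype=complex)                 # eq. (15)
    S = np.sqrt(2j / (np.pi * np.sin(alpha))) * (A @ T @ Binv)         # eq. (14)
    return {'S2': S[0, 0], 'S3': S[0, 1], 'S4': S[1, 0], 'S1': S[1, 1]}
```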
## References
* [1] E. M. Purcell and C. R. Pennypacker, \"Scattering and Absorption of Light by Nonspherical Dielectric Grains,\" Astrophys. J. **186**, 705-714 (1973).
* [2] B. T. Draine, \"The discrete-dipole approximation and its application to interstellar graphite grains,\" Astrophys. J. **333**, 848-872 (1988).
* [3] B. T. Draine and P. Flatau, \"Discrete-dipole approximation for scattering calculations,\" J. Opt. Soc. Am. A**11**, 1491-1499 (1994).
* [4] B. T. Draine, \"The Discrete Dipole Approximation for Light Scattering by Irregular Targets,\" in \"Light Scattering by Nonspherical Particles: Theory, Measurements, and Applications,\", M. I. Mishchenko, J. W. Hovenier, and L. D. Travis, eds. (San Diego: Academic Press, 2000), pp. 131-145.
* [5] R. Schmehl, B. M. Nebeker, and E. D. Hirleman, \"Discrete-dipole approximation for scattering by features on surfaces by means of a two-dimensional fast fourier transform technique,\" J. Opt. Soc. Am. A **14**, 3026-3036 (1997).
* [6] M. Paulus and O. J. F. Martin, \"Green's tensor technique for scattering in two-dimensional stratified media,\" Phys. Rev. E **63**, 066615 (2001).
* [7] P. Yang and K. N. Liou, \"Finite Difference Time Domain Method for Light Scattering by Nonspherical and Inhomogeneous Particles,\" in \"Light Scattering by Nonspherical Particles: Theory, Measurements, and Applications,\", M. I. Mishchenko, J. W. Hovenier, and L. D. Travis, eds. (San Diego: Academic Press., 2000), pp. 173-221.
* [8] A. Taflove and S. C. Hagness, _Advances in Computational Electrodynamics: the Finite-Difference Time-Domain Method_ (Artech House, Boston, 2005).
* [9] V. A. Markel, \"Coupled-dipole Approach to Scattering of Light from a One-dimensional Periodic Dipole Structure,\" Journal of Modern Optics **40**, 2281-2291 (1993).
* [10] P. C. Chaumet, A. Rahmani, and G. W. Bryant, \"Generalization of the coupled dipole method to periodic structures,\" Phys. Rev. B **67**, 165404 (2003).
* [11] P. C. Chaumet and A. Sentenac, \"Numerical simulations of the electromagnetic field scattered by defects in a double-periodic structure,\" Phys. Rev. B **72**, 205437-20544 (2005).
* [12] B. T. Draine and P. Flatau, "User Guide for the Discrete Dipole Approximation Code DDSCAT.6.1," http://arxiv.org/abs/astro-ph/0409262 (2004).
* [13] B. T. Draine and J. J. Goodman, "Beyond Clausius-Mossotti: Wave propagation on a polarizable point lattice and the discrete dipole approximation," Astrophys. J. **405**, 685-697 (1993).
* [14] D. Gutkowicz-Krusin and B. T. Draine, "Propagation of Electromagnetic Waves on a Rectangular Lattice of Polarizable Points," http://arxiv.org/abs/astro-ph/0403082 (2004).
* [15] M. Born and E. Wolf, _Principles of Optics_ (Cambridge Univ. Press, Cambridge, 1999).
* [16] C. F. Bohren and D. R. Huffman, _Absorption and Scattering of Light by Small Particles_ (Wiley, New York, 1983).
* [17] Z. N. Utegulov, J. M. Shaw, B. T. Draine, S. A. Kim, and W. L. Johnson, "Surface-plasmon enhancement of Brillouin light scattering from gold-nanodisk arrays on glass," in "Plasmonics: Metallic Nanostructures and Their Optical Properties V," M. I. Stockman, ed., Proc. SPIE **6641**, 66411M (2007).
* [18] W. L. Johnson, S. A. Kim, Z. N. Utegulov, and B. T. Draine, \"Surface-plasmon fields in two-dimensional arrays of gold nanodisks,\" submitted for publication in SPIE 2008 Optics and Photonics (2008).
* [19] M. A. Botchev, subroutine zbcg2, http://www.math.uu.nl/people/vorst/zbcg2.f90 (2001).
* [20] J. J. Goodman, B. T. Draine, and P. J. Flatau, \"Application of fast-Fourier transform techniques to the discrete dipole approximation,\" Optics Lett.**16**, 1198-1200 (1990).
* [21] B. T. Draine and P. Flatau, "User Guide for the Discrete Dipole Approximation Code DDSCAT 7.0," http://arxiv.org/abs/0809.0337 (2008).
OCIS codes: 050.1755, 050.5298, 260.0260, 290.5825
# Astronomical site selection: On the use of satellite data for aerosol content monitoring
A.M. Varela\\({}^{1}\\), C. Bertolin\\({}^{2,3}\\), C. Munoz-Tunon\\({}^{1}\\), S. Ortolani\\({}^{3}\\) and J.J. Fuensalida\\({}^{1}\\)
\\({}^{1}\\)Instituto de Astrofisica de Canarias, Spain
\\({}^{2}\\)National Research Council (CNR), Institute of Atmospheric Sciences and Climate, Padova, Italy
\\({}^{3}\\)Department of Astronomy, University of Padova, Italy
E-mail:[email protected]
Accepted 2008 August 5. Received 2008 August 1; in original form 2008 January 10.
## 1 Introduction
Most aerosols reaching the Canary Islands are either marine (ClNa), biogenic emissions or of African (Sahara and Sahel) origin. The latter (clays, quartzes, feldspars and calcites), because of their size, can reduce visibility in the optical wavelength range and can therefore affect astronomical observations.
Furthermore, aerosols cause radiative forcing, wind-driven deposition over the ocean (together with Fe and Al) that supplies nutrients and minerals for algae (a coastal increase in chlorophyll, i.e. phytoplankton biomass), health effects, etc. Aerosols also play an important role in astronomical site conditions, producing more stable condensation nuclei, delaying precipitation and causing the extinction (absorption, diffusion and reflection) of extraterrestrial radiation.
Most of the airmass flux reaching the Canarian archipelago comes from the North Atlantic Ocean and consists of sea aerosols (chlorides), which absorb in the UV. African dust intrusions affect the western and eastern Canary Islands differently. Moreover, the presence of a stable inversion layer and the pronounced orography of the western islands (La Palma and Tenerife) produce different mass flux patterns in the lower (mixing) layers closer to the sea (up to 800 m) and in the mid-upper (or free) troposphere (above the thermal inversion layer, i.e. above 1500 m), causing a seasonally dependent vertical drainage of airborne particles.
There are remarkable differences between summer intrusions, which can rise to the peaks of the mountains (high level gloom), at 2400 m, and those of winter, more frequent in the lower troposphere (anticyclonic gloom).
Anticyclonic gloom is associated with strong, stationary anticyclonic conditions forced by dust accumulation between the soil and the inversion layer. It can favour the decrease in height of the inversion layer and the formation of clouds that do not easily precipitate as rain, hence persisting for a longer time and providing a more stable sea of clouds. The dust is trapped by the sea of clouds (sea of dust) and is prevented from reaching the topmost level in the islands (above 1500 m).
The aerosol index provided by the TOMS (Total Ozone Mapping Spectrometer) is one of the most widely accepted products for detecting the daily aerosol content. TOMS Level 3 data are gridded in squares of \\(1^{\\circ}\\times 1.25^{\\circ}\\) (latitude and longitude respectively) and are available online at http://toms.gsfc.nasa.gov. The spatial coverage is global, 90\\({}^{\\circ}\\)S-90\\({}^{\\circ}\\)N, and the temporal resolution is daily. Moreover, several techniques have been developed _in situ_ to characterize the presence of dust locally at the Canarian observatories. In particular, a parameter related to sky transparency, the atmospheric extinction coefficient in the \\(V\\) (551 nm) and \\(r\\) (625 nm) bands, has been measured at the Observatorio del Roque de los Muchachos (ORM) on La Palma since 1984 by the Carlsberg Automatic Meridian Circle (CAMC). The archive is in the public domain at http://www.ast.cam.ac.uk/~dwe/SRF/camc_extinction.html and provides a good temporal comparison with the values retrieved with remote sensing techniques from the Total Ozone Mapping Spectrometer on board the Nimbus7 satellite and from other probes (Aura/OMI, Terra/MODIS and Aqua/MODIS, MSG1(Met8)/SEVIRI and ENVISAT/SCIAMACHY). Our main aim is to overlap the geographical area of the ORM with the satellite data; for this reason we have used Level 2 data. Level 0 data are the raw data from the satellite; Level 1 data are calibrated and geolocated, keeping the original sampling pattern; the Level 2 data used in this paper are converted into geophysical parameters but still with the original sampling pattern; finally, the Level 3 data are resampled, averaged over space, and interpolated/averaged over time (from http://people.cs.uchicago.edu/~yongzh/papers/CM_In_Lg_Scale.html).
On examination, the Level 2 data have the same spatial resolution as the Instantaneous Field of View (IFOV) of the satellite. Through a software procedure, it is possible to create files containing information on geophysical variables (data describing the solid earth, marine, atmosphere, etc., properties over a particular geographical area) and field values such as seconds, latitude, longitude, reflectivity in different channels, the ozone column, the aerosol index, aerosol optical depth, cloud, land and ocean fractions, SO\\({}_{2}\\) and radiance. From remote sensing and _in situ_ data it is possible to derive the cloud coverage and climatic trend.
The purpose of this study is the analysis of new approaches to the study of the aerosol content above astronomical sites. Our objective is to calibrate the extinction values in the \\(V\\) band (550 nm) (more details in section 3.1) with remote sensing data retrieved from satellite platforms.
The paper is organized as follows: Section 2 describes the problem and background; Sections 3 and 4 concern the comparison of _in situ_ atmospheric extinction data with the aerosol index provided by TOMS (Level 3 data); Sections 5 and 6 deal with the analysis of Level 2 data from other satellites and their validity for site characterization by comparing the satellite results with _in situ_ measurements; and the summary and outlook are given in Section 7.
Two appendices have been included with complementary information. Appendix I describes the format of the satellite data, indicating the official websites for data access, and Appendix II includes a list of acronyms to aid in following the terminology used in the paper.
## 2 Meteorological and geophysical scenarios
### The trade wind inversion as a determining factor of aerosol distribution
Site testing campaigns are at present performed within the classical scheme of optical seeing properties, meteorological frequencies, sky darkness, cloudiness, etc. New concepts related to geophysical properties (seismicity and microseismicity), local climate variability, atmospheric conditions related to the optical turbulence (tropospheric and ground wind regimes) and aerosol presence have recently been introduced in the era of selecting the best sites for hosting a new generation of extremely large telescopes (which feature a filled aperture collector larger than 40 m, and which are considered worldwide as one of the highest priorities in ground-based astronomy), telescope and dome designs, and for feasibility studies of adaptive optics (Munoz-Tunon, 2002; Munoz-Tunon et al., 2004; Varela et al., 2002; Varela et al., 2006; Munoz-Tunon et al., 2007).
The Canarian Observatories are among the top sites for astronomical observations and have been monitored and characterized over several decades (Vernin et al., 2002). The trade wind scenario and the cold oceanic stream, in combination with the local orography, play an important role in the retention of low cloud layers well below the summits to the windward (north) side of the islands, above which the air is dry and stable (the cloud layers also trap a great deal of light pollution and aerosols from the lower troposphere).
The trade winds and the thermal inversion layer between 1000 and 1500 m (shown in Fig. 1) have been the object of many studies over the last 50 years, either indirectly from observations of the stratocumulus layer (known locally as the "sea of clouds"), which forms at the condensation level (Font-Tullot, 1956), or from radiosondes (Huetz-de-Lemps, 1969). For much of the year, the trade wind inversion layer separates two very different airmasses: the maritime mixing layer (MML) and the free troposphere layer (TL) (Torres et al., 2001). The presence of the inversion layer is crucial in the airmass flows reaching the islands and in the downflow at high elevations that they undergo. Saharan dust invasions mostly affect the eastern islands, but can occasionally reach the islands of Tenerife and La Palma; normally, however, they do not reach the level of the Observatories (the ORM is 2396 m and the OT 2390 m above mean sea level).

Figure 1: Trade wind behaviour in the low mixing maritime layer (MML) and in the tropospheric layer (TL). From http://cip.es/personales/oaa/nubes/nubes.htm.
### Airmass intrusions in the Canary Islands
Airmasses are classified into three types according to their origin and permanence over continental landmasses and sea: Atlantic (marine, ClNa), European (anthropogenic emissions from sulphates and carbon) and African (dust, mineral aerosols). The most frequent are chlorides, i.e. clean salts of oceanic origin that do not affect astronomical observations, and that reach or exceed half the total contribution of mass flux over the summits. European airmasses are almost always of anthropogenic origin (sulphates and carbon) and are of scant importance (between 0.8 and 7.2%; see Romero 2003).
A study of the origins and distribution of air flows performed by Torres et al. (2003) distinguished the origins of airmasses that affect the lower (MML) and upper (TL) layers, the latter being at the height of the astronomical observatories. Daily isentropic retrotrajectories (at 00:00 and 12:00 GMT) were used during the AEROCE (Atmosphere/Ocean Chemistry Experiment) at Izana (2367 m above MSL, 1986-97) and the Punta del Hidalgo Lighthouse (sea level, 1988-97), each airmass being assigned an origin: North American, North Atlantic, Subtropical Atlantic, Europe, Local and Africa, the summer and winter periods being separated.
The airmass provenance in the lower (mixing) layer close to the sea (MML) is Northern Atlantic (mean 59.6%), European (mean 19%) and African (0% in summer and 23% in winter), whereas that in the troposphere layer (TL) is mainly Northern Atlantic (44.2%) and African (17.4%), with a minimum in April (5.3%) and a maximum in August (34.5%) (Torres et al. 2003).
African summer intrusions are therefore almost absent in the MML and more intense in the TL because of the daily thermal convection reaching the higher atmospheric layers. In winter, intrusions into the TL are less frequent. The airmass is carried horizontally by the prevailing wind and is affected by a process of separation, the larger particles (\\(>10\\)\\(\\mu\\)m) leaving sediments at ground level over a short time and the smaller ones being carried across the Atlantic Ocean to distances of hundreds or thousands of kilometres from their place of origin. This sand creates a large feature (plume) that is visible in satellite images (see, for example, the EP/TOMS or Aura/OMI websites) and extends from the African coast across a band about 20\\({}^{\\circ}\\) in latitude. During winter, the prevailing wind carries the dust from the south of the Canary islands through an average of 10\\({}^{\\circ}\\) in latitude to the Cape Verde Islands. Considering that the dust plumes reach an extension of about 2000 km, during the winter they rarely reach La Palma. Instead, in summer, winds above 4 km in altitude can take these particles as far north as 30\\({}^{\\circ}\\) in latitude. In these conditions the dust plume over the ORM is composed mainly of small quartz particles in the range 0.5-10 \\(\\mu\\)m, and the biggest particles precipitate (dry deposition). Typical dust storms take 3-8 days to disperse and deposit 1-2.4 million tonnes/year. Finally, the clouds of aerosols dissipate through advective processes or through rain (humid deposition).
## 3 Database for the analysis: _in situ_ extinction coefficient values and aerosol indexes provided by the TOMS and OMI spectrometers on board satellites
### Atmospheric extinction coefficient measured above the ORM (La Palma)
Atmospheric extinction is the astronomical parameter that determines transparency of the sky. Extinction is associated with the absorption/scattering of incoming photons by the Earth's atmosphere and is characterized by the extinction coefficient, \\(K\\). Sources of sky transparency degradation are clouds (water vapour) and aerosols (dust particles included). This coefficient is wavelength-dependent and can be determined by making observations of a star at different airmasses. For details of the astronomical technique for deriving the extinction coefficient values we refer the reader to King (1985).
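In essence, for each star one fits the Bouguer line \\(m_{\\rm obs}=m_{0}+KX\\) to magnitudes observed at several airmasses \\(X\\); a minimal sketch (our own, not the CAMC pipeline):

```python
import numpy as np

def extinction_coefficient(airmass, mag_obs):
    """Slope of the Bouguer line m_obs = m_0 + K*X fitted by least squares
    to photometry of one standard star over a night; the slope K is the
    extinction coefficient in mag/airmass (cf. King 1985)."""
    K, m0 = np.polyfit(np.asarray(airmass), np.asarray(mag_obs), 1)
    return K
```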
Long baseline extinction values for the ORM have been measured continuously at the Carlsberg Automatic Meridian Circle (CAMC, http://www.ast.cam.ac.uk/~dwe/SRF/camc_extinction.html) in the \\(V\\) band (550 nm) and more recently in the Sloan \\(r^{\\prime}\\) band (625 nm). To our knowledge, this is the largest available homogeneous database for an observing site.
Extinction values and their stability throughout the night are essential for determining the accuracy of astronomical measurements. As nights with low and constant extinction are classified as photometric, this parameter is considered among those relevant for characterizing an observing site.
On photometric dust-free nights the median of the extinction is 0.19 mag/airmass at 480 nm, 0.09 mag/airmass at 625 nm and 0.05 mag/airmass at 767 nm. The extinction coefficients reveal that on clear days (denoted coronal-pure) the extinction values at 680 nm are below about 0.07-0.09 mag/airmass, while on dusty days (diffuse-absorbing) they are always higher (Jimenez & Gonzalez Jorge 1998). The threshold that identifies the presence of dust is at 0.153 mag/airmass (Guerrero et al. 1998).
At the ORM, the extinction in \\(V\\) is less than 0.2 mag/airmass on 88% of the nights, and extinction in excess of 0.5 mag/airmass occurs on less than 1% of nights. A statistical seasonal difference is also detected (see Guerrero et al. 1998).
Figure 2 shows the cumulative frequency of extinction over the ORM during winter (October-May) and summer (June-September). In summer, 75% of the nights are free of dust, while at other times of the year over 90% of the nights are dust-free (Guerrero et al. 1998). These results are consistent with those provided by Torres et al. 2003.
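The statistics quoted above reduce to simple operations on the nightly KV series; a sketch using the Guerrero et al. 1998 threshold:

```python
import numpy as np

def extinction_stats(kv, threshold=0.153):
    """Fraction of dusty nights (KV >= threshold) and the empirical
    cumulative frequency of the nightly V-band extinction coefficients."""
    kv = np.sort(np.asarray(kv))
    cum_freq = np.arange(1, kv.size + 1) / kv.size
    dusty_fraction = np.mean(kv >= threshold)
    return dusty_fraction, kv, cum_freq
```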
### Characterization of aerosols in the near-UV
Data from the Total Ozone Mapping Spectrometer (TOMS) and from the Ozone Monitoring Instrument (OMI) are analysed to detect absorbing and non-absorbing aerosols at ultraviolet (UV) wavelengths. A detailed description of these instruments and their products is given at the official website (http://toms.gsfc.nasa.gov/). Absorbing aerosols include smoke deriving from biomass burning, industrial activity, mineral dust, volcanic aerosols and soot. Non-absorbing aerosols are mostly sulphates. The UV spectral contrast is useful for retrieving values over land and ocean because of its low reflectivity in the spectrometer range. Backscattered radiation at \\(\\lambda\\) 340, 360 and 380 nm is caused mainly by molecular Rayleigh scattering, terrestrial reflection and diffusion by aerosols and clouds (through Mie scattering). Quantitatively, aerosol detection from TOMS and OMI is given by:
\\[\\Delta N_{\\lambda}=-100\\left\\{\\log_{10}\\left[\\left(\\frac{I_{331}}{I_{360}} \\right)_{\\rm meas}\\right]-\\log_{10}\\left[\\left(\\frac{I_{331}}{I_{360}}\\right)_{ \\rm calc}\\right]\\right\\},\\]
where \\(I_{\\rm meas}\\) is the backscattered radiance at the wavelengths measured by TOMS and OMI (with Mie and Rayleigh scattering, and absorption) and \\(I_{\\rm calc}\\) is the radiance calculated from the model molecular atmosphere with Rayleigh scatterers. The ratio \\(\\frac{I_{331}}{I_{360}}\\) depends strongly on the absorbing optical thickness of the Mie scatterers. \\(\\Delta N_{\\lambda}\\) is also called the aerosol index (AI). AI \\(>\\) 0 indicates the presence of absorbing aerosols, clouds give AI \\(\\approx\\) 0 (within \\(\\pm\\)0.2), and negative AI values indicate non-absorbing aerosols (Herman et al. 1997; Torres et al. 1998). Significant absorption has been set at AI \\(>\\) 0.7 by Siher et al. 2004 and at AI \\(>\\) 0.6 in this work (see explanation in the next section and in Fig. 10).
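In code, the residue defined above is simply (a sketch with hypothetical radiance inputs):

```python
import numpy as np

def aerosol_index(I331_meas, I360_meas, I331_calc, I360_calc):
    """TOMS/OMI aerosol index: difference between the measured and the
    pure-Rayleigh (model) spectral contrast at 331 and 360 nm. AI > 0.6 is
    taken here as significant absorption."""
    return -100.0 * (np.log10(I331_meas / I360_meas)
                     - np.log10(I331_calc / I360_calc))
```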
The next section (Results I) concerns the first comparison of the _in situ_ atmospheric extinction coefficient with the aerosol index provided by TOMS using Level 3 data. In Section 5 we present recent instruments on board satellites suitable for aerosol content monitoring and in Section 6 (Results II) we compare the atmospheric extinction coefficient with the aerosol index and aerosol optical depth provided by OMI and MODIS respectively, using Level 2 data.
## 4 Results I: Comparison of AI Provided by TOMS with KV from the CAMC
Atmospheric extinction is related to the internationally recognized term aerosol optical depth (AOD) (or thickness) and to the aerosol index. In this section we analyse the first results of comparing the atmospheric extinction coefficient with the aerosol index provided by TOMS (Total Ozone Mapping Spectrometer) on board NASA's Earth Probe satellite. We use Level 3 aerosol data, which are available at:
ftp://toms.gsfc.nasa.gov/pub/eptoms/data/aerosol/.
In a previous paper (Varela et al. 2004a) we presented the aerosol index from the sector corresponding to the Teide Observatory (Tenerife) against the atmospheric extinction coefficient recorded at the Roque de los Muchachos Observatory (La Palma). The Teide Observatory (OT) is situated 2390 m above sea level at Izana, and its geographical coordinates are 16\\({}^{\\circ}\\) 30\\({}^{\\prime}\\) 35" West and 28\\({}^{\\circ}\\) 18\\({}^{\\prime}\\) 00" North. The Observatorio del Roque de los Muchachos is situated on the edge of the Caldera de Taburiente National Park, 2396 m above sea level, and its geographical coordinates are 17\\({}^{\\circ}\\) 52\\({}^{\\prime}\\) 34" West and 28\\({}^{\\circ}\\) 45\\({}^{\\prime}\\) 34" North. The two observatories are about 133 km apart. We compared the AI from EP/TOMS data centred at the OT and at the ORM sectors (see figure 5 of Varela et al. 2004b).
The consistency of AI from both boxes centred on the OT and on the ORM shown in Fig. 3 points to a similar tropospheric aerosol distribution at both observatories.
In the next section (Results II) we show the results of comparing the atmospheric extinction coefficient with the aerosol index provided by OMI on board NASA's Aura satellite and with the aerosol optical depth provided by MODIS on board NASA's Terra and Aqua satellites.
To demonstrate why there is not necessarily any correlation between the AI and atmospheric extinction we have selected an intense invasion of African dust over the Canary Islands that occurred on 2000 February 26. Figure 4 shows the plume of dust as recorded by TOMS on board the Earth Probe. We have used AI values obtained from TOMS for the Canarian Observatories and atmospheric extinction values provided by the CAMC at the ORM during the night before and after this episode. AI reaches its maximum (2.4 at the OT and 2.7 at the ORM) on February 27, precisely on the day when the plume reached Tenerife. Nevertheless, the CAMC measures an extinction value of less than 0.2 mag, with a high number of photometric hours. The reason is that when the plume arrived at La Palma it did not reach the level of the Observatory.
In Fig. 5 we represent the atmospheric extinction in the V band (KV) provided by the Carlsberg Meridian Circle against the aerosol index provided by EP/TOMS from 1996 to 2004.
We have classified four quadrants using KV=0.15 mag/airmass as the threshold for dusty nights and AI=0 as the threshold of the presence of absorbing aerosols.
Figure 5 shows a large number of TOMS data indicating the presence of absorbing aerosols coincident with CAMC values that show low or no atmospheric extinction (bottom right in Fig. 5). This result is due to the presence of a layer of dust below the Observatory level. This dust layer is high and/or thick enough to be detected by the TOMS. This condition appears in 17% of all cases.

Figure 2: Frequency of extinction over the ORM during winter (top) and summer (centre). In both cases the modal value is 0.11 mag/airmass. Their corresponding cumulative frequencies are also shown (bottom). The vertical line indicates the extinction coefficient limit for dusty nights, KV \\(\\geq\\) 0.153 mag/airmass (Guerrero et al. 1998).
The case of agreement between large extinction coefficient and large AI (top right in Fig. 5) is associated with dust presence in the upper troposphere layer (TL). We have verified that this case occurs in only 11% of all measurements, with 58% of these corresponding to the summer months (June-September), when the warmest surface winds sweep dust from the African continent and rise towards the upper layers by convective processes (Romero & Cuevas 2002; Torres et al. 2003).
The opposite case, i.e. low extinction coefficient and low aerosol index (bottom left in Fig. 5) happens in 59% of cases.
In the top left quadrant in Fig. 5, we find low aerosol index values but large KV (13% of the data points). A possible explanation is given by Romero & Cuevas 2002, who argue that the cause could be local and concentrated dust in a small area intercepting the light from a star (and therefore measured by CAMC) but that is not representative of the average values from a \\(1^{\\circ}\\times 1.25^{\\circ}\\) region provided by TOMS; or the presence of high fine dust layers above the Observatory that are not detected by TOMS. Here we propose cirrus or other clouds as the explanation; in fact, 78% of the points located in the top-left quadrant correspond to the winter months, when the possibility of the presence of medium-high clouds is greater.
In the four quadrants, the Pearson correlation coefficient and its square are smaller than 0.1, so there is no correlation at all between the two parameters. The maximum correlation is found in the upper-right quadrant (mathematically, the first quadrant), i.e. AI positive and extinction coefficient larger than 0.15 mag/airmass. When this interval is narrowed to AI larger than 0.7 and KV larger than 0.2 mag/airmass (only 4% of cases), this correlation increases slightly. Only when
Figure 4: Dust plume obtained from TOMS data over the western Sahara Desert and extending over the Atlantic Ocean and Canary islands. From http://toms.gsfc.nasa.gov/aerosls/africa/canary.html.
Figure 3: Aerosol index provided by EP/TOMS from 1996 to 2004 at the OT (red triangles) and at the ORM (blue crosses). These profiles indicate similar aerosol tropospheric distributions at both observatories.
the summer period is considered under these last conditions can the correlation reach 0.55.
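The quadrant bookkeeping used throughout this section amounts to the following sketch (with hypothetical co-registered daily AI and KV arrays):

```python
import numpy as np

def quadrant_stats(ai, kv, ai_thr=0.0, kv_thr=0.15):
    """Fraction of points and Pearson correlation in each quadrant of the
    (AI, KV) plane defined by the dust thresholds (cf. Fig. 5)."""
    ai, kv = np.asarray(ai), np.asarray(kv)
    quads = {'top-right':    (ai > ai_thr)  & (kv > kv_thr),
             'top-left':     (ai <= ai_thr) & (kv > kv_thr),
             'bottom-left':  (ai <= ai_thr) & (kv <= kv_thr),
             'bottom-right': (ai > ai_thr)  & (kv <= kv_thr)}
    stats = {}
    for name, sel in quads.items():
        r = np.corrcoef(ai[sel], kv[sel])[0, 1] if sel.sum() > 2 else np.nan
        stats[name] = (sel.mean(), r)   # (fraction of points, Pearson r)
    return stats
```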
This last result coincides with Siher et al. 2004 (see figure 2 of that paper), who correlate AI TOMS data recorded by Nimbus7 and the atmospheric extinction coefficient at the ORM during summertime dusty events (AI\\(>\\)0.7 and KV\\(>\\)0.2). The correlation found is 0.76. It is important, however, to emphasize that this relatively high (0.76) correlation occurs for a subsample (4% of the total cases) selected by the conditions AI\\(>\\)0.7 and KV\\(>\\)0.2. As a result of the lack of statistical meaning in this result, the yearly average extinction derived from TOMS (figure 6 in Siher et al. 2004) does not reflect the real values measured _in situ_ at the Observatory.
This correlation occurs in summertime, when the warmest winds can drive the dust to the level of the Observatory.
The non-correlation between AI satellite remote sensing data and _in situ_ extinction data is also reported by Romero & Cuevas 2002 (see figure 3 of that paper), who compare the aerosol index (AI) provided by TOMS against the aerosol optical depth (AOD) obtained with a multifilter rotating shadow band radiometer (MFRSR) at the Izana Atmospheric Observatory (OAI) (close to the Teide Observatory, 133 km from the ORM), an institution belonging to the Instituto Nacional de Meteorologia (INM), now called Agencia Estatal de Meteorologia (AEMET). The MFRSR measures global and diffuse radiation in six narrow band channels between 414 nm and 936 nm (with bandwidths between 10 and 12 nm).
Again, in Romero & Cuevas 2002 a low correlation coefficient is attained when the AOD is larger than 0.1 at 414.2 nm and the AI is larger than 0.5--coinciding with dust invasions taking place owing to convective processes from the African continent towards high levels--and will then be detected by both TOMS and the MFRSR. Otherwise, in dust-free conditions, there is no correlation between the _in situ_ measurements (AOD) and AI.
The prevailing trade winds, the inversion layer presence and the abrupt orography of the western Canary islands (La Palma and Tenerife) play an important role in the downflow of dust at high elevations.
The low spatial resolution of the EP/TOMS for astronomical site evaluation explains the absence of correlation; this spatial resolution cannot distinguish local effects, since it averages over a surface equivalent to an entire island (the \\(1^{\\circ}\\times 1.25^{\\circ}\\) region of TOMS Level 3 data made available in the GSFC TOMS archives, i.e. 111 km \\(\\times\\) 139 km); neither does it distinguish the vertical dust drainage. The latter effect is known as anticyclonic gloom: in many cases, even in such dust storm episodes, the airborne dust particles do not necessarily reach the level of the Observatory (see Section 2 of this paper).
It is also possible to retrieve higher resolution data from the TOMS instrument, at the spatial resolution of the IFOV (35 km \\(\\times\\) 35 km), i.e. TOMS Level 2 data (see Bertolin 2007); these show that the correlation between CAMC KV and EP/TOMS AI for the ORM site on La Palma improves only slightly when the data are analysed at higher spatial resolution.
The next step in our study is to explore the use of other instruments on board different satellites that operate in bands of astronomical interest (the visible and NIR) with higher spatial resolution than TOMS and with long-term databases (longer than a few years). These instruments will provide updated high resolution images for astronomical site evaluation (to date, only TOMS data have been used for aerosol content characterization above an astronomical observatory).
Figure 5: Aerosol index provided by TOMS against the atmospheric extinction coefficient in \\(V\\) (integrated values) from the CAMC above the ORM. The red horizontal line is a threshold indicating the presence of dust in the atmosphere (Guerrero et al. 1998). The blue vertical dashed line is the threshold line indicating the presence of absorbing aerosols in the atmosphere. The top-right quadrant corresponds to points seen as dusty (from CAMC KV \\(>\\) 0.15) and of high absorbing aerosol presence (AI \\(>\\) 0.7 has been used by Siher et al. 2004).
## 5 Recent Instruments on Board Satellites for Aerosol Content Monitoring for Astronomical Site Evaluation
### Updated images
We have explored the use of other detectors on board different satellites that operate in bands of astronomical interest (the visible and NIR) with higher spatial resolution than TOMS.
The new generation of satellites Terra&Aqua/MODIS, Aura/OMI, MSG1/SEVIRI, etc.--acronyms described in Appendix II--provide better-resolution images that can be used to demonstrate the existence or otherwise of any correlation between the presence of aerosols and atmospheric extinction. Similarly to the case mentioned above (February 2000), we examine 2007 March 10, when a thick plume of dust blew over the Canary islands from the west coast of Africa.
The Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Aqua satellite took the picture shown in Fig. 6 for this day (with much better resolution than TOMS, see Table 1). The MODIS Aerosol (MOD04_L2) product monitors the ambient aerosol optical thickness over the oceans globally and over a portion of the continents and contains data that have a spatial resolution (pixel size) of 10 km \\(\\times\\) 10 km (at nadir). More information about grids and granule coverage is available in the official MODIS website ([http://modis.gsfc.nasa.gov/](http://modis.gsfc.nasa.gov/)). We can see that the eastern islands are more affected by the dust plume than the western ones. Also, the abrupt orography and the inversion layer play an important role in retaining the dust below the summits (where the observatories are located) of the highest western islands (Tenerife and La Palma). This effect is the above-mentioned anticyclonic gloom.
This situation has been verified by data provided by the CAMC, which indicate no significant atmospheric extinction by dust:
* Extinction Coefficient in r\\({}^{\\prime}\\)= 0.083 (good quality dust-free night) i.e. KV = 0.12 mag/airmass.
* Number of hours of photometric data taken=10.22
* Number of hours of non-photometric data taken=0.00
In the next section we summarize data retrieved from the new generation of satellites. Despite the much better spatial and spectral resolution of the recent satellite aerosol measurements (AI and AOD), they are not at the moment sufficient for the aerosol content monitoring of an astronomical observatory. _In situ_ data are also required, in particular at those astronomical sites with abrupt orography (ORM, Mauna Kea or San Pedro Martir). Spatial resolution of the order of the observatory area is needed.
### Updated data
In order to benefit from satellite data for local site characterization, we have gathered and studied NASA and ESA satellite data planned to retrieve information about aerosols, clouds, ozone and other trace gases (N\\({}_{2}\\), O\\({}_{2}\\), H\\({}_{2}\\)O, CO\\({}_{2}\\), CH\\({}_{4}\\)) that are found in the terrestrial atmosphere. In this paper we have centred our analysis on the aerosol content.
An overview of selected satellites, parameters and sampled periods is given in Fig. 7, which includes other parameters (ozone, cloud condensation nuclei (CCN) and cloud fraction) to be analysed in a future paper.
To ensure that the retrieved remote sensing data fields from different satellites (such as aerosol values or geolocation parameters) are precisely over the ORM site coordinates to compare with the atmospheric extinction by CAMC, we decided to work with Level 2 data, which have a projected effective pixel size given by the instantaneous field of view (IFOV). We have also selected the longer term database (to retrieve more data for the KV comparison). The satellites/instruments and parameters used in this work are summarized in Table 1.
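The essential step is then to keep only those Level 2 pixels whose footprints cover the observatory; a sketch (the array names are hypothetical; the ORM coordinates are those quoted in Section 4):

```python
import numpy as np

ORM_LAT, ORM_LON = 28.7594, -17.8761   # 28 deg 45' 34" N, 17 deg 52' 34" W

def pixels_over_site(lat, lon, field, half_box=0.15):
    """Select Level 2 pixels (e.g. OMI AI or MODIS AOD, at the original IFOV
    sampling) whose centres fall within a small box around the ORM, so the
    satellite value refers to the observatory rather than a whole-island
    average. lat, lon, field: equally shaped arrays from one granule."""
    sel = (np.abs(lat - ORM_LAT) < half_box) & (np.abs(lon - ORM_LON) < half_box)
    return field[sel]
```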
The parameters selected and retrieved are the aerosol index (AI) provided by OMI (Ozone Monitoring Instrument) on board Aura--with visible and ultraviolet channels and with a spatial resolution from 13 km \\(\\times\\) 24 km to 24 km \\(\\times\\) 48 km--and the aerosol optical depth (AOD) provided by MODIS (Moderate Resolution Imaging Spectroradiometer) on board Terra (from 2000) and Aqua (from 2002)--with its 36 spectral bands, from 0.47 to 14.24 \\(\\mu\\)m, including two new channels at 0.405 and 0.550 \\(\\mu\\)m, and a spatial resolution of 10 km \\(\\times\\) 10 km. A detailed description of the instruments, parameters and data access URLs is given by Varela et al. 2007. Data from ERS-2/GOME and MSG1/SEVIRI have not been included in this work because their databases are too short-term for statistical analysis, but they do offer valuable potential as instruments for the future when the database increases. The ENVISAT/SCIAMACHY database has not been used in this analysis because it does not provide higher spatial resolution than OMI and MODIS.
In Appendix I we indicate the official websites to retrieve datasets and the data formats. The correlation analysis between AI and AOD with KV is given in Section 6.
### Using Aura/OMI data
Here we show the results of comparing the AI provided by TOMS and OMI. OMI on the EOS (Earth Observing System) Aura platform continues the TOMS record for total ozone and other atmospheric parameters and can distinguish between aerosol types, such as smoke, dust and sulphates.
Figure 8 shows the AI values provided by EP/TOMS and Aura/OMI over the Roque de los Muchachos Observatory on La Palma. Note the dispersion in the TOMS data. The improvement shown in the OMI data (much less dispersion) derives from better horizontal and vertical spatial resolution compared with its predecessor, TOMS on Earth Probe. Moreover, TOMS data from 2002 onwards should not be used for trend analysis because of calibration errors (http://jwocky.gsfc.nasa.gov/nes/news.html).
### Using Terra/MODIS and Aqua/MODIS data
MODIS on board the NASA Terra and Aqua satellites provides not AI values but an equivalent parameter, the aerosol optical depth (AOD).
In Fig. 9 we show the AOD for Terra and Aqua; the maximum and minimum AOD values agree well, and the consistency of the two data sets is excellent.
The AI threshold for dusty nights is 0.6; we now have to determine the AOD threshold for dusty events, but AI and AOD are not in a 1:1 ratio (they depend on refractive index, particle size distribution and height of the atmospheric layer).
To determine the AI and AOD thresholds for dusty days we used information provided by a collaboration between the Spanish Environment Ministry, the Superior Council of Scientific Researches (CSIC) and the National Institute of Meteorology (INM) for the study and analysis of the atmospheric pollution produced by airborne aerosols in Spain. They provided us with the days on which _calima_ (dust intrusion) occurred in the Canary islands. As we later demonstrate in the plots, in our opinion these events happened at lower altitudes than those of the observatories, so they do not influence the measurements of atmospheric extinction. In Fig. 10 we show the AOD for Terra and Aqua and the AI for OMI. AI values are larger than 0.6, and most of the AOD data points for Terra and Aqua fall above 0.10. This limit is consistent with that provided by Romero & Cuevas (2002). Therefore, AI\(>\)0.6 (0.1 units smaller than the value provided by Siher et al. 2004) and AOD\(>\)0.1 will be the thresholds for dusty episodes.
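A minimal sketch of the dusty-episode flag implied by these thresholds follows; the array names are hypothetical, and flagging a day when either criterion is exceeded is our own reading of the text.

```python
import numpy as np

AI_THRESHOLD = 0.6   # OMI aerosol index limit for dusty episodes
AOD_THRESHOLD = 0.1  # MODIS aerosol optical depth limit for dusty episodes

def dusty_days(ai, aod):
    """Boolean mask of days flagged as dusty by either satellite criterion."""
    ai = np.asarray(ai, dtype=float)
    aod = np.asarray(aod, dtype=float)
    return (ai > AI_THRESHOLD) | (aod > AOD_THRESHOLD)
```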
## 6 Results II: Comparison of AI Provided by OMI and AOD Provided by MODIS with KV from the CAMC
In this section we analyse the correlation between the aerosol index (AI) and the aerosol optical depth (AOD) provided by the satellites and the atmospheric extinction coefficient (KV) measured by _in situ_ techniques (CAMC).
### AI from OMI vs KV
In this subsection we discuss the correlation between the atmospheric extinction in \(V\) (551 nm) and the aerosol index measured by OMI in the same channels as TOMS (331 and 360 nm), represented in Fig. 11. In order to determine whether there exists a correlation between the two parameters, we also consider the presence or otherwise of clouds and the situation on days with _calima_.
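The correlation test itself could be implemented as in the sketch below; Pearson's r is one standard choice, although the analysis in this paper is essentially graphical and does not prescribe a specific estimator.

```python
import numpy as np
from scipy.stats import pearsonr

def correlate(aerosol, kv):
    """Pearson r and p-value for matched (same-night) aerosol/KV samples."""
    aerosol = np.asarray(aerosol, dtype=float)
    kv = np.asarray(kv, dtype=float)
    good = np.isfinite(aerosol) & np.isfinite(kv)
    return pearsonr(aerosol[good], kv[good])
```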
In Fig. 11 we observe an interesting situation not revealed in the same graph for the EP/TOMS aerosol index (Fig. 5). Once we draw the atmospheric extinction (KV) threshold (red line) and the limit for absorbing aerosols (green line), the plane is divided into four quadrants. In the first (upper right) we have absorbing aerosols together with atmospheric extinction values above the limit for photometric dusty nights. No points fall in the second quadrant (upper left); this is very important because it means that non-absorbing aerosols do not drive the extinction above this threshold. In the third quadrant (bottom left) there are still non-absorbing aerosols, but on clear nights; this means that this type of sulphate and/or marine aerosol does not influence the extinction, as we expect. In the last quadrant (bottom right) absorbing aerosols are again seen in the presence of low extinction values. An explanation could be the presence of weakly
\\begin{table}
\\begin{tabular}{l c c c} SATELLITE/INSTRUMENT & HORIZONTAL RESOLUTION & PARAMETER & PERIOD \\\\ \\hline Terra/MODIS & 10\\(\\times\\)10 km\\({}^{2}\\) & Aerosol Optical Depth (AOD) & from 2000 \\\\ Aqua/MODIS & 10\\(\\times\\)10 km\\({}^{2}\\) & Aerosol Optical Depth (AOD) & from 2002 \\\\ Aura/OMI & From 13\\(\\times\\)24 km\\({}^{2}\\) to 24\\(\\times\\)48 km\\({}^{2}\\) & Aerosol Index (AI) & from 2004 \\\\ \\end{tabular}
\\end{table}
Table 1: Overview of instruments on board satellites that provide parameters useful for our work
Figure 6: Plumes of dust over the Canary Islands from the west coast of Africa observed by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Aqua satellite on 2007 March 10 (full image at [http://earthobservatory.nasa.gov/Newsroom/NewImages/Images/canary_amo_2007069_lrg.jpg](http://earthobservatory.nasa.gov/Newsroom/NewImages/Images/canary_amo_2007069_lrg.jpg)). The easternmost island is just over 100 kilometres from the African coast. The peaks of La Palma and Tenerife remain clear of dust.
absorbent particles, such as carbonaceous grains or clouds, or a more complex situation with a mixture of cloudy and aerosol scenarios, or the presence of absorbing aerosols below the level of the observatory. The lack of correlation indicates that AI from OMI should not be used for atmospheric extinction characterization at the Roque de los Muchachos Observatory.
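The four-quadrant reading of Fig. 11 can be summarized by the classification sketched below, using the two limits quoted in the text (KV = 0.15 mag/airmass and AI = 0.6); the quadrant labels paraphrase the interpretation given above.

```python
KV_LIMIT = 0.15  # mag/airmass, dusty-night threshold
AI_LIMIT = 0.6   # absorbing-aerosol limit

def quadrant(ai, kv):
    """Classify an (AI, KV) point into the four quadrants of Fig. 11."""
    if ai > AI_LIMIT and kv > KV_LIMIT:
        return "Q1: absorbing aerosols, dusty night"
    if ai <= AI_LIMIT and kv > KV_LIMIT:
        return "Q2: non-absorbing aerosols, high extinction (empty)"
    if ai <= AI_LIMIT and kv <= KV_LIMIT:
        return "Q3: non-absorbing aerosols, clear night"
    return "Q4: absorbing aerosols or clouds below the observatory"
```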
We added pink points for the dust episodes of _calima_ retrieved at ground level, and yellow points for those corresponding to cloud presence (reflectivity at 331 nm greater than 15%). This plot is very informative: from the pink points we can see that the _calima_ episodes lie mainly below the threshold for dusty nights. This means that they do not affect the extinction measurements, because the _calima_ sits at a lower level than the altitude of the observing site; such dusty events reach the Roque, and are detected, only on some occasions. We also note that the clouds are almost all below this limit.
This means that values above the threshold for dusty nights correspond to absorbing dust particles at the height of the observatory. In this zone of the plot there are almost no clouds and only some _calima_ events that reach the astronomical site. Thanks to strong convective motions, they may be driven along the orographic contours of the caldera that borders Roque de los Muchachos. In fact, the spatial resolution of OMI ranges from 13 km \(\times\) 24 km to 24 km \(\times\) 48 km and can contain part of the caldera.
Figure 8: Aerosol Index provided by EP/TOMS (blue points on the left) and Aura/OMI (pink points on the right) in time over the Roque de los Muchachos Observatory on La Palma. The improvement shown in the OMI data (much less dispersion than TOMS data) derives from better horizontal and vertical spatial resolution compared with its predecessor, TOMS in Earth Probe.
Figure 7: Overview of parameters (vertical labels) and periods provided by instruments on board ESA and NASA satellites (shown with different colour codes). For the aerosol content only Aura/OMI, Terra/MODIS and Aqua/MODIS results are shown in this paper to provide high spatial resolution and a longer-term database. Data from **ERS2/GOME and MSG1/SEVIRI** have not been included in this work because the databases are too short-term for statistical analysis. The **ENVISAT/SCIAMACHY** database has not been used in this analysis because it does not provide higher spatial resolutions than OMI and MODIS.
### AOD from Terra/MODIS vs KV
The MODIS Aerosol Product monitors the ambient aerosol optical depth over the oceans globally and over a portion of the continents. The aerosol size distribution is also derived over the oceans, and the aerosol type is derived over the continents. Therefore, MODIS data can help us to understand the physical parameters of aerosols affecting the Canarian Observatories. Daily Level 2 data are produced at 10 km \\(\\times\\) 10 km spatial resolution. Aerosols are one of the greatest sources of uncertainty in climate modelling. They vary in time and in space, and can lead to variations in cloud microphysics, which could impact on cloud radiative properties and on climate. The MODIS aerosol product is used to study aerosol climatology, sources and sinks of specific aerosol types (e.g. sulphates and biomass-burning aerosols), interaction of aerosols with clouds, and atmospheric corrections regarding remotely sensed land surface reflectance. Above land, dynamic aerosol models will be derived from ground-based sky measurements and used in the net retrieval process. Over the ocean, three parameters that describe aerosol loading and size distribution will be retrieved.
Figure 10: AI from OMI (blue rhombus) and AOD from MODIS (pink squares correspond to Terra values and green triangles correspond to Aqua data) under dusty conditions detected by the INM from 2004 to 2006.
Figure 9: Aerosol optical thickness provided by Terra/MODIS (blue points) and Aqua/MODIS (pink points) in time over the Roque de los Muchachos Observatory on La Palma.
There are two necessary pre-assumptions in the inversion of MODIS data: the first concerns the structure of the size distribution as a whole, and the second the log-normal modes needed to describe the volume-size distribution: a single mode for the accumulation-mode particles (radius \(<0.5~{}\mu\)m) and a single coarse mode for dust and/or salt particles (radius \(>1.0~{}\mu\)m). The aerosol parameters we therefore expect to retrieve from the aerosol Level 2 product are the ratio between the two modes, the spectral optical thickness and the mean particle size.
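For reference, a bimodal log-normal volume-size distribution of this kind can be written as below; this is the standard functional form, with our own symbols (\(C_{i}\) the columnar volume, \(r_{i}\) the median radius and \(\sigma_{i}\) the geometric standard deviation of each mode), rather than the exact notation of the MODIS algorithm:

\[\frac{dV}{d\ln r}=\sum_{i=1}^{2}\frac{C_{i}}{\sqrt{2\pi}\,\ln\sigma_{i}}\exp\left[-\frac{(\ln r-\ln r_{i})^{2}}{2\,(\ln\sigma_{i})^{2}}\right],\]

where \(i=1\) denotes the accumulation mode (radius \(<0.5~{}\mu\)m) and \(i=2\) the coarse mode (radius \(>1.0~{}\mu\)m).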
The quality control of these products will be based on comparison with ground and climatology stations.
The parameters that we have used from Level 2 in our work are: latitude and longitude as geolocation values; AOD (aerosol optical thickness at 0.55 \(\mu\)m for both Ocean [best] and Land [corrected], in a valid range from 0 to 3); aerosol type over land (0 = mixed, 1 = dust, 2 = sulphate, 3 = smoke, 4 = heavy absorbing smoke); and cloud fraction over land and ocean, in percentage.
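Decoding the integer aerosol-type flag is then a simple lookup, as sketched below with the codes quoted above; the function name is hypothetical.

```python
# Integer codes for the Level 2 "aerosol type land" flag, as listed above.
AEROSOL_TYPE_LAND = {
    0: "mixed",
    1: "dust",
    2: "sulphate",
    3: "smoke",
    4: "heavy absorbing smoke",
}

def decode_aerosol_type(code):
    """Map the Level 2 integer flag to a human-readable aerosol type."""
    return AEROSOL_TYPE_LAND.get(int(code), "unknown")
```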
In Fig. 12 we show AOD measured by MODIS on board Terra against KV.
In this plot we find two tails of data within a certain dispersion of values, and once more the majority of points fall below the KV threshold of 0.15. The flatter tail groups together most of the AOD values lower than 0.2, i.e. non-absorbing (\(<0.1\)) or weakly absorbing aerosols (marine particles, clouds above the ocean, and mixed scenarios with salt, sulphate particles and clouds).
In Fig. 12 we distinguish between terrestrial aerosols (a mixture of dust, smoke and sulphates) and marine particles, marked by blue and pink points respectively, and we mark the dust events coming from Africa (yellow points) using the data provided by the INM at ground level.
We see that the pink values sit in the tails of the plots and are present mainly at lower KV and AOD values, where we expect sulphate sea-salt aerosols and CCN; only in some cases are they detected at higher AOD with small KV values, perhaps because these aerosols and clouds are below the level of the observatory. The second tail covers two quadrants: small AOD but large KV, due to the presence of weakly absorbent aerosols at the level of the observatory or to a mixture of particles and clouds; and large AOD and large KV, mostly terrestrial aerosols at the level of the observatory. The _calima_ (yellow points) is also below the KV threshold, perhaps because of its closeness to ground level, except for some cases in which it can reach the level of the observatory.
### AOD from Aqua/MODIS vs KV
We now perform the same analysis for the MODIS instrument on board the Aqua satellite, which retrieves the same aerosol Level 2 data. In Fig. 13 we show all the Aqua data in order to see the correlation between AOD and KV. In this case too there is evidence of the two tails, with a wide dispersion in the data, most of the points lying below the KV threshold.
We plot terrestrial (blue) and marine (pink) aerosols, with yellow points indicating days with _calima_. The comments are very similar to those made previously, i.e. the marine aerosols cluster at small AOD values in a lower tail, whereas the terrestrial values are somewhat uniformly distributed, with some points at large KV indicating absorbing aerosols or clouds at the level of the observatory. Most of the points fall below the threshold for dusty nights, as do the majority of _calima_ days. The interesting fact is that only the episodes of dust with large KV follow a good linear relation with the AOD.
## 7 Summary and outlook
From this study we may draw the conclusions listed below.
Figure 11: Correlation between AI provided by Aura/OMI and KV from the CAMC. Pink points indicate dust episodes of _calima_ retrieved by the INM, and yellow points correspond to cloud presence. Red and green lines indicate the KV and AI limits for dusty nights respectively. We can see that most of the points indicating the presence of absorbing aerosols (AI larger than 0.6) but no significant atmospheric extinction (KV smaller than 0.15 mag/airmass) correspond to the presence of clouds and _calima_ below the level of the Observatory.
We have compared the AI measurements provided by the TOMS on board Earth Probe satellite (Level 3) with the atmospheric extinction coefficient provided by the CAMC at the ORM (2400 m above MSL). The main causes of the lack of correlation between both parameters are:
* The TOMS Level 3 data considered in this paper have a resolution of \\(1^{\\circ}\\times 1.25^{\\circ}\\), so the AI is averaged over areas whose size covers the entire islands of La Palma and Tenerife. High resolution Level 2 data should be used for a better fit (IFOV of 35 km \\(\\times\\) 35 km).
* The TOMS is very sensitive to the presence of highly reflective clouds because it uses channels centred on the UV to measure AI. Moreover, AI incorporates absorbing particles in ranges that do not affect atmospheric transparency in the visible range.
* TOMS measurements are retrieved at local noon while CAMC values are averaged over night hours.
Figure 12: Correlation between AOD provided by Terra/MODIS and the KV measured by the CAMC, distinguishing among the terrestrial (mixed dust, smoke, sulphate—blue points), marine (pink points) aerosols and the presence of dust events over the Canary islands coming from Africa and collected at ground level (yellow points). The red and green lines indicate the KV and AOD limits respectively for dusty nights.
Figure 13: Correlation between AOD provided by Aqua/MODIS and the KV measured by the CAMC, distinguishing among the terrestrial (mixed dust, smoke, sulphate—blue points), marine (pink points) aerosols and the presence of dust events over the Canary islands coming from Africa and collected at ground level (yellow points). The red and green lines indicate the KV and AOD limits respectively for dusty nights.
For this reason the EP/TOMS database is not useful for characterizing the presence of dust above either the Canarian astronomical observatories (2400 m above mean sea level) or other high mountain sites (Mauna Kea and San Pedro Martir).
We have explored the use of other detectors on board different satellites that operate in bands of astronomical interest (the visible and NIR) and with better spatial resolution than TOMS. The selected parameters were the aerosol index provided by Aura/OMI--with visible and ultraviolet channels and with a spatial resolution from 13 km \(\times\) 24 km to 24 km \(\times\) 48 km--and the aerosol optical depth (AOD) provided by MODIS on board Terra (from 2000) and Aqua (from 2002)--with its 36 spectral bands, from 0.47 to 14.24 \(\mu\)m, including two new channels at 0.405 and 0.550 \(\mu\)m, and a spatial resolution of 10 km \(\times\) 10 km. In order to obtain the best spatial, spectral, radiometric and temporal resolutions, we have decided to work only with Level 2 data, which have the same resolution as the instrument IFOV.
We conclude that the OMI instrument detects aerosol presence with more precision than TOMS, and that it does not detect non-absorbing particles together with high atmospheric extinction values (larger than 0.15 mag/airmass). This coincides with expectations, because non-absorbing aerosols such as sulphates or marine aerosols do not give high extinction values. Most of the points fall at lower extinction values, below the threshold for dusty nights, suggesting the presence of non-absorbing or weakly absorbing (e.g. carbonaceous) aerosols.
In order to obtain the limits for dusty episodes on the AOD scale, we have checked the _calima_ days from the records of the Instituto Nacional de Meteorologia de Canarias Occidental, following the NAAPS, ICOd/DREAM and SKIRON models, and we obtain thresholds of AOD\(>\)0.1 and AI\(>\)0.6 for dusty episodes.
We studied where the _calima_ events and cloud presence fall in the plot of correlation between atmospheric extinction and AI. Dust episodes measured at ground level lie mainly below the threshold for dusty nights on the atmospheric extinction scale (KV\(<\)0.15 mag/airmass), meaning that the presence of _calima_ affects low altitudes and that only in a few cases does it reach the Roque. We also see that all clouds detected through their high reflectivity (greater than 15%) are below the threshold for dusty nights; in only two cases are they above 0.15 mag/airmass, and in those cases they do not correspond to _calima_ events.
Study of the Terra/MODIS data has shown that a great number of points fall below 0.40 units in AOD and 0.15 mag/airmass in KV, and that two tails are evident: the first has high AOD values for low KV, and the second has high AOD and high KV, showing a strong linear correlation between the two parameters. It is important to underline that there are only a few points with low AOD values and high KV (the data are decontaminated of cloud presence). Chlorides and marine aerosols are well identified, normally do not affect the KV, and correspond to AOD \(<\) 0.2; this means that marine and sulphate aerosols are not absorbent, as we expect. Most of the points corresponding to dust (_calima_) events fall below the threshold of 0.15 mag/airmass because they are detected near the surface; only in certain cases can they reach the level of the observatory, carried by wind or convective motions, giving AOD larger than 0.1 units and KV larger than 0.15. These measurements correspond mostly to terrestrial aerosols.
The study of the Aqua data produces results almost identical to those of Terra. In this case there are also values at high AOD and low KV, and values at low AOD and KV. This wide dispersion explains the lack of correlation between the two parameters. The most populated tail falls below the threshold of 0.15 mag/airmass.
Marine aerosols are more clustered at low AOD values (\(<\) 0.20) and low KV (\(<\)0.15), whereas terrestrial aerosols fall in the AOD\(>\)0.2 zone. Dusty (_calima_) days correspond mostly to the presence of terrestrial aerosols near ground level (inside the 10 km \(\times\) 10 km area), so they appear at low KV. Only in some cases do they reach higher altitudes and become detectable in the astronomical observations (large KV); this is the only case that presents a linear correlation between the two parameters. We therefore also need _in situ_ data to distinguish between the two situations. We must explore other clues for the vertical aerosol drainage analysis (winds, humidity, etc.).
At present, the AI and AOD values provided by the NASA satellites are not useful for aerosol site characterization, and _in situ_ data are required to study drainage behaviour, in particular at those astronomical sites with abrupt orography (ORM, Mauna Kea or San Pedro Martir). Spatial resolution of the order of the observatory area will be required in these cases.
Moreover, in order to obtain much better spatial resolution we are now exploring the use of SEVIRI-MSG2 (1.4 km \(\times\) 1.4 km) (December 2005-12) for Europe and Africa and, in the future, ATLID (LIDAR)-EARTHPROBE (with a horizontal sampling interval smaller than 100 m) (2012-15) for global coverage. The CALIPSO satellite will be used for measuring the vertical structure (drainage) and properties of aerosols (the horizontal and vertical spatial resolutions are 5 km and 60 m respectively).
Ground measurements will be complemented by LIDAR data (INTA) of 30 m resolution (NASA MPL-NET-AERONET) and by the IAC airborne particle counter (from Pacific Scientific Instruments) installed at the ORM in February 2007 (with six channels: 0.3, 0.5, 1, 3, 5, 10 \\(\\mu\\)m) and by the INM Multifilter Rotating Shadowband Radiometer (MFRSR) programmed to be installed at the ORM in the near future (consisting of six narrow passbands between 414 nm and 936 nm) that will provide the size, density and vertical distribution of the aerosols.
Tests with AERONET _in situ_ data during daytime could be very interesting to make a comparison with remote sensing satellite data.
## Acknowledgments
We express our deepest thanks to the TOMS, OMI and MODIS groups at the NASA Goddard Space Flight Center for the aerosol index and aerosol optical depth measurements, and to the Carlsberg Meridian Circle of the Isaac Newton Group on La Palma for the extinction coefficient data. Our acknowledgments go to the Main Directorate of Quality and Environmental Evaluation of the Environment Ministry, the Superior Council of Scientific Researches (CSIC) and the National Institute of Meteorology (INM) of the Environment Ministry for the information on the atmospheric pollution produced by airborne aerosols in Spain. This study is part of the site characterization work developed by the Sky Quality Group of the IAC and has been carried out within the framework of the European Project OPTICON and under Proposal FP6 for Site Selection for the European ELT. Many thanks are also due to the anonymous referee, whose comments and suggestions helped us to improve the article.
## Appendix I: Format of satellite data
In this appendix we describe the format of the data for the different satellites used in this work and we indicate the urls from which to retrieve the datasets.
### N7/TOMS (1984-1993)
Level 2 data are available at [http://daac.gsfc.nasa.gov/data/dataset/TOMS/Level_2/N7/](http://daac.gsfc.nasa.gov/data/dataset/TOMS/Level_2/N7/) and were ordered with ftp-push; NASA loaded the data onto our ftp url. These are raw data in hdf5 format that must be viewed with HDFView and afterwards processed with computer programs. Level 3 aerosol data are available at [ftp://toms.gsfc.nasa.gov/pub/nimbus7/data/aerosol](ftp://toms.gsfc.nasa.gov/pub/nimbus7/data/aerosol) and are directly accessible via ftp. These are daily average data with a resolution of 1.25\({}^{\circ}\) in longitude and 1\({}^{\circ}\) in latitude. The data are in ASCII format with 288 bins in longitude, centred on 179.375 W to 179.375 E, each bin with a 1.25\({}^{\circ}\) step. In latitude there are 180 bins centred on 89.5 S to 89.5 N with a step of 1\({}^{\circ}\). The values are written in 3-digit groups, missing data being tagged 999 and the other numbers being multiplied by 10.
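A reader for one latitude row of this ASCII layout could look like the sketch below; the exact handling of file headers and row trailers varies between files and is an assumption here.

```python
import numpy as np

def parse_toms_row(line):
    """Decode one latitude row of packed 3-digit groups into AI values."""
    digits = "".join(line.split())  # drop blanks between 3-digit groups
    vals = np.array([int(digits[i:i + 3]) for i in range(0, len(digits), 3)],
                    dtype=float)
    vals[vals == 999] = np.nan      # 999 tags missing data
    return vals / 10.0              # stored values are the index times 10
```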
### EP/TOMS (1996-2005)
Level 2 data are available at [http://daac.gsfc.nasa.gov/data/dataset/TOMS/Level_2/EP/](http://daac.gsfc.nasa.gov/data/dataset/TOMS/Level_2/EP/). The format and the data processing are the same as for Nimbus-7. Level 2 data are produced with a spatial resolution of 39 km \(\times\) 39 km at nadir. Level 3 aerosol data are immediately available at [ftp://toms.gsfc.nasa.gov/pub/eptoms/data/aerosol](ftp://toms.gsfc.nasa.gov/pub/eptoms/data/aerosol).
It is important to underline that the aerosol monthly average datasets are computed using only positive values (i.e. absorbing aerosol indices) of the aerosol index for each month. Values of zero are used in the averaging whenever the aerosol index is negative. The final monthly average datasets contain aerosol index values greater than or equal to 0.7.
### Aura/OMI (2004-NOW)
Level 2 data with a spatial resolution of 13 km \(\times\) 24 km at nadir are available at [http://daac.gsfc.nasa.gov/data/dataset/OMI/Level2/OMT03/](http://daac.gsfc.nasa.gov/data/dataset/OMI/Level2/OMT03/). They can be ordered via ftp-push in the same way as the others. Information about effective cloud pressure and fraction is available at [http://daac.gsfc.nasa.gov/data/dataset/OMI/Level2/OMLDRRJ4](http://daac.gsfc.nasa.gov/data/dataset/OMI/Level2/OMLDRRJ4).
OMTO3 provides the aerosol index, total column ozone and aerosol optical thickness, as well as ancillary information produced from the TOMS Version 8 algorithm applied to OMI global mode measurements. In the global mode each file contains a single orbit of data covering a width of 2600 km. Compared to TOMS, OMI's smaller field of view results in a larger "sea glint" per unit field of view and a correspondingly larger error in derived ozone under these conditions. The OMTO3 aerosol index is not valid for solar zenith angles greater than 60\({}^{\circ}\). Because the OMI solar zenith angles are typically higher than those of TOMS at the same latitude, the OMI AI becomes invalid at somewhat lower latitudes than TOMS. This may show a cross-track dependence in the OMI AI and is not corrected by the radiance measurement adjustments (error in the AI up to 4%). Compared to TOMS, the OMTO3 aerosol index is 0.5 NVALUE high; users of the aerosol index are advised to make this correction for consistency with the TOMS data record. For users not interested in the detailed information provided in the OMTO3 dataset, several gridded products are being developed. Initially, the DAAC will grid OMTO3 data in a format identical to that used for TOMS (1\({}^{\circ}\)\(\times\) 1.25\({}^{\circ}\) lat/long) and will make it available through the TOMS website. However, to take advantage of the higher spatial resolution of the OMI products, the DAAC intends to produce higher resolution gridded products for all OMI datasets, including OMTO3.
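The recommended consistency correction is a constant offset, as in the one-line sketch below.

```python
def omi_ai_to_toms_scale(ai_omi):
    """Subtract the 0.5 NVALUE bias of OMTO3 AI relative to the TOMS record."""
    return ai_omi - 0.5
```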
### Terra/MODIS (2000-NOW)
Level 2 data are available at [http://daac.gsfc.nasa.gov/data/dataset/MODIS/02_Atmosphere/01_Level_2](http://daac.gsfc.nasa.gov/data/dataset/MODIS/02_Atmosphere/01_Level_2). We can order them with ftp-push. These are raw data in hdf5 format and so we must follow a similar procedure to the one we use with TOMS. Daily Level 2 (MOD04-Terra) aerosol data are produced at the spatial resolution of 10 \\(\\times\\) 10 km at nadir. We can also retrieve data via ftp (they are already available on the web) through the url [ftp://gdps01u.ecs.nasa.gov/MODIS_terra_Atmosphere/MOD04_L2.004](ftp://gdps01u.ecs.nasa.gov/MODIS_terra_Atmosphere/MOD04_L2.004). Level 3 data are available at [http://disc.sci.gsfc.nasa.gov/data/dataset/MODIS/02_Atmosphere/02_Level](http://disc.sci.gsfc.nasa.gov/data/dataset/MODIS/02_Atmosphere/02_Level). These are different atmospheric data, daily, weekly and monthly averaged in a global 1\\({}^{\\circ}\\)\\(\\times\\) 1\\({}^{\\circ}\\) grid. The method to order this dataset is ftp-push: The aerosol information is stored in MOD08_D3, MOD08_E3 and MOD08_M3. Note that in MODIS there is no aerosol index but only the aerosol optical thickness.
### Aqua/MODIS (2002-NOW)
The same as for Terra/MODIS. The urls are: [http://daac.gsfc.nasa.gov/data/dataset/MODIS-Aqua/02_Atmosphere/01_L2](http://daac.gsfc.nasa.gov/data/dataset/MODIS-Aqua/02_Atmosphere/01_L2) (the ftp address [ftp://gdps01u.ecs.nasa.gov/MODIS_Aqua_Atmosphere/MY](ftp://gdps01u.ecs.nasa.gov/MODIS_Aqua_Atmosphere/MY)) for Level 2 data and [http://disc.sci.gsfc.nasa.gov/data/dataset/MODIS-Aqua/](http://disc.sci.gsfc.nasa.gov/data/dataset/MODIS-Aqua/) for the Level 3 data.
## Appendix II: Acronyms

AEMET Agencia Estatal de Meteorologia
AEROCE Atmosphere/Ocean Chemistry Experiment
AERONET Aerosol Robotic NETwork
AI Aerosol Index
AOD Aerosol Optical Depth
AQUA Earth Observing System Post Meridian (PM)
ASCII American Standard Code for Information Interchange
ATLID ATmospheric LIDAR
AURA Earth Observing System Chemistry mission
CALIPSO Cloud-Aerosol Lidar and Infrared Pathfinder Satellite
CAMC Carlsberg Automatic Meridian Circle Telescope
CCN Cloud Condensation Nuclei
CSIC Consejo Superior de Investigaciones Cientificas
ELT Extremely Large Telescopes
ENVISAT ENVIronmental SATellite
EP Earth Probe
ERS European Remote Sensing Satellites
ESA European Space Agency
EUMETSAT EUropean organisation for the exploitation of METeorological SATellites
FP6 Sixth Framework Programme
FTP File Transfer Protocol
GMT Greenwich Mean Time
GOME Global Ozone Monitoring Experiment
GSFC Goddard Space Flight Center
HDF Hierarchical Data Format
HTTP Hyper Text Transfer Protocol
IAC Instituto de Astrofisica de Canarias
IAU International Astronomical Union
ICoD/DREAM Dust Loading model forecast from Insular Coastal Dynamics
IFOV Instantaneous Field of View
INM Instituto Nacional de Meteorologia
INTA Instituto Nacional de Tecnica Aerospacial
IP Internet Protocol
L2 Level 2 data
L3 Level 3 data
KV Atmospheric extinction coefficient in V-band
LIDAR Light Detection And Ranging
MFRSR MultiFilter Rotating Shadowband Radiometer
MML Maritime Mixing Layer
MODIS Moderate Resolution Imaging Spectroradiometer
MPLNET MicroPulse Lidar NETwork
MSG Meteosat Second Generation
MSL Mean Sea Level
N7 Nimbus-7
NAAPS Navy Aerosol Analysis and Prediction System
NASA National Aeronautics and Space Administration
NEODC NERC Earth Observation Data Center
NERC Natural Environment Research Council
NIR Near Infra Red
NRT Near Real Time
NUV Near Ultra Violet
O3 Ozone
OAI Izaña Atmospheric Observatory
OMI Ozone Monitoring Instrument
OMTO3 OMI Total Column Ozone
OPTICON Optical Infrared Coordination Network for Astronomy
ORM Roque de Los Muchachos Observatory
OT Teide Observatory
SCIAMACHY Scanning Imaging Absorption SpectroMeter for Atmospheric CHartographY
SEVIRI Spinning Enhanced Visible and InfraRed Imager
SKIRON Weather Forecasting Model operated by University of Athens
TERRA Earth Observing System Anti Meridian (AM)
TL Troposphere Layer
TOMS Total Ozone Mapping Spectrometer
UK United Kingdom
UV Ultra Violet
## References
* Bertolin (2007) Bertolin, C. 2007, Utilizzo dei dati di Earth Probe per lo studio dell'estinzione atmosferica dovuta ad aerosol, PhD thesis, Univ. of Padova.
* Font-Tullot (1956) Font-Tullot, I. 1956, _Servicio Meteorologico Nacional_ (INM), _Serie A_, No.26.
* Guerrero et al. (1998) Guerrero, M.A., Garcia Lopez, R., Corradi, R.M.L., Jimenez, A., Fuensalida, J.J., Rodriguez-Espinosa, J.M., Alonso, A., Centurion, M. & Prada, F. 1998, New Astronomy Reviews, Special Issue on Site Properties of the Canarian Observatories, C. Munoz-Tunon Ed., Elsevier, 42, 529.
* Herman et al. (1997) Herman, J.R., Bhartia, B.K., Torres, O., Hsu, C., Seftor, C. & Celarier, E. 1997, J. Geophys. Res., 102, 16911.
* Huetz-de Lemps (1969) Huetz-de Lemps, A. 1969, Publ. des Lettres et des Sciences Humaines de Paris (Sorbonne), Ed. Societe d'Edition et d'Enseignement Superieur, Serie Recherche, 54, 15.
* Jimenez et al. (1998) Jimenez, A., Gonzalez Jorge, H. 1998, New Astronomy Reviews, Special Issue on Site Properties of the Canarian Observatories, C. Munoz-Tunon Ed., Elsevier, 42, 521.
* King (1985) King, D.L. 1985, RGO/La Palma Technical note no. 31.
* Munoz-Tunon (2002) Munoz-Tunon, C. 2002, IAU Technical Workshop on Astronomical Site Evaluation in the Visible and Radio Range, J. Vernin, Z. Benkhaldoun & C. Munoz-Tunon Eds., ASP Conference Series, 266, 498.
* Munoz-Tunon et al. (2004) Munoz-Tunon, C., Vernin, J., & Sarazin, M. 2004, Proceedings of SPIE 2nd Backaskog Workshop on Extremely Large Telescopes, Backaskog, Vol.5382, 607.
* Munoz-Tunon et al. (2007) Munoz-Tunon, C., Fuensalida, J.J. & Varela, A.M. 2007, Rev.Mex.AA (_Serie de Conferencias_), 31, 36.
* Romero & Cuevas (2002) Romero, P.M. & Cuevas, E. 2002, Proceedings of _3\\({}^{\\rm a}\\) Asamblea Hispano Portuguese de Geodesia y Geofisica_, February 3-8, Valencia, 1.
* Romero (2003) Romero, P.M. 2003, Proceedings of _1er Encuentro sobre Meteorologia y Atmosfera de Canarias_, November 12-14, Puerto de la Cruz, Tenerife, Centro de Publicaciones, Secretaria General Tecnica, Ministerio de Medio Ambiente, 194.
* Siher et al. (2004) Siher, E.A., Ortolani, S., Sarazin, M.S. & Benkhaldoun, Z. 2004, Proceedings of SPIE Conference on Astronomical Instrumentation and Telescopes, Glasgow, Vol.5489, 138.
* Torres et al. (1998) Torres, O., Bhartia, P.K., Herman, J.R. & Ahmad, Z. 1998, J. Geophys. Res., 103, 17099.
* Torres et al. (2001) Torres C., Cuevas E., Guerra J.C. & Carreno, V. 2001, _V Simposio Nacional de Prediccion del INM_, November 20-23, Madrid, 2001.
* Torres et al. (2003) Torres, C., Cuevas, E. & Guerra, J.C. 2003, Proceedings of _1er Encuentro sobre Meteorologia y Atmosfera de Canarias_, November 12-14, Puerto de la Cruz, Tenerife, Centro de Publicaciones, Secretaria General Tecnica, Ministerio de Medio Ambiente, 74.
* Varela et al. (2007) Varela, A.M., Bertolin, C., Munoz-Tunon, C., Fuensalida, J.J. & Ortolani, S. 2007, Proceedings of SPIE Conference on Remote Sensing of Clouds and the Atmosphere XII, Florence, Vol. 6745, 674508-1.
* Varela et al. (2004a) Varela, A.M., Fuensalida, J.J., Munoz-Tunon, C., Rodriguez Espinosa, J.M., Garcia-Lorenzo, B. & Cuevas, E. 2004a, Proceedings of SPIE Conference on Astronomical Instrumentation and Telescopes, Glasgow, Vol. 5489, 245.
* Varela et al. (2004b) Varela, A.M., Fuensalida, J.J., Munoz-Tunon, C., Rodriguez Espinosa, J.M., Garcia-Lorenzo, B. & Cuevas, E. 2004b, Proceedings of SPIE Conference on Remote Sensing of Clouds and the Atmosphere IX, Gran Canaria, Vol. 5571, 105.
* Varela et al. (2006) Varela, A.M., Munoz-Tunon, C., Garcia-Lorenzo, B. & Fuensalida, J.J. 2006, Proceedings of SPIE Conference on Ground Based and Airborne Telescopes, Orlando, Vol. 6267, 62671X-1.
* Varela et al. (2002) Varela, A.M., Munoz-Tunon, C., Gurtubai, A. & Saviron, C. 2002, IAU Technical Workshop on Astronomical Site Evaluation in the Visible and Radio Range, ASP Conference Series, J. Vernin, Z. Benkhaldoun, C. Munoz-Tunon Eds., 266, 454.
* Vernin et al. (2002) Vernin, J., Benkhaldoun, Z. & Munoz-Tunon, C., Eds. 2002, IAU Technical Workshop on Astronomical Site Evaluation in the Visible and Radio Range, ASP Conference Series, Vol. 266.
**Impacts of Typhoons on Kuroshio Large Meander: Observation Evidences**
Liang SUN\({}^{1,2}\), Yuan-Jian Yang\({}^{1}\), and Yun-Fei Fu\({}^{1}\)

\({}^{1}\) Laboratory of Satellite Remote Sensing and Climate Environment, School of Earth and Space Sciences, University of Science and Technology of China, Hefei, Anhui, 230026, China

\({}^{2}\) LASG, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing 100029, China

Manuscript to be submitted to Atmos. Ocean. Sci. Letts.

Corresponding author: Liang SUN, School of Earth and Space Sciences, University of Science and Technology of China, Hefei, Anhui 230026, China. Phone: 86-551-3600175; Fax: 86-551-3606459. Email: [email protected]; [email protected]
## 1. Introduction
When the Kuroshio flows through the Shikoku Basin, the sea south of Japan, it may take one of three different paths: the nearshore non-large-meander path, the offshore non-large-meander path and the large-meander path (Kawabe, 1985, 1995; Akitomo et al., 1991; Miyazawa et al., 2004, 2008). The shift from one path to another causes significant oceanic and, correspondingly, climatic changes, owing to the strong eddy-current interactions and air-sea interactions at the sea surface along the Kuroshio path (Qiu, 2002, 2003; Taguchi et al., 2005). Besides, the variations in both the shape and position of the current also have a large influence on fisheries, ship navigation, marine resources, etc. (Kawabe, 1985; Taguchi et al., 2005).
Though the Kuroshio is basically driven by the wind stress curl over the North Pacific, the mechanisms of the Kuroshio path meander are mainly attributed to ocean dynamics, including local topography (Taguchi et al., 2005), the upstream volume transport at Tokara Strait (Akitomo et al., 1991, 1996; Kawabe, 1995; Nakamura et al., 2006) and low potential-vorticity (PV) water in the Shikoku Basin (Qiu and Miao, 2000; Taguchi et al., 2005). To understand the large meander of the Kuroshio path, the process is divided into two parts: the trigger process and the formation process (Kawabe, 1995), which have been simulated by ocean models. First, there is a trigger meander at Tokara Strait, which can be either a large inflow (Akitomo et al., 1991; Nakamura et al., 2006), a small meander of the path (Kawabe, 1995; Miyazawa et al., 2008), low PV water (Qiu and Miao, 2000), or high PV water (Akitomo and Kurogi, 2001). Then the initial meander is advected downstream to the Shikoku Basin, where the large meander of the Kuroshio path eventually occurs.
Recently, the Kuroshio took the large meander path from summer 2004 to summer 2005, after a non-large-meander period of 13 years [Usui et al., 2008]. The trigger and formation of the large meander were analyzed with the JCOPE (the Japan Coastal Ocean Predictability Experiment) ocean forecast system [Miyazawa et al., 2008], a high-resolution ocean model [Kagimoto et al., 2008] with assimilated observation data including the sea surface height anomaly (SSHA), sea surface temperature (SST), and temperature/salinity profiles. Owing to the high performance of the forecast system, it can simulate the real path meander south of Japan in some sense.
However, differences were noted between the simulated Kuroshio path and the observations. For example, the model skill is worse than both the persistence and the climatology during the period from 21 May to 21 June [Miyazawa et al., 2008], and the simulated meander in July 2004 was notably smaller than observed. Moreover, neither numerical simulation has investigated the meander formation in August 2004, when the Kuroshio path developed a drastic meander, even larger than that in July 2004.
To understand what happened in August 2004, a trigger force beyond oceanic processes is considered. It is noted that there were many typhoons in 2004 that passed over the East China Sea and Tokara Strait. Near Tokara Strait, two typhoons blew across the water in June 2004 (Fig. 1a). Besides, from July 29th to August 4th, 2004, two consecutive typhoons (Namtheun and Malou) passed over the sea south of Japan. Meanwhile, there was a cyclonic eddy right near the typhoon track. The vigorous air-sea interactions induced vertical mixing and strong upwelling, which eventually changed the Kuroshio path. In this research, the variations of the Kuroshio axis are examined before and after typhoons Namtheun and Malou, and the impacts of typhoons on the meander of the Kuroshio are discussed.
## 2. Data

Sea surface wind vectors and stress, with a spatial resolution of 0.25\({}^{\circ}\)\(\times\)0.25\({}^{\circ}\), were obtained from the daily QuikSCAT (Quick Scatterometer) product provided by Remote Sensing Systems ([http://www.remss.com/](http://www.remss.com/)). The wind stress was calculated with the bulk formula [Garratt, 1977], and the wind-driven upwelling was calculated using the Ekman pumping formula [Price et al., 1994]. Altimeter data were derived from multiple sensors: Jason-1, TOPEX/POSEIDON, GFO, ERS-2 and Envisat. The data were produced and distributed by AVISO (Archiving, Validation and Interpretation of Satellite Oceanographic data). Near-real-time merged (TOPEX/POSEIDON or Jason-1 + ERS-1/2 or Envisat) sea surface height anomaly (SSHA) data, on a high-resolution 1/3\({}^{\circ}\)\(\times\)1/3\({}^{\circ}\) Mercator grid, are available at www.aviso.oceanobs.com. The Kuroshio axis data are from the weekly Quick Bulletin of Ocean Conditions provided by the Hydrographic and Oceanographic Department of the Japan Coast Guard (JCG). Vertical profile data for temperature at the cruise stations on 07/16 and 08/10 were from the Japan Oceanographic Data Center (JODC), and the vertical profiles of float 2900320 from 07/16 to 08/13 were extracted from the real-time quality-controlled Argo database of the China Argo Real-time Data Center.
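For reference, the bulk wind stress and the Ekman pumping velocity used above can be written in their standard forms (consistent with Garratt, 1977 and Price et al., 1994; the symbols are ours):

\[\vec{\tau}=\rho_{a}C_{D}\left|\vec{U}_{10}\right|\vec{U}_{10},\qquad w_{E}=\frac{1}{\rho_{w}}\,\hat{z}\cdot\nabla\times\left(\frac{\vec{\tau}}{f}\right),\]

where \(\rho_{a}\) and \(\rho_{w}\) are the air and seawater densities, \(C_{D}\) the drag coefficient, \(\vec{U}_{10}\) the 10-m wind vector, and \(f\) the Coriolis parameter.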
## 3. Results
### 3.1 Role of cyclonic eddy in Kuroshio path
Figure 1(b) shows the SSHA and the Kuroshio axis from June 23, 2004 to July 20, 2004. There was a large cyclonic eddy in the sea south of Japan, with the recirculation gyre center located at about 32\({}^{\circ}\)N. The Kuroshio axis therefore had a significant meander around the recirculation gyre, where the commonly east-north-ward slanting axis turned into a south-north-ward straight axis at 135.5\({}^{\circ}\)E. As there was no significant anticyclonic eddy, it seems that the northern cyclonic eddy occupied the nearshore region and pushed the Kuroshio axis offshore, producing a rightward protrusion of the Kuroshio axis. During this time, the cyclonic eddy was advected eastward downstream with the Kuroshio, and an anticyclonic eddy emerged south of the Kuroshio path. The
anticyclonic eddy then pushed the Kuroshio close to the sea east of Kyushu, which made the Kuroshio axis straightly along 32\\({}^{\\circ}\\)N. Meanwhile, the straight Kuroshio axis moved eastward with the advection of cyclonic eddy along 32\\({}^{\\circ}\\)N from 135.5\\({}^{\\circ}\\)E to 136.9\\({}^{\\circ}\\)E, about 80 km away from its original position.
The above results prove that the large cyclonic eddy plays an important role in the Kuroshio path. Similar results were previously obtained by model simulations (Akitomo and Kurogi, 2001) and then by observations (Ebuchi and Hanawa, 2003). A recent model simulation also noted that the cyclonic eddy contributes to the large meander (Miyazawa et al., 2008). Here, the observations confirm the conclusion that a large cyclonic eddy plays an important role in the meander of the Kuroshio path.
### Impact of typhoons on large meanders
From 29 July 2004, two typhoons consecutively passed over the sea south of Japan (Fig. 1a). They stirred the surface sea water and upwelled deep sea water. Correspondingly, there was strong upwelling near 32\({}^{\circ}\)N, 137\({}^{\circ}\)E due to the strong winds, where the maximum upwelling velocity was over 12\(\times\)10\({}^{-4}\) m/s, more than 3 times that in the model simulations (Miyazawa et al., 2008). Meanwhile, there was a large, weakly downwelling region around the upwelling area. Comparing the position of the cyclonic eddy with the typhoon track, it is clear that the strong upwelling occurred right around the cyclonic eddy region. This can be seen from Fig. 2b, where the vertical profiles of temperature at the cruise stations (in the cyclonic eddy) cooled significantly after the typhoons' passage, while the temperature of the water beneath 100 m at the Argo float (in the anticyclonic eddy) became a little warmer than before due to downwelling of surface warm water. However, there was also vertical mixing due to the upwelling and downwelling processes, which cooled the surface and warmed the water beneath 50 m (solid curves in Fig. 2b). Furthermore, the strong upwelling also produced strong sea surface cooling and, accordingly, a phytoplankton bloom (not shown).
In consequence, the cyclonic eddy was enhanced by the typhoon-induced upwelling and played the major role in the formation of the large meander (Fig. 2c): the Kuroshio axis took a larger meander shortly after the typhoons' passage. Compared to the previous state on 23 July 2004, the cyclonic eddy had enlarged and intensified due to the strong upwelling by 6 August 2004, shortly after the typhoons' passage. It pushed the Kuroshio path to the south and immediately caused a rightward shift of the Kuroshio path. The Kuroshio path consequently bent around the cyclonic eddy. The bending continued until the Kuroshio path wrapped entirely around the enhanced recirculation gyre on 13 August 2004, producing the largest meander of the Kuroshio path, of more than 100 km, on 27 August 2004. The enhanced recirculation gyre was so strong that this large meander persisted until October 2004, when another extremely large typhoon, Ma-on, passed over the same region again (not shown). The large cyclonic eddy was then enhanced again, which made the large meander last for about one year, until the cyclonic eddy declined due to dissipation.
To summarize, typhoons passing over cyclonic eddies drive strong air-sea interaction and successively enhance the cyclonic eddies through upwelling. The enhanced cyclonic eddies bend the Kuroshio path, eventually leading to large meander formation. This implies that typhoons might have great impacts on the meander of the Kuroshio path via cyclonic eddies.
## 4. Discussion
As mentioned above, the model skill is worst during the period from 21 May to 21 June [Miyazawa et al., 2008]. Our investigation points out that the two typhoons in June 2004, which caused the Kuroshio axis to meander, might explain the poor model skill. According to the wind stresses of typhoons Conson and Dianmu, strong upwelling occurred at Tokara Strait and the sea south of Shikoku (Fig. 3a). The cyclonic eddy was enhanced as a result, which consequently changed the Kuroshio axis (Fig. 3b). As the Kuroshio axis changed due to the typhoons, this might also be one of the trigger effects of large meander formation unresolved by the oceanic model. The unexpected typhoons made the model skill worse than before.
Although typhoons might trigger initial small meanders, we should point out that there had been an initial meander before the typhoons' passages, and that its amplitude was quite large, as can be seen from Fig. 3b. Investigations of the long-range initial small meander have focused on anticyclonic eddies (Akitomo, 1996; Qiu and Miao, 2000; Ichikawa, 2001; Endoh and Hibiya, 2001). For the large meander in 2004, these investigations traced the initial small meander back from October 2003 to April 2004 (Usui et al., 2008; Miyazawa et al., 2008). It is concluded that strong anticyclonic eddies lead to the amplification of the trigger meander, and that the initial meander must be large enough to trigger the large meander (Usui et al., 2008; Miyazawa et al., 2008). It seems that the initial small meander, together with the typhoon-induced upwelling, triggered the large meander in July 2004.
Moreover, not all the typhoons played crucial roles in the large meander. For example, typhoons Conson and Malou played minor roles in this case, as can be seen from both the upwelling and the Kuroshio axis (Fig. 2 and Fig. 3). Only when typhoons interact strongly with eddies do their impacts become significant. This makes forecasting more difficult, as the prediction of typhoon tracks is not yet well solved.
The above result clearly shows the impact of typhoons on the formation of the Kuroshio meander. Other factors are also necessary for the formation of the Kuroshio meander. The first is the cyclonic and/or anticyclonic eddy in the sea south of Japan (Qiu and Miao, 2000; Akitomo and Kurogi, 2001; Endoh and Hibiya, 2001; Ebuchi and Hanawa, 2003; Waseda et al., 2005). In contrast to some previous studies (Waseda et al., 2005), in our study the cyclonic eddy, more than the anticyclonic eddy, played the key role according to the observations, which agrees with other numerical simulations (Miyazawa et al., 2008). Secondly, the Izu Ridge, which produces a cyclonic torque over the western slope of the ridge when the flow impinges upon it, is important for blocking the Kuroshio large meander from propagating eastward across the ridge (Mitsudera et al., 2006). Third, the Kuroshio volume transport at Tokara Strait was notably larger in April and May 2004 according to observations [Andres et al., 2008], which might have been the original perturbation in April 2004 [Miyazawa et al., 2008].
In the North Atlantic, there are the Gulf Stream system and hurricanes [Sriver and Huber, 2007], which are analogous to the Kuroshio system and typhoons in the North Pacific. The variations of the Gulf Stream have also been attributed mainly to oceanic processes [Dijkstra, 2005], although some works have discussed the effects of strong atmospheric disturbances on the Gulf Stream or Kuroshio [Xue and Bane, 1997; Miyazawa and Minato, 2000; Wu et al., 2008]. Our results imply that hurricanes might also be important for the variations of the Gulf Stream, just as typhoons are for the Kuroshio meanders.
In a word, typhoons can trigger the formation of the Kuroshio meander via typhoon-eddy-Kuroshio interactions, which is an alternative mechanism of Kuroshio meander. It is argued that not only oceanic processes, but also typhoons can have significant influence on the Kuroshio system. This implies that to accurately predict the Kuroshio path, the large synoptic processes, especially tropic cyclones, should also be considered.
## 5. Conclusion
In summary, the impacts of typhoons on the formation of the Kuroshio large meander in summer 2004 were investigated via observational data. We first confirmed the former conclusion that the cyclonic eddy contributes to the large meander. Moreover, using the observational data, it was found that the cyclonic eddy, in combination with the typhoons, was the major factor in the Kuroshio large meander formation in summer 2004. From 29 July to 4 August, the typhoons stirred the ocean and upwelled deep sea water, which enhanced the pre-existing cyclonic eddy. The enhanced cyclonic eddy pushed the Kuroshio path to the south and immediately caused a rightward shift of the Kuroshio path of more than 100 km. This large meander of the Kuroshio path persisted for about 1 year.
The impact of typhoons on the initial small meander was also discussed. It was found that the unexpected typhoons in June 2004 degraded the model skill and also contributed to the initial meander at Tokara Strait. In this case, the pre-existing eddy was indispensable, as the most vigorous air-sea interaction due to typhoons always occurred near the pre-existing eddies.
The results suggest an alternative meander mechanism of the Kuroshio path via typhoon-eddy-Kuroshio interactions. As only oceanic processes (inflow velocity, mesoscale eddies and local topography) have been proposed as triggers of the Kuroshio meander, it is argued that typhoons, acting together with cyclonic eddies, might also be crucial in the Kuroshio path meander. This implies that to accurately predict the Kuroshio path, large synoptic processes, especially typhoons, should at least be taken into account. This is likely to provide a more comprehensive understanding of the dynamics of western boundary flows like the Kuroshio and the Gulf Stream, and will be useful in numerical models, especially in eddy-resolving models.
**Acknowledgements:**
This work is supported by the National Basic Research Program of China (No. 2007CB816004), the National Foundation of Natural Science (Nos. 40705027, 40730950 and 40675027), and the Program for New Century Excellent Talents in University. We thank STI for providing typhoon track data, Remote Sensing Systems for QuikScat wind-vector data, AVISO for SSHA data, JCG for the Kuroshio axis data, JODC for cruise station data, and China Argo Real-time Data center for float profiles.
**References:**
Andres, M., M. Wimbush, J.-H. Park, K.-I. Chang, B.-H. Lim, D. R. Watts, H. Ichikawa, and W. J. Teague (2008), Observations of Kuroshio flow variations in the East China Sea, J. Geophys. Res., 113, C05013, doi:10.1029/2007JC004200.
Akitomo, K., T. Awaji and N. Imasato (1991), Kuroshio path variation south of Japan. 1. Barotropic inflow-outflow model. J. Geophys. Res., 96, 2549-2560.
* Akitomo, K., and M. Kurogi (2001), Path transition of the Kuroshio due to mesoscale eddies: A two-layer, wind-driven experiment. J. Oceanogr., 57, 735-741.
* Dijkstra, H. A. (2005), Nonlinear Physical Oceanography: A Dynamical Systems Approach to the Large Scale Ocean Circulation and El Nino, Springer, New York.
* Ebuchi, N. and K. Hanawa (2003), Influence of Mesoscale Eddies on Variations of the Kuroshio Path South of Japan, J. Oceanogr., 59, 25-36.
* Emanuel, K. A. (1999), Thermodynamic control of hurricane intensity, Nature, 401, 665-669.
* Endoh, T. and T. Hibiya (2001), Numerical simulation of the transient response of the Kuroshio leading to the large meander formation south of Japan. J. Geophys. Res., 106, 26,833-26,850.
* Garratt, J. R. (1977), Review of drag coefficients over oceans and continents, Mon. Wea. Rev., 105, 915-929.
* Ichikawa, K. (2001), Variation of the Kuroshio in the Tokara Strait induced by meso-scale eddies. J. Oceanogr., 57, 55-68.
* Kagimoto, T., Y. Miyazawa, X. Guo, and H. Kawajiri (2008), High resolution Kuroshio forecast system - Description and its applications. In High Resolution Numerical Modeling of the Atmosphere and Ocean, W. Ohfuchi and K. Hamilton (eds), Springer, New York, 209-234.
* Kawabe, M. (1985), Sea level variations at the Izu islands and typical stable paths of the Kuroshio. J. Oceanogr. Soc. Japan, 41, 307-326.
* Kawabe, M. (1995), Variations of current path, velocity, and volume transport of the Kuroshio in relation with the large meander. J. Phys. Oceanogr., 25, 3103-3117.
* Kawabe, M. (1996), Model study of flow conditions causing the large meander of the Kuroshio south of Japan. J. Phys. Oceanogr., 26, 2449-2461.
* Mitsudera, H., B. Taguchi, T. Waseda, and Y. Yoshikawa (2006), Blocking of the Kuroshio Large Meander by Baroclinic Interaction with the Izu Ridge. J. Phys. Oceanogr., 36, 2042-2059.
* Miyazawa, Y., and S. Minato (2000), POM and Two-Way Nesting POM Study of Kuroshio Damping Phenomenon Caused by a Strong Wind, J. Oceanogr., 56, 275-294.
* Miyazawa, Y., X. Guo, and T. Yamagata (2004), Roles of meso-scale eddies in the Kuroshio paths, J. Phys. Oceanogr., 34, 2203-2222.
* Miyazawa, Y., S. Yamane, X. Guo, and T. Yamagata (2005), Ensemble forecast of the Kuroshio meandering, J. Geophys. Res., 110, C10026, doi:10.1029/2004JC002426.
* Miyazawa, Y., T. Kagimoto, X. Guo, and H. Sakuma (2008), The Kuroshio large meander formation in 2004 analyzed by an eddy-resolving ocean forecast system, J. Geophys. Res., 113, C10015, doi:10.1029/2007JC004226.
* Nakamura, H., T. Yamashiro, A. Nishina, and H. Ichikawa (2006), Time-frequency variability of Kuroshio meanders in Tokara Strait, Geophys. Res. Lett., 33, L21605, doi:10.1029/2006GL027516.
* Price, J. F., T. B. Sanford, and G. Z. Forristall (1994), Forced stage response to a moving hurricane, J. Phys. Oceanogr., 24, 233-260.
* Qiu, B. and W. Miao (2000), Kuroshio path variations south of Japan: Bimodality as a self-sustained oscillation. J. Phys. Oceanogr., 30, 2124-2137.
* Qiu, B. (2002), Large-scale variability in the midlatitude subtropical and subpolar North Pacific Ocean: Observations and causes. J. Phys. Oceanogr., 32, 353-375.
* Qiu, B. (2003), Kuroshio Extension variability and forcing of the Pacific decadal oscillations: Responses and potential feedback. J. Phys. Oceanogr., 33, 2465-2482.
* Sriver, R. L., and M. Huber (2007), Observational evidence for an ocean heat pump induced by tropical cyclones. Nature, 447, 577-580, doi:10.1038/nature05785.
* Taguchi, B., S. P. Xie, H. Mitsudera, and A. Kubokawa (2005), Response of the Kuroshio Extension to Rossby Waves Associated with the 1970s Climate Regime Shift in a High-Resolution Ocean Model. J. Climate, 18, 2979-2995.
* Usui, N., H. Tsujino, Y. Fujii, and M. Kamachi (2008), Generation of a trigger meander for the 2004 Kuroshio large meander, J. Geophys. Res., 113, C01012, doi:10.1029/2007JC004266.
* Waseda, T., H. Mitsudera, B. Taguchi and K. Kutsuwada (2005), Significance of High-Frequency Wind Forcing in Modelling the Kuroshio, J. Oceanogr., 61, 539-548.
* Wu, C.-R., Y.-L. Chang, L.-Y. Oey, C.-W. J. Chang, and Y.-C. Hsin (2008), Air-sea interaction between tropical cyclone Nari and Kuroshio, Geophys. Res. Lett., 35, L12605, doi:10.1029/2008GL033942.
* Xue, H. and J. M. Bane, Jr. (1997), A numerical investigation of the Gulf Stream and its meanders in response to cold air outbreaks. J. Phys. Oceanogr., 27, 2606-2629.
Figure 1. (a) The study region and the typhoons' tracks, 08/01 representing August 1st, etc. (b) The SSHA and the Kuroshio axis (bold curve). The cyclonic eddy played a crucial role in the Kuroshio path.
Figure 2. (a) The wind stress (arrows) and upwelling (filled) during the typhoons' passages from 07/29 to 08/04; the notation 06/30-07/02 represents the average value from 06/30 to 07/02, etc. (b) The SSHA (shaded) and the locations of the cruise stations and Argo float (left); the vertical profiles of temperature at the cruise stations (on 07/16 and 08/10) and the Argo float (07/16 to 08/13) before and after the typhoons (right). (c) The SSHA and the Kuroshio axis (bold curve) before and after typhoons Namtheun and Malou from 07/21 to 08/27; the notation 07/21-23 represents the average value from 07/21 to 07/23, etc.
Figure 3. (a) The wind stress (arrows) and upwelling (filled) during the typhoons' passages from 05/11 to 05/21. (b) The SSHA and the Kuroshio axis (bold curve) before and after typhoons Namtheun and Malou from 05/26 to 06/28; the notation 05/26-28 represents the average value from 05/26 to 05/28, etc.
The formation of the Kuroshio large meander in summer 2004 was investigated using cruise data, Argo profile data, and satellite remote sensing data. We validated that a cyclonic eddy contributed to the large meander. In addition, the impacts of typhoons on Kuroshio meanders were studied. From 29 July to 4 August, the typhoons stirred the ocean and upwelled deep water, which enhanced the existing cyclonic eddy and promptly drove a drastic meander of the Kuroshio. Moreover, the unexpected typhoons in June 2004 also contributed to the initial meander at Tokara Strait.
The result suggests an alternative meander mechanism for the Kuroshio path via typhoon-eddy-Kuroshio interactions. It is argued that typhoons, accompanied by cyclonic eddies, might play crucial roles in meanders of the Kuroshio. This provides a more comprehensive understanding of the dynamics of western boundary currents like the Kuroshio and the Gulf Stream, and will be useful for eddy-resolving models.
arxiv-format/0811_0613v3.md | Self-Calibration of Gravitational Shear-Galaxy Intrinsic Ellipticity Correlation in Weak Lensing Surveys
Pengjie Zhang
Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Nandan Road 80, Shanghai, 200030, China [email protected]
## 1. Introduction
Weak gravitational lensing is one of the most powerful probes of the dark universe (Refregier, 2003; Albrecht et al., 2006; Munshi et al., 2008; Hoekstra & Jain, 2008). It is rich in physics and contains tremendous information on dark matter, dark energy and the nature of gravity at large scales. Its modeling is relatively clean. At multipoles \(\ell<2000\), gravity is the dominant force shaping the weak lensing power spectrum, while complicated gas physics only affects the lensing power spectrum at below the 1% level (White, 2004; Zhan & Knox, 2004; Jing et al., 2006; Rudd et al., 2008). This makes precision modeling of weak lensing feasible, through high precision simulations. On the other hand, weak lensing has been measured with high significance. The most sophisticated method so far is to measure the cosmic shear, the lensing-induced galaxy shape distortion. After the first detections in the year 2000 (Bacon et al., 2000; Kaiser et al., 2000; Van Waerbeke et al., 2000; Wittman et al., 2000), data quality has improved dramatically (e.g. Fu et al., 2008).
However, it is still challenging to perform precision lensing measurements. A potentially severe problem is the non-random galaxy intrinsic ellipticity (intrinsic alignment). Its existence is supported by many lines of evidence. The angular momentum of dark matter halos is influenced by the tidal force of the large scale structure. More importantly, mass accretion proceeds preferentially along the filaments (e.g. Zhang et al., 2009). This causes the dark matter halo ellipticities (and also halo orientations) to be correlated over tens of Mpc (Croft & Metzler, 2000; Heavens et al., 2000; Lee & Pen, 2000; Catelan et al., 2001; Crittenden et al., 2001; Jing, 2002; Zhang et al., 2009). Although galaxies are not perfectly aligned with the halo angular momentum vector, with the halo shape, or with each other (Heymans et al., 2004; Yang et al., 2006; Kang et al., 2007; Okumura et al., 2008; Wang et al., 2008), the resulting correlations in galaxy spin (Pen et al., 2000) and ellipticity (Brown et al., 2002; Mackey et al., 2002; Heymans et al., 2004; Hirata et al., 2004; Lee & Pen, 2007; Okumura et al., 2008; Schneider & Bridle, 2009) are still measurable and contaminate the cosmic shear measurement, especially for elliptical galaxies.
The intrinsic alignment biases the cosmic shear measurement. It introduces an intrinsic ellipticity-intrinsic ellipticity correlation (the II correlation). Since this correlation only exists at small physical separation and hence small line-of-sight separation, it can be effectively eliminated by the lensing tomography technique (King & Schneider, 2002, 2003; Heymans & Heavens, 2003), with moderate loss of cosmological information (Takada & White, 2004).
However, as pointed out by Hirata & Seljak (2004) and then confirmed by observations (Mandelbaum et al., 2006; Hirata et al., 2007; Okumura & Jing, 2009), the galaxy intrinsic ellipticity is also correlated with the large scale structure and thus induces the gravitational (cosmic) shear-intrinsic ellipticity correlation (the GI correlation). This contamination could bias the cosmic shear measurement at the 10% level and dark energy constraints at a more significant level (Bridle & King, 2007). For this reason, there are intensive efforts to correct the GI contamination (e.g. Hirata & Seljak, 2004; Heymans et al., 2006; Joachimi & Schneider, 2008, 2009; Okumura & Jing, 2009; Joachimi & Bridle, 2009; Shi et al., 2010).
This paper proposes a new method to alleviate this problem. It is known that weak lensing surveys contain information other than the ellipticity-ellipticity correlation (e.g. Hu & Jain (2004); Bernstein (2009)). This information not only helps improve cosmological constraints, but also helps reduce errors in lensing measurements, such as photometric redshift errors (Schneider et al., 2006; Zhang et al., 2009). We show that this extra information allows for a promising self-calibration of the GI contamination. The self-calibration technique we propose relies neither on assumptions about the intrinsic ellipticity nor on expensive spectroscopic redshift measurements.1 It is thus applicable to ongoing or proposed lensing surveys like CFHTLS, DES, Euclid, LSST and SNAP/JDEM. Under simplified conditions, we estimate the performance of this technique and its robustness against various sources of error. We find that it has the potential to reduce the GI contamination significantly.
Footnote 1: We still need sufficiently accurate photo-z, which controls the I-g measurement error (§3.1) and the accuracy of the scaling relation (§3.3). Since photo-z algorithms rely on calibration against spectroscopic samples (not necessarily of the same survey), in this sense our self-calibration does rely on spectroscopic redshift measurements.
The paper is organized as follows. In §2, we explain the basic idea of the self-calibration technique. We investigate the self-calibration performance in §3, discuss extra sources of error in §4 and summarize in §5. We leave some technical details to the appendices §A, B & C.
## 2. A Simplified Version of the GI Self-Calibration
In this section, we present the basic idea of the self-calibration technique. To highlight the key ingredients of this technique, we adopt a simplified picture and neglect many complexities until §3 & §4.
### The information budget in weak lensing surveys
In lensing surveys, we have several pieces of information (refer to Bernstein (2009) for a comprehensive review). What is relevant for the self-calibration technique is the shape, angular position and photometric redshift of each galaxy. Only galaxies sufficiently large and bright are suitable for cosmic shear measurement. To avoid possible sample bias, we will restrict the discussion to this sub-sample. We split galaxies into a set of photo-z bins according to their photo-z \(z^{P}\). The \(i\)-th bin has \(\bar{z}_{i}-\Delta z_{i}/2\leq z^{P}\leq\bar{z}_{i}+\Delta z_{i}/2\). Our convention is that, if \(i<j\), then \(\bar{z}_{i}<\bar{z}_{j}\). \(\bar{n}_{i}^{P}(z^{P})\) and \(\bar{n}_{i}(z)\) are the mean galaxy distributions over the \(i\)-th redshift bin, as functions of photo-z \(z^{P}\) and true redshift \(z\) respectively. The two are related by the photo-z probability distribution function (PDF) \(p(z|z^{P})\). Intrinsic fluctuations in the 3D galaxy distribution, \(\delta_{g}\), cause fluctuations in the galaxy surface density of a photo-z bin, \(\delta_{\Sigma}\). This is one piece of information in weak lensing surveys.
The galaxy shape, expressed in terms of the ellipticity, measures the cosmic shear \(\gamma_{1,2}\) induced by gravitational lensing. For each galaxy, this signal is overwhelmed by the galaxy intrinsic ellipticity. If the galaxy intrinsic ellipticity is randomly distributed, it only causes shot noise in the cosmic shear measurement, which is straightforward to correct. For this reason, we will not explicitly show this term in relevant equations, although we do take it into account in the error estimation. However, gravitational tidal force induces correlations in ellipticities of close galaxy pairs. Consequently, the measured shear \(\gamma_{1,2}^{s}=\gamma_{1,2}+I_{1,2}\), where \(I\) denotes the correlated part of the galaxy intrinsic ellipticity and \(1,2\) denote the two components of the cosmic shear and the intrinsic alignment. \(\gamma_{1,2}\) describe the shape distortion along the x-y axis and the direction of \(45^{\circ}\) away, respectively. \(\gamma\) is equivalent to the lensing convergence \(\kappa\) (in Fourier space, this relation is local). Thus we will work on \(\kappa\) instead of \(\gamma_{1,2}\). From the measured \(\gamma^{s}\), we obtain \(\kappa^{s}=\kappa+I\). Here, \(I\) is the E mode of \(I_{1,2}\), analogous to \(\kappa\), which is the E mode of \(\gamma_{1,2}\) (Stebbins, 1996; Crittenden et al., 2002). Although cosmic shear, to a good approximation, does not have a B-mode, the intrinsic alignment can have a non-vanishing B-mode (e.g. Hirata & Seljak, 2004; Schneider & Bridle, 2009). This piece of information is useful to diagnose and hence calibrate the intrinsic alignment. However, in the current paper we will focus on the E-mode.
\\(\\kappa\\) is the projection of the matter over-density along the line of sight. For a flat universe and under the Born approximation, the lensing convergence \\(\\kappa\\) of a galaxy (source) at redshift \\(z_{G}\\) and direction \\(\\hat{\\theta}\\)(Refregier, 2003) is
\\[\\kappa(\\hat{\\theta})=\\int_{0}^{\\chi_{G}}\\delta_{m}(\\chi_{L},\\hat{\\theta})W_{L} (z_{L},z_{G})d\\chi_{L}. \\tag{1}\\]
Here, \\(\\hat{\\theta}\\) is the direction on the sky. \\(\\delta_{m}(\\chi_{L},\\hat{\\theta})\\) is the matter overdensity (lens) at direction \\(\\hat{\\theta}\\) and comoving angular distance \\(\\chi_{L}\\equiv\\chi(z_{L})\\) to redshift \\(z_{L}\\). \\(\\chi_{G}\\equiv\\chi(z_{G})\\) is the comoving angular diameter distance to the source. Both \\(\\chi_{L}\\) and \\(\\chi_{G}\\) are in unit of \\(c/H_{0}\\), where \\(H_{0}\\) is the present day Hubble constant. The lensing kernel
\\[W_{L}(z_{L},z_{G})=\\frac{3}{2}\\Omega_{m}(1+z_{L})\\chi_{L}\\left(1-\\frac{\\chi_{L} }{\\chi_{G}}\\right). \\tag{2}\\]
when \\(z_{L}<z_{G}\\) and is zero otherwise. \\(\\Omega_{m}\\) is the present day matter density in unit of the critical density. The lensing power spectrum is given by the following Limber equation,
\\[C^{GG}_{ij}(\\ell)=\\frac{\\pi}{\\ell}\\int_{0}^{\\infty}\\Delta_{m}^{2}(k=\\frac{\\ell }{\\chi_{L}},z_{L})W_{i}(z_{L})W_{j}(z_{L})\\chi_{L}d\\chi_{L}. \\tag{3}\\]
Here, \\(W_{i}\\) is the lensing kernel \\(W_{L}\\) averaged over the \\(i\\)-th redshift bin, defined by Eq. A2.
For two-point statistics, weak lensing surveys thus provide three sets of correlations. Throughout this paper, we will work in the Fourier space (multipole \\(\\ell\\) space) and focus on the corresponding angular power spectra. The first one is the angular cross correlation power spectrum between galaxy ellipticity (\\(\\kappa^{s}\\)) in the \\(i\\)-th redshift bin and the one in the \\(j\\)-th redshift bin, \\(C^{(1)}_{ij}(\\ell)\\).
\\[C^{(1)}_{ij}(\\ell)=C^{GG}_{ij}(\\ell)+C^{IG}_{ij}(\\ell)+C^{IG}_{ji}(\\ell)+C^{ II}_{ij}(\\ell). \\tag{4}\\]
Here, \\(C^{\\alpha\\beta}_{ij}\\) is the angular cross correlation power spectrum between quantity \\(\\alpha\\) in the \\(i\\)-th redshift bin and quantity \\(\\beta\\) in the \\(j\\)-th redshift bin. \\(\\alpha,\\beta=G,I,g\\), where the superscript (or subscript in denoting the redshift and distance) \\(G\\) denotes gravitational lensing (\\(\\kappa\\)), \\(I\\) the galaxy intrinsic alignment (non-random intrinsic ellipticity) and \\(g\\) the galaxy number density distribution in the corresponding redshift bin (\\(\\delta_{\\Sigma}\\)).
The ellipticity-ellipticity pair relevant to the current paper has \(i<j\). Since \(C^{IG}_{ji}\ll C^{IG}_{ij}\) as long as the catastrophic error is reasonably small, we have
\\[C^{(1)}_{ij}(\\ell)\\simeq C^{GG}_{ij}(\\ell)+C^{IG}_{ij}(\\ell)+2C^{II}_{ij}(\\ell) \\ \\ \\mbox{when}\\ i<j. \\tag{5}\\]For bin size \\(\\Delta z\\gtrsim 0.2\\), as long as the catastrophic photo-z errors are reasonably small, \\(C_{ij}^{II}\\) of non-adjacent bins (\\(i<j-1\\)) is in general negligible with respect to the GI correlation (e.g. Schneider & Bridle, 2009), because the II correlation only exists at small line-of-sight separation. However, for adjacent bins (\\(i=j-1\\)), the II correlation can be non-negligible for \\(\\Delta z\\sim 0.2\\)(Schneider & Bridle, 2009). The self-calibration technique proposed in the current paper is not able to correct for the II correlation, for which other methods (Joachimi & Schneider, 2008, 2009; Zhang, 2010) may be applicable. It only applies to correct for the GI correlation \\(C^{GI}\\). We express the GI correlation as a fractional error to the lensing measurement,
\\[f_{ij}^{I}(\\ell)\\equiv\\frac{C_{ij}^{IG}(\\ell)}{C_{ij}^{GG}(\\ell)}. \\tag{6}\\]
\\(f_{ij}^{I}\\) is the fractional GI contamination to the lensing measurement. \\(f_{ij}^{I}\\) (\\(i<j\\)) and \\(f_{ik}^{I}\\) (\\(i<k\\)) are not independent, since both describe the intrinsic alignment in the \\(i\\)-th redshift bin and thus
\\[\\frac{f_{ij}^{I}(\\ell)}{f_{ik}^{I}(\\ell)}\\simeq\\left(\\frac{W_{ij}}{W_{ik}} \\right)\\left(\\frac{C_{ik}^{GG}(\\ell)}{C_{ij}^{GG}(\\ell)}\\right). \\tag{7}\\]
However, for its clear meaning as a fractional error in the lensing measurement, and for uncertainties in the intrinsic alignment modeling, we adopt it, instead of \\(C_{ij}^{IG}\\) itself, to express the GI contamination throughout the paper. The measured GI correlation is an anti-correlation (\\(f_{ij}^{I}<0\\)), because the lensing induced shear is tangential to the gradient of the gravitational potential while the intrinsic shear is parallel to the gradient. Throughout the paper, we often neglect this negative sign, since we will work under the limit \\(f_{ij}^{I}\\ll 1\\) and thus its sign does not affect our error analysis. Our self-calibration technique works in principle for any value of \\(f_{ij}^{I}\\). The results presented in this paper can be extended to other values of \\(f_{ij}^{I}\\) straightforwardly.
The second correlation is between the galaxy density (\\(\\delta_{\\Sigma}\\)) in the \\(i\\)-th redshift bin and the galaxy ellipticity (\\(\\kappa^{s}\\)) in the \\(j\\)-th redshift bin, \\(C_{ij}^{(2)}\\). Galaxy-galaxy lensing in general focuses on \\(C_{ij}^{(2)}\\) (\\(i<j\\)). Zhang et al. (2009) showed that the measurement \\(C_{ij}^{(2)}\\) (\\(i>j\\)) contains valuable information on photo-z outliers. The one relevant for our GI self-calibration is
\\[C_{ii}^{(2)}(\\ell)=C_{ii}^{gG}(\\ell)+C_{ii}^{gI}(\\ell)\\ . \\tag{8}\\]
The \\(C_{ii}^{gI}\\) term contains here clearly contains valuable information on the intrinsic alignment. The two terms on the right hand side has different dependences on photo-z error. Larger photo-z errors tend to increase \\(C^{gG}\\) and decrease \\(C^{Ig}\\).
The third set of cross correlations is between the galaxy density (\(\delta_{\Sigma}\)) in the \(i\)-th redshift bin and in the \(j\)-th redshift bin, \(C_{ij}^{(3)}\). It has been shown that \(C_{ij}^{(3)}\) (\(i\neq j\)) is a sensitive probe of photo-z outliers (Schneider et al., 2006; Zhang et al., 2009). Our GI self-calibration requires the measurement
\\[C_{ii}^{(3)}(\\ell)=C_{ii}^{gg}(\\ell). \\tag{9}\\]
Our self-calibration aims to estimate and eliminate the GI contamination \\(C_{ij}^{IG}\\) in Eq. 5 from the measurements \\(C_{ii}^{(2)}\\) (Eq. 8) and \\(C_{ii}^{(3)}\\) (Eq. 9), both are available in the same survey. This method is thus dubbed the GI self-calibration.
For the self-calibration to work, \(f_{ij}^{I}(\ell)\) must be sufficiently large for the band power \(C_{ii}^{Ig}(\ell)\) at the corresponding \(\ell\) bin to be detected through the measurement \(C_{ii}^{(2)}(\ell)\). We denote the threshold as \(f_{ij}^{\rm thresh}(\ell)\). For brevity, we often neglect the argument \(\ell\) in \(f_{ij}\). When \(f_{ij}^{I}\geq f_{ij}^{\rm thresh}\), we can apply the self-calibration to reduce the GI contamination. The residual GI contamination after the self-calibration is expressed as a _residual fractional_ error on the lensing measurement, in which \(\Delta f_{ij}\) denotes the statistical error and \(\delta f_{ij}\) denotes the systematical error. Thus the self-calibration performance is quantified by \(f_{ij}^{\rm thresh}\), \(\Delta f_{ij}\) and \(\delta f_{ij}\). The smaller these quantities are, the better the performance. We will numerically evaluate these quantities later.
### The self-calibration
The starting point of the GI self-calibration is a simple scaling relation between \\(C_{ij}^{IG}\\) and \\(C_{ii}^{Ig}\\) that we find,
\\[C_{ij}^{IG}(\\ell)\\simeq\\left[\\frac{W_{ij}\\Delta_{i}}{b_{i}(\\ell)}\\right]C_{ii} ^{Ig}(\\ell). \\tag{10}\\]
Figure 1.— A schematic description of the GI self-calibration technique. Here, \(G\) denotes gravitational lensing, \(I\) the non-random galaxy intrinsic alignment and \(g\) the galaxy number density distribution in the corresponding redshift bin. Here, galaxies in the \(j\)-th redshift bin have higher photo-z than those in the \(i\)-th redshift bin. The GI contamination \(C_{ij}^{IG}\) is expressed as a fractional error in the lensing power spectrum \(C_{ij}^{GG}\), namely, \(f_{ij}^{I}\equiv C_{ij}^{IG}/C_{ij}^{GG}\). The self-calibration operates when \(f_{ij}^{I}\) is bigger than a certain threshold \(f_{ij}^{\rm thresh}\). Depending on the fiducial value of \(f_{ij}^{I}\), residual GI contamination from different sources dominates, which we highlight as bold lines in the lower part of the figure. Refer to Eq. 6, 19, 21 & 24 for definitions of the corresponding variables.
Here, \\(\\chi_{i}\\equiv\\chi(\\bar{z}_{i})\\), \\(b_{i}(\\ell)\\) is the galaxy bias in this redshift bin at the corresponding angular scale \\(\\ell\\). \\(W_{ij}\\) is the weighted lensing kernel defined by
\\[W_{ij}\\equiv\\int_{0}^{\\infty}dz_{L}\\int_{0}^{\\infty}dz_{G}\\left[W_{L}(z_{L},z_{G })\\bar{n}_{i}(z_{L})\\bar{n}_{j}(z_{G})\\right]. \\tag{11}\\]
Notice that \\(\\bar{n}_{i}\\) is normalized such that \\(\\int_{0}^{\\infty}\\bar{n}_{i}(z)dz=1\\). \\(\\Delta_{i}\\) is an effective width of the \\(i\\)-th redshift bin, defined by
\\[\\Delta_{i}^{-1}\\equiv\\int_{0}^{\\infty}\\bar{n}_{i}^{2}(z)\\frac{dz}{d\\chi}dz. \\tag{12}\\]
Refer to the appendix §A for the derivation.
This scaling relation arises from the fact that, both GI and I-g cross correlations are determined by the matter-intrinsic alignment in the \\(i\\)-th redshift bin. The prefactors in Eq. 10 are simply the relative weighting between the two. In this sense, this scaling relation is rather generic.
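The prefactor \(W_{ij}\Delta_{i}/b_{i}\) of Eq. 10 can be evaluated directly from Eq. 11 & 12 once the true distributions \(\bar{n}_{i}(z)\) are specified. The sketch below assumes a top-hat photo-z selection convolved with Gaussian scatter \(\sigma_{P}=0.05(1+z)\), illustrative bin choices and \(\Omega_{m}=0.27\); it reproduces the order of magnitude \(W_{ij}\Delta_{i}=O(10^{-2})\) invoked in §3.

```python
# A sketch of the Eq. 10 prefactor: W_ij (Eq. 11) and Delta_i (Eq. 12) for
# Gaussian photo-z scatter. Bin centers, widths and Omega_m are assumptions.
import numpy as np
from scipy.stats import norm

Omega_m = 0.27
z = np.linspace(1e-3, 4.0, 300)
dz = z[1] - z[0]
Ez = np.sqrt(Omega_m * (1 + z)**3 + 1 - Omega_m)  # H(z)/H0
chi = np.cumsum(dz / Ez)                          # comoving distance, c/H0 units

def n_true(zbar, width, sigma0=0.05):
    # True n(z) of one photo-z bin: a top-hat in z^P convolved with the
    # Gaussian photo-z PDF, normalized to unit integral.
    sig = sigma0 * (1 + z)
    p = (norm.cdf(zbar + width / 2, loc=z, scale=sig)
         - norm.cdf(zbar - width / 2, loc=z, scale=sig))
    return p / (p.sum() * dz)

n_i = n_true(0.6, 0.2)  # the "i-th" bin
n_j = n_true(1.0, 0.2)  # the "j-th" bin

# Lensing kernel W_L(z_L, z_G) of Eq. 2 on the grid (rows: lens, cols: source);
# the clip enforces W_L = 0 for z_L >= z_G.
WL = (1.5 * Omega_m * (1 + z)[:, None] * chi[:, None]
      * np.clip(1 - chi[:, None] / chi[None, :], 0.0, None))

W_ij = n_i @ WL @ n_j * dz**2             # Eq. 11
Delta_i = 1.0 / np.sum(n_i**2 * Ez * dz)  # Eq. 12, with dz/dchi = E(z)

print(W_ij * Delta_i)  # O(1e-2): hence the amplification factor 1/(W_ij Delta_i)
```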
The basic procedure of the self-calibration, as illustrated in Fig. 1, is as follows.
* Extract \\(C_{ii}^{Ig}\\) from the measurement \\(C_{ii}^{(2)}\\) in the \\(i\\)-th photo-z bin. This exercise is non-trivial, since \\(C_{ii}^{(2)}\\) actually measures the sum of \\(C_{ii}^{Ig}\\) and \\(C_{ii}^{Gg}\\), and \\(C_{ii}^{Gg}\\) is often non-negligible due to relatively large photo-z error. We find a method to simultaneously measure the two without resorting to spect-z information. The idea will be elaborated in SS2.3 and the measurement error in \\(C_{ii}^{Ig}\\) will be calculated in SS3.1.
* Measure the galaxy bias \(b_{i}(\ell)\) from the measurement \(C_{ii}^{(3)}(\ell)\) (§3.2).
* Calculate \\(C_{ii}^{IG}\\) from the above measurements and Eq. 10 and then subtract it from Eq. 4.
### Measuring \\(C_{ii}^{Ig}\\)
An obstacle in measuring \(C_{ii}^{Ig}\) comes from the contamination \(C_{ii}^{Gg}\) (Eq. 8). For a spectroscopic sample, this contamination is straightforward to eliminate. We just throw away pairs where the redshift of the galaxy used to measure the shape is lower than that of the galaxy used to measure the number density (Mandelbaum et al., 2006; Hirata et al., 2007; Okumura & Jing, 2009). This method is robust, but it is limited to spec-z samples.
For a photo-z sample, the above technique does not work, since the photo-z error is large. In this case, the true galaxy distribution, even for photo-z bin size \(\Delta z\to 0\), has a relatively large width \(\gtrsim 0.1(1+z)\), no matter how narrow the photo-z bin is. In practice, the photo-z bin size is often \(\gtrsim 0.2\), which further increases the effective width. Thus, a galaxy in this photometric redshift bin has a large chance of lensing another galaxy in the same redshift bin, with a non-negligible lensing weight. For this reason, \(C_{ii}^{Gg}\) may not be negligible compared to \(C_{ii}^{Ig}\), even if the photo-z bin size is infinitesimal. We have numerically compared \(C_{ii}^{Gg}\) with \(C_{ii}^{Ig}\) calculated based on the intrinsic alignment model of Schneider & Bridle (2009) with their fiducial parameters, and found that the two terms can indeed be comparable. For example, at a typical lensing angular scale \(\ell=10^{3}\) and a typical redshift \(z=1\), \(C_{ii}^{Gg}/C_{ii}^{Ig}\) is 30% for vanishing bin size and increases with bin size. Only when the redshift is sufficiently low, or the redshift error and the bin size are both sufficiently small, may the G-g correlation be safely neglected.
Nevertheless, the photo-z measurement contains useful information and allows us to measure \(C_{ii}^{Ig}\). Our method to separate \(C_{ii}^{Gg}\) from \(C_{ii}^{Ig}\) relies on their distinctive and predictable dependences on the relative position of the galaxy pair. Let us denote the redshift of the galaxy in the pair used for the shape measurement as \(z_{G}^{P}\) and that of the other one used for the number density measurement as \(z_{g}^{P}\). The I-g correlation does not depend on the ordering along the line of sight, as long as the physical separation is fixed. In other words, the I-g correlation for pairs with \(z_{G}^{P}>z_{g}^{P}\) is statistically identical to that for pairs with \(z_{G}^{P}<z_{g}^{P}\), as long as \(|z_{G}^{P}-z_{g}^{P}|\) is fixed. On the other hand, the G-g correlation does care about the ordering along the line of sight. The G-g correlation for pairs with \(z_{G}^{P}>z_{g}^{P}\) is statistically larger than that for pairs with \(z_{G}^{P}<z_{g}^{P}\), due to the lensing geometry dependence.
We can then construct two observables from the ellipticity-density correlation. (1) One is \(C_{ii}^{(2)}(\ell)\), weighting all pairs equally. (2) The other is \(C_{ii}^{(2)}|_{S}(\ell)\), in which we only count the cross correlation between those pairs
Figure 2.— \(Q_{i}(\ell)\equiv C_{ii}^{Gg}|_{S}(\ell)/C_{ii}^{Gg}(\ell)\) describes the suppression of the lensing-galaxy correlation in the same redshift bin after throwing away those pairs with source redshift (photo-z) higher than lens redshift (photo-z). Only when \(Q\) deviates significantly from unity can we extract the I-g correlation from the shape-density correlation measurement in the same redshift bin. \(Q\) is nearly scale independent, since it is the ratio of two power spectra of similar shape. Overall, \(Q\sim 1/2\). It increases with redshift, as expected from the larger photo-z rms error at higher redshift and hence larger effective redshift width. This figure is a key result demonstrating the feasibility of the self-calibration technique.
with \\(z_{G}^{P}<z_{g}^{P}\\). The subscript \"\\(S\\)\" denotes the corresponding quantities under this weighting. From the argument above, we have \\(C_{ii}^{Ig}(\\ell)=C_{ii}^{Ig}|s(\\ell)\\). On the other hand, \\(C_{ii}^{Gg}|_{S}(\\ell)<C_{ii}^{Gg}(\\ell)\\), since those \\(z_{G}^{P}>z_{g}^{P}\\) pairs that we disregard contribute more to the lensing-galaxy correlation. We denote the suppression by
\\[Q(\\ell)\\equiv\\frac{C_{ii}^{Gg}(\\ell)|_{S}}{C_{ii}^{Gg}(\\ell)}. \\tag{13}\\]
\\(Q\\) is sensitive to photo-z errors. In general catastrophic errors drive \\(Q\\) towards 1. \\(Q=1\\) if the photo-z is completely wrong and has no correlation with the true redshift and \\(Q=0\\) if the photo-z is 100% accurate. Usually \\(0<Q<1\\). \\(Q\\) can be calculated from the galaxy redshift distribution (the appendix SSB) and is thus in principle an observable too. From the two observables
\\[C_{ii}^{(2)}(\\ell) = C_{ii}^{Ig}(\\ell)+C_{ii}^{Gg}(\\ell)\\ \\,\\] \\[C_{ii}^{(2)}|_{S}(\\ell) = C_{ii}^{Ig}(\\ell)+C_{ii}^{Gg}|s(\\ell)\\ \\, \\tag{14}\\]
we obtain the solution for \(C_{ii}^{Ig}\),
\\[\\hat{C}_{ii}^{Ig}(\\ell)=\\frac{C_{ii}^{(2)}|_{S}(\\ell)-Q(\\ell)C_{ii}^{(2)}( \\ell)}{1-Q(\\ell)}. \\tag{15}\\]
For it to be non-singular, \\(Q\\) must deviate from unity (\\(Q<1\\)). We numerically evaluate this quantity and find that, for the survey specifications presented in this paper, \\(Q\\sim 1/2\\) (Fig. 2). The significant deviation of \\(Q\\) from unity is not a coincidence. It is in fact a manifestation that the lensing efficiency changes significantly across the redshift interval \\(\\sigma_{P}\\). We thus expect the \\(C_{ii}^{Ig}\\) estimator (Eq. 15) to be applicable in general.
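Operationally, Eq. 15 is just the solution of the two-by-two linear system Eq. 14. A minimal sketch, with made-up band powers for illustration:

```python
# A minimal sketch of the estimator Eq. 15; all input numbers are made up.
def C_Ig_hat(C2, C2_S, Q):
    """Solve Eq. 14 for C^{Ig}; singular as Q -> 1 (Q ~ 1/2 in practice)."""
    if abs(1.0 - Q) < 1e-3:
        raise ValueError("Q too close to unity; the estimator is singular")
    return (C2_S - Q * C2) / (1.0 - Q)

# Toy input: C^{Gg} = 2e-9 and C^{Ig} = 1e-9 with Q = 0.5, so that
# C^(2) = 3e-9 and C^(2)|_S = 1e-9 + 0.5 * 2e-9 = 2e-9.
print(C_Ig_hat(C2=3e-9, C2_S=2e-9, Q=0.5))  # recovers C^{Ig} = 1e-9
```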
## 3. Error Estimation
Unless explicitly specified, we will target LSST throughout this paper to estimate the performance of the self-calibration technique proposed above. LSST plans to cover half the sky (\(f_{\rm sky}=0.5\)) and reach a survey depth of \(\sim 40\) galaxies per arcmin\(^2\). We adopt the galaxy number density distribution \(n(z)\propto z^{2}\exp(-z/0.5)\), the rms shape error \(\gamma_{\rm rms}=0.18+0.042z\) and photo-z scatter \(\sigma_{P}=0.05(1+z)\). We split galaxies into photometric redshift bins with \(\Delta z_{i}=0.2\) centered at \(\bar{z}_{i}=0.2(i+1)\) (\(i=1,2,\cdots\)).2
Footnote 2: The choice of redshift bins is somewhat arbitrary. For example, we could include a lower redshift bin with \(\bar{z}_{i}=0.2\) (\(0.1<z^{P}<0.3\)). The self-calibration certainly works for this bin, since the I-g cross correlation in this bin is easier to extract due to the weaker lensing signal and hence weaker G-g correlation in this bin. The major reason that we do not include this bin is that the lensing signal in this bin is weak. A weak lensing signal may cause confusion about the performance of the self-calibration, due to our choice to express the GI correlation before and after the self-calibration as ratios with respect to the lensing signal. For example, Fig. 3 shows that \(\Delta f_{ij}\) increases toward low redshifts. But this is a manifestation of the weak lensing signal, not of poor performance of the self-calibration.
### Measurement error in \\(C_{ii}^{Ig}\\)
Both \\(C_{ii}^{(2)}\\) and \\(C_{ii}^{(2)}|_{S}\\) have cosmic variance and shot noise errors, which propagate into \\(C_{ii}^{Ig}\\) extracted from the estimator Eq. 15. The error estimation on \\(C_{ii}^{Ig}\\) is non-trivial since errors in \\(C_{ii}^{(2)}\\) and \\(C_{ii}^{(2)}|_{S}\\) are neither completely uncorrelated, nor completely correlated. The derivation is lengthy, so we leave it in the appendix SSC. The final result of the rms error \\(\\Delta C_{ii}^{Ig}\\) in a bin with width \\(\\Delta\\ell\\) is
\\[(\\Delta C_{ii}^{Ig})^{2} = \\frac{1}{2\\ell\\Delta\\ell f_{\\rm sky}}\\left(C_{ii}^{gg}C_{ii}^{GG} +\\left[1+\\frac{1}{3(1-Q)^{2}}\\right]\\right. \\tag{16}\\] \\[\\times\\left[C_{ii}^{gg}C_{ii}^{GG,N}+C_{ii}^{gg,N}(C_{ii}^{GG}+C _{ii}^{II})\\right]\\] \\[\\left.+C_{ii}^{gg,N}C_{ii}^{GG,N}\\left[1+\\frac{1}{(1-Q)^{2}} \\right]\\right)\\.\\]
Here, the superscript \"N\" denotes measurement noise such as shot noise in galaxy distribution and random shape noise. \\(C_{ii}^{gg,N}=4\\pi f_{\\rm sky}/N_{i}\\) and \\(C_{ii}^{GG,N}=4\\pi f_{\\rm sky}\\gamma_{\\rm rms}^{2}/N_{i}\\), where \\(N_{i}\\) is the total number of galaxies in the \\(i\\)-th redshift bin.
From Eq. 16, \\(\\Delta C_{ii}^{Ig}\\) is insensitive to the intrinsic alignment, as long as the II correlation is sub-dominant with respect to the GG correlation. We will work at this limit. If the intrinsic alignment is too small, the measurement error \\(\\Delta C_{ii}^{Ig}\\) will be larger than \\(C_{ii}^{Ig}\\). \\(\\Delta C_{ii}^{Ig}=C_{ii}^{Ig}\\) hence
Figure 3.— The threshold of the applicability of the self-calibration technique, \(f_{ij}^{\rm thresh}\), and the residual statistical uncertainty \(\Delta f_{ij}^{(a)}\) (solid lines with data points). Refer to Eq. 19 for the definition of \(\Delta f_{ij}^{(a)}\). The self-calibration applies for sufficiently large GI contamination (\(f_{ij}^{I}>f_{ij}^{\rm thresh}\)) and reduces the fractional error from \(f_{ij}^{I}\) to \(\Delta f_{ij}^{(a)}\). Here, the superscript \((a)\) denotes the error from the \(C_{ii}^{Ig}\) measurement. Notice that \(f_{ij}^{\rm thresh}=\Delta f_{ij}^{(a)}\) and both are insensitive to the fiducial \(f_{ij}^{I}\). The cosmic variance in the lensing field and the random shape fluctuation set the minimum _fractional_ statistical error \(e_{ij}^{\rm min}\) in the \(C_{ij}^{GG}\) measurement, which we plot as dashed lines. The numerical estimation shown is for redshift bins \(\bar{z}_{i}=0.2(i+1)\) (\(i=1,2,\cdots\)) and multipole bin size \(\Delta\ell=0.2\ell\) in LSST. In general \(\Delta f_{ij}^{(a)}<e_{ij}^{\rm min}\) and thus the residual error after the self-calibration is negligible. Since both \(\Delta f_{ij}^{(a)}\) and \(e_{ij}^{\rm min}\) scale in similar ways, this conclusion holds in general for other lensing surveys.
sets a threshold on \(f^{I}_{ij}\). Combining Eq. 10 and the definition \(f^{I}_{ij}\equiv C^{IG}_{ij}/C^{GG}_{ij}\) (Eq. 6), we obtain this threshold as
\\[f^{\\rm thresh}_{ij}=\\left(\\frac{\\Delta C^{Ig}_{ii}}{C^{GG}_{ij}}\\right)\\left( \\frac{W_{ij}\\Delta_{i}}{b_{i}(\\ell)}\\right). \\tag{17}\\]
\\(f^{\\rm thresh}_{ij}\\) has two important meanings. (1) It describes the minimum intrinsic alignment (\\(f^{I}_{ij}=f^{\\rm thresh}_{ij}\\)) that can be detected through our method with S/N=1. Thus it also defines the lower limit beyond which our self-calibration technique is no longer applicable. (2) It describes the self-calibration accuracy resulting from \\(C^{Ig}_{ii}\\) measurement error. The measurement error \\(\\Delta C^{Ig}_{ii}\\) propagates into an error in \\(C^{IG}_{ij}\\) determination through Eq. 10 and hence leaves a residual statistical error in the GG measurement. We denote this error as \\(\\Delta f^{(a)}_{ij}\\). Since \\(\\Delta f^{(a)}_{ij}/f^{I}_{ij}=\\Delta C^{Ig}_{ii}/C^{Ig}_{ii}\\), combining Eq. 10, we have
\\[\\Delta f^{(a)}_{ij}=\\left(\\frac{\\Delta C^{Ig}_{ii}}{C^{GG}_{ij}}\\right)\\left( \\frac{W_{ij}\\Delta_{i}}{b_{i}(\\ell)}\\right). \\tag{18}\\]
Also, we find an important relation between the two,
\\[\\Delta f^{(a)}_{ij}=f^{\\rm thresh}_{ij}. \\tag{19}\\]
This relation can be understood as follows. When the intrinsic alignment \(f^{I}_{ij}>f^{\rm thresh}_{ij}\), it can be inferred through the measurement of \(C^{Ig}_{ii}\) and hence be corrected. Since the \(C^{Ig}_{ii}\) measurement has statistical error \(\Delta C^{Ig}_{ii}\) (Eq. 16), this correction is imperfect. The residual error in \(f^{I}_{ij}\) is set by the same \(\Delta C^{Ig}_{ii}\) that determines \(f^{\rm thresh}\), so we have the relation Eq. 19. If only our method is applied, we are only able to detect the intrinsic alignment with an amplitude \(f^{I}_{ij}>f^{\rm thresh}\) and correct it to a level of \(\Delta f^{(a)}_{ij}=f^{\rm thresh}\).
Once the intrinsic alignment is sufficiently large to be detected (\(f^{I}_{ij}>f^{\rm thresh}_{ij}\)), our self-calibration technique can detect and thus eliminate the GI contamination. It converts a systematical error in the lensing measurement with amplitude \(f^{I}_{ij}\) into a statistical error with rms \(\Delta f^{(a)}_{ij}=f^{\rm thresh}_{ij}<f^{I}_{ij}\). We notice that \(\Delta f^{(a)}_{ij}\) is insensitive to \(f^{I}_{ij}\).
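The error propagation of Eq. 16-18 is equally simple to script. In the sketch below, the band powers, galaxy count and the prefactor \(W_{ij}\Delta_{i}/b_{i}\) are placeholder numbers, not the fiducial LSST spectra behind Fig. 3.

```python
# A sketch of Eqs. 16-18 for a single (ell, i, j) band; placeholder inputs.
import numpy as np

def f_thresh(ell, dell, f_sky, C_gg, C_GG_ii, C_GG_ij, C_II,
             N_i, gamma_rms, Q, Wij_Di_over_bi):
    # Shot/shape noise spectra quoted below Eq. 16.
    C_gg_N = 4 * np.pi * f_sky / N_i
    C_GG_N = 4 * np.pi * f_sky * gamma_rms**2 / N_i
    # Eq. 16: variance of the extracted C_ii^{Ig}.
    var = (C_gg * C_GG_ii
           + (1 + 1 / (3 * (1 - Q)**2))
           * (C_gg * C_GG_N + C_gg_N * (C_GG_ii + C_II))
           + C_gg_N * C_GG_N * (1 + 1 / (1 - Q)**2)) / (2 * ell * dell * f_sky)
    # Eqs. 17 & 18: propagate through the scaling relation Eq. 10.
    return np.sqrt(var) / C_GG_ij * Wij_Di_over_bi

print(f_thresh(ell=1e3, dell=200.0, f_sky=0.5, C_gg=1e-7, C_GG_ii=1e-9,
               C_GG_ij=2e-9, C_II=0.0, N_i=5e8, gamma_rms=0.22, Q=0.5,
               Wij_Di_over_bi=1e-2))
```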
Numerical results on \(\Delta f^{(a)}_{ij}=f^{\rm thresh}_{ij}\) are evaluated through Eq. 18 along with Eq. 16 and are shown in Fig. 3. For comparison, we also plot _the minimum G-G error_ in the \(C^{GG}_{ij}\) measurement. It is the rms fluctuation induced by the cosmic variance in the G-G correlation and shot noise due to random galaxy ellipticities, in the idealized case of no other error sources such as the intrinsic alignment. This minimum G-G error sets the ultimate lower limit of the fractional measurement error on \(C^{GG}_{ij}\) (\(i\neq j\)),
\\[\\left(e^{\\rm min}_{ij}\\right)^{2}=\\frac{C^{GG,2}_{ij}+(C^{GG}_{ii}+C^{GG,N}_{ii })(C^{GG}_{jj}+C^{GG,N}_{jj})}{2\\ell\\Delta\\ell f_{\\rm sky}C^{GG,2}_{ij}}. \\tag{20}\\]
When \\(\\Delta f^{(a)}_{ij}\\) is smaller than \\(e^{\\rm min}_{ij}\\), the residual error after the self-calibration will then be negligible, meaning a self-calibration with little cosmological information loss. We find that this is indeed the case in general. Fig. 3 is the forecast for LSST.3 Since both \\(\\Delta f^{(a)}_{ij}\\) and \\(e^{\\rm min}_{ij}\\) scale in a similar way with respect to survey specifications such as the sky coverage and galaxy number density, this conclusion also holds for other lensing surveys such as CFHTLS, DES, Pan-Starrs, Euclid and JDEM.
Footnote 3: We do notice that in some cases, especially when one of the photo-z bins is at low redshift, \(\Delta f^{(a)}_{ij}>e^{\rm min}_{ij}\) (Fig. 3), leading to a non-negligible loss in cosmological information for the relevant redshift bins.
How can the GI contamination be corrected to an accuracy even below the statistical limit of GG measurement? Equivalently, why is \\(f^{I}_{ij}\\) as small as the one shown in Fig. 3 detectable? This surprising result requires some explanation. The reason is that \\(C^{Ig}_{ii}\\) is amplified with respect to \\(C^{IG}_{ij}\\) by a large factor \\(1/W_{ij}\\Delta_{i}\\sim O(10^{2})\\) (Eq. 10). Thus a small GI contamination (\\(f^{I}_{ij}\\ll 1\\)) can still cause a detectable \\(C^{Ig}_{ii}\\). This explains the small \\(f^{\\rm thresh}_{ij}=\\Delta f^{(a)}_{ij}\\).
### Measuring the galaxy bias
The second uncertainty in the self-calibration comes from the measurement of the galaxy bias, \\(\\Delta b_{i}(\\ell)\\). From Eq. 10, this uncertainty induces a residual statistical error
\\[\\Delta f^{(b)}_{ij}=f^{I}_{ij}(\\Delta b_{i}(\\ell)/b_{i}(\\ell)). \\tag{21}\\]
There are several possible ways to obtain \(b_{i}(\ell)\), such as combining galaxy-galaxy lensing and galaxy clustering measurements (e.g. Sheldon et al., 2004), combining 2-point and 3-point galaxy clustering (e.g. Guo & Jing, 2009), fitting against the halo occupation (Zheng et al., 2005) and conditional luminosity function (Yang et al., 2003), or analyzing counts-in-cells measurements (Szapudi & Pan, 2004). Alternatively, \(b_{i}(\ell)\) can be inferred from the galaxy density-galaxy density correlation measurement alone, \(C^{(3)}_{ii}(\ell)=C^{gg}_{ii}(\ell)\simeq b^{2}_{i}(\ell)C^{mm}_{ii}(\ell)\). Here, \(C^{mm}_{ii}\) is the matter angular power spectrum, with the same weighting as the galaxies. Both uncertainties in the theoretical prediction of \(C^{mm}_{ii}\) and the measurement error in \(C^{(3)}_{ii}\) affect the \(b_{i}(\ell)\) measurement. Given a cosmology, one can evolve the matter power spectrum, tightly constrained at the recombination epoch by CMB experiments, to low redshift and thus predict \(C^{mm}_{ii}\). As long as the associated uncertainty is smaller than 10%, the induced error will be sub-dominant to the systematical error discussed later in §3.3. This is an important issue for further investigation.
On the other hand, the statistical error in \\(b_{i}(\\ell)\\) induced by the galaxy clustering measurement error is
\\[\\frac{\\Delta b_{i}(\\ell)}{b_{i}(\\ell)}\\sim\\frac{1}{2}\\sqrt{\\frac{1}{\\ell\\Delta \\ell f_{\\rm sky}}}\\times\\left(1+\\frac{C^{gg,N}_{ii}(\\ell)}{C^{gg}_{ii}(\\ell)} \\right). \\tag{22}\\]
This rough estimate suffices for the purpose of this paper, for reasons that will become clear shortly. This error is negligible for a number of reasons. (1) It is in general much smaller than the systematical error in the scaling relation, \(\delta f^{(c)}_{ij}\). As will be shown in §3.3, \(\delta f^{(c)}_{ij}\sim 0.1f^{I}_{ij}\). On the other hand, \(b_{i}(\ell)\) can in general be measured with better than 10% accuracy.
For example, for LSST at \(\ell=100\) with \(\Delta\ell=0.2\ell\), \(\Delta b_{i}(\ell)/b_{i}(\ell)\simeq 1.6\%\). \(b_{i}(\ell)\) can be measured with higher accuracy toward smaller scales, until shot noise dominates. Thus \(\Delta f_{ij}^{(b)}\ll\delta f_{ij}^{(c)}\) at the relevant scales. (2) It is smaller than the minimum statistical error in the \(C_{ij}^{GG}\) measurement (Fig. 3). First of all, galaxy clustering measurements are in general more accurate than lensing measurements. Second, the impact of uncertainty in \(b_{i}(\ell)\) on the self-calibration is modulated by a factor \(f_{ij}^{I}\) (\(\Delta f_{ij}^{(b)}=f_{ij}^{I}(\Delta b_{i}(\ell)/b_{i}(\ell))\)). Unless \(f_{ij}^{I}>1\), \(\Delta f_{ij}^{(b)}\) is suppressed. Thus we expect the \(b_{i}(\ell)\)-induced error \(\Delta f_{ij}^{(b)}\) to be negligible even compared to the minimum statistical error in the \(C_{ij}^{GG}\) measurement. From the above general argument, this conclusion should hold for most, if not all, lensing surveys. We show one example for LSST. Even for a rather large \(f_{ij}^{I}=1\), uncertainty in \(b_{i}(\ell)\) only causes a statistical error of \(\Delta f_{ij}^{(b)}=1.6\%\) at \(\ell=100\), negligible compared to the statistical uncertainties in the \(C_{ij}^{GG}\) measurement (Fig. 3). The above conclusions are safe even if Eq. 22 underestimates the error in \(b_{i}(\ell)\) by a factor of a few. This is the reason that we do not seek a more robust estimate of the measurement error in \(b_{i}(\ell)\).
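The quoted 1.6% follows directly from Eq. 22 in the sample-variance dominated regime, as the one-line check below (with the numbers just stated) shows:

```python
# Eq. 22 at ell = 100, Delta ell = 0.2 ell, f_sky = 0.5, neglecting shot
# noise (C^{gg,N} << C^{gg} at these scales).
import numpy as np
ell, dell, f_sky = 100.0, 20.0, 0.5
print(0.5 * np.sqrt(1.0 / (ell * dell * f_sky)))  # ~0.016, i.e. the 1.6%
```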
From the above argument, the errors \(\Delta f_{ij}^{(a)}\) and \(\Delta f_{ij}^{(b)}\) arise from different sources and are thus independent of each other. The combined error is then \(\sqrt{(\Delta f_{ij}^{(a)})^{2}+(\Delta f_{ij}^{(b)})^{2}}\). Since the galaxy bias induced error is likely sub-dominant to either the error source (\(a\)) or to the one to be discussed in §3.3, we will neglect it for the rest of the paper.
### The accuracy of the \\(C_{ij}^{IG}\\)-\\(C_{ii}^{Ig}\\) relation
A key ingredient in the self-calibration is Eq. 10, which links the observable \\(C_{ii}^{Ig}\\) to the GI contamination. However, Eq. 10 is not exact and we quantify its accuracy by
\\[\\epsilon_{ij}(\\ell)\\equiv\\frac{b_{i}(\\ell)C_{ij}^{IG}(\\ell)}{W_{ij}\\Delta_{i }C_{ii}^{Ig}(\\ell)}-1. \\tag{23}\\]
\\(\\epsilon_{ij}\\) also quantifies the induced residual systematic error,
\\[\\delta f_{ij}^{(c)}=\\epsilon_{ij}f_{ij}^{I}. \\tag{24}\\]
We present a rough estimation by adopting a toy model
\\[\\Delta_{mI}^{2}(k,z)\\propto\\Delta_{gI}^{2}(k,z)\\propto\\Delta_{m}^{2}(k,z)(1+z )^{\\beta} \\tag{25}\\]
with \\(\\beta=-1,0,1\\). Here, \\(\\Delta_{mI}^{2}\\), \\(\\Delta_{gI}^{2}\\) and \\(\\Delta_{m}^{2}\\) are the 3D matter-intrinsic alignment, galaxy-intrinsic alignment cross correlation power spectrum (variance) and matter power spectrum, respectively. The accuracy of Eq. 10 is not only affected by the scale dependence of corresponding 3D power spectra, but also their redshift evolution (the appendix A). Theoretical models of the intrinsic alignment (e.g. Schneider & Bridle 2009) show that the redshift evolution may not follow the evolution in the density field. For this reason, we add an extra redshift dependence \\((1+z)^{\\beta}\\) in Eq. 25. This recipe is completely arbitrary, but it helps to demonstrate the robustness of Eq. 10.
Numerical results are shown in Fig. 4. We see that for most \\(ij\\) pairs, \\(|\\epsilon_{ij}|\\) is less than 10%, meaning that we are able to suppress the GI contamination by a factor of 10 or larger, if other errors are negligible.
We notice that the largest inaccuracy of Eq. 10 occurs for adjacent bins (\(i,j=i+1\)), where it is 10%-20%. This is caused by the lensing geometry dependence. Eq. 10 would be quite accurate if the integrands in Eq. A1 & A3 varied slowly with redshift. However, since the \(i\)-th bin and the \(j=i+1\)-th bin are close in redshift, the lensing kernel \(W_{j}(z)\) varies quickly over the \(i\)-th redshift bin. This fast redshift evolution degrades the accuracy of the scaling relation and thus causes a larger \(|\epsilon_{ij}|\). On the other hand, \(W_{j}(z)\) changes more slowly over other bins with \(i\neq j-1\), since there the source-lens separation is larger. For this reason, Eq. 10 is more accurate for these bins. The self-calibration technique does not work excellently for adjacent bins, but a factor of 5 reduction in the GI contamination is still achievable.
The scaling relation accuracy is sensitive to the photo-z accuracy, \(\epsilon_{ij}\propto\sigma^{2}\simeq\sigma_{p}^{2}+(\Delta z)^{2}/12\). Here, \(\sigma\) is the rms redshift dispersion in the corresponding redshift bin. One can obtain the above relation by perturbing Eq. A1 & A3 around the median redshift and keeping terms up to second order. Thus, if the photo-z accuracy can be significantly improved, the accuracy of Eq. 10 can be significantly improved too. For example, if \(\sigma_{p}=0.03(1+z)\) instead of the fiducial value \(0.05(1+z)\), the scaling relation accuracy can be improved by a factor of \(\sim 2\)-3. This would allow us to suppress the GI contamination by a factor of
Figure 4.— \\(\\epsilon_{ij}\\) quantifies the accuracy of Eq. 10, which links the observable \\(C_{ii}^{Ig}\\) to the GI contamination \\(C_{ij}^{IG}\\). It thus quantifies a dominant systematical error of the self-calibration technique, \\(\\delta f_{ij}^{(c)}=\\epsilon_{ij}f_{ij}^{I}\\). Solid, dotted and dash lines represent three toy models of the intrinsic ellipticity evolution. Usually, Eq. 10 is accurate within 10%. However, for those adjacent bins with \\(i,j=i+1\\), due to stronger redshift dependence of the lensing kernel, Eq. 10 is least accurate and \\(\\epsilon_{ij}\\) is largest (right panel).
10-20 or even higher. The shape of the photo-z PDF also matters. Depending on which direction it is skewed, the scaling relation accuracy may be improved or degraded. In §4.2, we will discuss the impact of catastrophic error, which presents as a significant deviation from the adopted Gaussian PDF.
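The \(\sigma^{2}\simeq\sigma_{p}^{2}+(\Delta z)^{2}/12\) scaling above is easy to check numerically; at an assumed \(z=1\) with \(\Delta z=0.2\), tightening \(\sigma_{p}\) from \(0.05(1+z)\) to \(0.03(1+z)\) roughly halves \(\sigma^{2}\), consistent with the quoted factor \(\sim 2\)-3 improvement in the scaling relation accuracy.

```python
# A quick check of sigma^2 ~ sigma_p^2 + (Delta z)^2 / 12 at z = 1, Delta z = 0.2.
for s0 in (0.05, 0.03):
    sigma_p = s0 * (1 + 1.0)  # sigma_p = s0 * (1 + z) at z = 1
    print(s0, sigma_p**2 + 0.2**2 / 12)
# 0.05 -> ~0.0133; 0.03 -> ~0.0069: roughly a factor 2 reduction in sigma^2.
```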
The accuracy of the scaling relation may also be improved by better modeling. \(\epsilon_{ij}\) has a (much) larger chance of being positive (Fig. 4). This behavior is likely general, not limited to the intrinsic ellipticity toy models we investigate. In deriving the scaling relation, both \(C_{ii}^{Ig}\) and \(C_{ij}^{IG}\) are evaluated at the middle redshift \(\bar{z}_{i}\). The weighting function in \(C_{ii}^{Ig}\) roughly peaks at \(\bar{z}_{i}\) while the one in \(C_{ij}^{IG}\) peaks at lower redshift, due to the monotonic decrease of \(W_{j}(z)\) with redshift (until it vanishes). It is this imbalance that causes the general behavior \(\epsilon_{ij}>0\). It also sheds light on improving the \(C_{ij}^{IG}\)-\(C_{ii}^{Ig}\) relation and reducing the associated systematical error: an interesting project for future investigation.
### General behavior of the residual errors
Depending on the nature of the GI correlation, either the error in \\(C_{ii}^{Ig}\\) measurement or the error in the scaling relation can dominate the error budget of the self-calibration, while the one from the galaxy bias is likely sub-dominant. Fig. 1 demonstrates the relative behavior of the three error sources in the self-calibration. The bold lines highlight the dominant error, as a function of the intrinsic alignment amplitude \\(f_{ij}^{I}\\). There are several regimes.
* \\(f_{ij}^{I}\\leq f_{ij}^{\\rm thresh}\\). The intrinsic alignment is too weak to be detected in the galaxy-lensing correlation. For this reason, the self-calibration technique is not applicable. However, this usually also mean the intrinsic alignment is negligible in lensing-lensing measurement, comparing to the associated minimum statistical error (Fig. 3). Thus there is no need to correct for the GI contamination in this case. However, there are important exceptions to the above conclusion. For example, from Fig. 3, when one photo-z bin is at sufficiently low redshift, the GI contamination is undetectable by our method, but the systematical error it induces is non-negligible. Furthermore, our method is insensitive to the intrinsic alignment which is weakly correlated to the density field. Such intrinsic alignment can cause large contamination to the lensing power spectrum \\(C_{ii}^{GG}\\), but leaves no detectable feature in the ellipticity-density measurement \\(C_{ii}^{(2)}\\). In these cases, other methods (e.g. Joachimi & Schneider, 2008, 2009; Zhang, 2010) shall be applied to correct for the intrinsic alignment.
* \\(f_{ij}^{I}>f_{ij}^{\\rm thresh}\\). The self-calibration begins to work. (1) \\(f_{ij}^{I}<\\Delta f_{ij}^{\\rm thresh}/\\epsilon_{ij}\\). The statistical error induced by I-g measurement uncertainty dominates. However, this residual error, \\(\\Delta f_{ij}^{(a)}=\\Delta f_{ij}^{\\rm thresh}\\), is usually negligible, comparing to the minimum statistical error \\(e_{ij}^{\\rm min}\\) in shear measurement (\\(\\Delta f_{ij}^{(a)}<e_{ij}^{\\rm min}\\), Fig. 3). In this domain, the self-calibration technique is promising to work down to the statistical limit of lensing surveys. (2) \\(f_{ij}^{I}>\\Delta f_{ij}^{\\rm thresh}/\\epsilon_{ij}\\). The systematical error arising from the imperfect scaling relation domiantes. The fractional residual error in lensing-lensing measurement is \\(\\delta f_{ij}^{(c)}=\\epsilon_{ij}f_{ij}^{I}\\sim 0.1f_{ij}^{I}\\). This error will be still sub-dominant to the lensing statistical fluctuation, if \\(f_{ij}^{I}<e_{ij}^{\\rm min}/\\epsilon_{ij}\\). If not the case, the self-calibration can work to suppress the GI contamination by a factor of 10. Other complementary techniques such as the nulling technique proposed by Joachimi & Schneider (2008, 2009) shall be applied to further reduce the residual GI contamination down to its statistical limit.
## 4. Other sources of error
There are other sources of error beyond the three discussed above. We discuss qualitatively the magnification bias, catastrophic photo-z error, stochastic galaxy bias and cosmological uncertainties. Based on simplified estimates, we conclude that none of them can completely invalidate the self-calibration technique. A quantitative and comprehensive evaluation of all these errors, including the ones in the previous section, is left for future work.
### Magnification bias
Gravitational lensing not only distorts the shape of galaxies, but also alters their spatial distribution and induces the magnification bias, or cosmic magnification (e.g. Scranton et al. (2005) and references therein). It changes the observed galaxy overdensity to \(\delta_{g}^{L}=\delta_{g}+g(F)\kappa\). The function \(g(F)=2(-d\ln N(>F)/d\ln F-1)\) is determined by the logarithmic slope of the (unlensed) galaxy luminosity function \(N(>F)\) and is in principle measurable.
The magnification bias affects both \\(C_{ij}^{(1)}\\), through a subtle source-lens coupling (Hamana, 2001), and \\(C_{ij}^{(3)}\\). However, these impacts are negligible, in the context of this paper. The magnification bias has a relatively larger effect on \\(C_{ii}^{(2)}\\) and modifies Eq. 8 to
\\[C_{ii}^{(2)} = \\!\\!\\!C_{ii}^{gG}+C_{ii}^{gI}+g_{i}(C_{ii}^{GG}+C_{ii}^{GI}) \\tag{26}\\] \\[= \\!\\!\\!\\left[C_{ii}^{gG}+g_{i}C_{ii}^{IG}\\right]+\\left[C_{ii}^{gI} +g_{i}C_{ii}^{GG}\\right]\\.\\]
Here \\(g_{i}\\) is the averaged \\(g(F)\\) over galaxies in the \\(i\\)-th redshift bin, \\(g_{i}=\\langle g_{i}(F)\\rangle\\). \\(g(F)\\) is of order unity (e.g. Scranton et al., 2005). However, since it changes sign from bright end of the luminosity function to the faint end, We expect that the averaged \\(g_{i}\\) to be smaller than 1 for sufficiently deep surveys, \\(g_{i}<1\\).
Our goal is to measure \(C_{ii}^{gI}\), now with new contaminations from the magnification bias. We can apply the same weighting as in the estimator Eq. 15 here. On one hand, both \(C_{ii}^{gI}\) and \(C_{ii}^{GG}\) are unchanged by this weighting. On the other hand, both \(C_{ii}^{GI}\) and \(C_{ii}^{gG}\) are reduced by virtually the same factor \(1-Q\). These behaviors mean that the estimator Eq. 15 eliminates the combination \(C_{ii}^{gG}+g_{i}C_{ii}^{IG}\) and measures the combination \(C_{ii}^{gI}+g_{i}C_{ii}^{GG}\), in which the term \(g_{i}C_{ii}^{GG}\) contaminates the I-g measurement.
\\(g_{i}C_{ii}^{GG}\\) can not be eliminated completely, due to various sources of error. An obvious one is the measurement error in \\(g(F)\\). At bright end, the galaxy number density drops exponentially and lensing modifies \\(N(>F)\\) significantly for its steep slope. At faint end, the flux measurement noise is large. Catastrophic photo-z error is also an issue (Schneider et al., 2000). We will not estimate these errors for realistic surveys. Instead, we ask how stringent the requirement on the \\(g_{i}\\) and \\(C_{ii}^{GG}\\) measurements should be in order to make the impact of the magnification bias negligible.
Suppose the \\(g_{i}\\) measurement has an error \\(\\delta g_{i}\\) and the \\(C_{ii}^{GG}\\) measurement has an error \\(\\delta C_{ii}^{GG}\\), the induced fractional error in \\(C_{ij}^{GG}\\) measurement is
\\[\\frac{W_{ij}\\Delta_{i}}{b_{i}}\\frac{\\delta g_{i}C_{ii}^{GG}+g_{i} \\delta C_{ii}^{GG}}{C_{ij}^{GG}}\\] \\[< \\frac{W_{ij}\\Delta_{i}}{b_{i}}\\left(\\left|\\delta g_{i}\\right|+ \\left|g_{i}\\frac{\\delta C_{ii}^{GG}}{C_{ii}^{GG}}\\right|\\right)\\] \\[= \\,O(10^{-3})\\left(\\left|\\frac{\\delta g_{i}}{0.1}\\right|+\\left| \\frac{g_{i}\\delta C_{ii}^{GG}/C_{ii}^{GG}}{10\\%}\\right|\\right)\\.\\]
The above relation holds since \(C_{ii}^{GG}<C_{ij}^{GG}\) (\(i<j\)), \(b_{i}=O(1)\) and \(W_{ij}\Delta_{i}=O(10^{-2})\). (1) If \(g_{i}\) itself is small (\(\left|g_{i}\right|\lesssim 0.1\)), then there is no need to correct for the \(g_{i}C_{ii}^{GG}\) term, since its influence is at the level of 0.1% and thus negligible. (2) If \(g_{i}\) is large, but it can be measured with an accuracy of \(\pm 0.1\), and if \(C_{ii}^{GG}\) can be measured with 10% accuracy, the magnification bias induced error will be \(O(10^{-3})\). It can thus be safely neglected, compared to the minimum statistical error in \(C_{ii}^{GG}\) (Fig. 3) or to other residual errors of the self-calibration technique (Fig. 3 & 4). Direct measurement of \(g_{i}\) from the observed (lensed) galaxy flux distribution in the redshift bin and the approximation \(C_{ii}^{(1)}\simeq C_{ii}^{GG}\) likely meet the requirement, unless the II contamination is larger than 10%. (3) If the II contamination is larger than 10% and the measurement of \(C_{ii}^{GG}\) is heavily polluted, a more complicated method may work. We can split galaxies into flux bins and perform the above analysis on each flux bin. Since \(g(F)\) changes in a known way across these flux bins, we are in principle able to eliminate the \(g_{i}C_{ii}^{GG}\) term by combining all these measurements. For example, one can find an appropriate weighting function \(W(F)\) such that \(\langle g(F)W(F)\rangle=0\). Although this method requires a more accurate \(g(F)\) measurement, it does not require measurement of \(C_{ii}^{GG}\) and thus avoids the II contamination and other associated errors.
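Plugging ballpark numbers into the inequality above makes the bound explicit; \(W_{ij}\Delta_{i}=O(10^{-2})\) and \(b_{i}=O(1)\) are the estimates already used in this paper, while \(\delta g_{i}\) and \(\delta C_{ii}^{GG}/C_{ii}^{GG}\) are the tolerances just discussed.

```python
# The order-of-magnitude bound below Eq. 26; delta_g = 0.1 and a 10% error
# on C_ii^{GG} are the tolerances discussed in the text.
Wij_Di, b_i = 1e-2, 1.0  # paper's ballpark: W_ij * Delta_i = O(1e-2), b_i = O(1)
delta_g, g_i, frac_dC_GG = 0.1, 1.0, 0.10
print((Wij_Di / b_i) * (abs(delta_g) + abs(g_i * frac_dC_GG)))  # O(1e-3)
```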
Based on the above arguments, we expect that our self-calibration technique is safe against the magnification bias, although extra care is indeed required.
### Catastrophic photo-z error
The numerical calculations we perform in this paper only consider a Gaussian photo-z PDF. Observationally, the photo-z PDF is more complicated, with non-negligible outliers (e.g. Jouvel et al., 2009; Bernstein & Huterer, 2009). The existence of this catastrophic error affects the self-calibration technique in two ways. (1) It affects the accuracy of the \(Q\) estimation. (2) It affects the scaling relation Eq. 10. As shown in the appendix §A and further discussed in §3.3, a key condition in deriving Eq. 10 is that the true galaxy distribution in a given photo-z bin is sufficiently narrow and smooth. Catastrophic errors thus likely degrade the scaling relation (Eq. 10).4
Footnote 4: However, some forms of catastrophic error bring a better match in the redshift evolution of the integrands of Eq. A1 & A3 and thus can actually improve the accuracy of the scaling relation.
From the appendix B, catastrophic error introduces a bias to \(Q\), mainly through its impact on \(\eta\). Stage IV lensing projects need to control the outlier rate \(f_{\rm cat}\) to \(\sim 0.1\%\) accuracy (Hearin et al., 2010) in order for the induced systematical errors to be sub-dominant. If this is the case, we are able to perturb the photo-z PDF \(p(z|z^{P})\) in Eq. B5. We choose \(|z-z^{P}|>\Delta\) as the criterion for a catastrophic error and then have \(f_{\rm cat}=\int_{0}^{z^{P}-\Delta}p(z|z^{P})dz+\int_{z^{P}+\Delta}^{\infty}p(z|z^{P})dz\). Since \(f_{\rm cat}\ll 1\), from Eq. B5, we find that the induced bias is \(\delta Q=O(f_{\rm cat})\). As long as the goal \(|f_{\rm cat}|\lesssim 0.1\%\) can be achieved, the induced error in \(Q\) is less than 1% and hence not a significant source of error in the self-calibration. Furthermore, we are able to infer the statistically averaged photo-z PDF through self- and cross-calibrations of photo-z errors, even in the presence of large catastrophic errors (e.g. Schneider et al., 2006; Newman, 2008; Zhang et al., 2009; Benjamin et al., 2010). Since \(Q\) can be predicted given the photo-z PDF, we are able to reduce the possible bias in \(Q\), even if the actual \(f_{\rm cat}\gtrsim 0.1\%\).
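For a concrete sense of scale, the sketch below integrates a photo-z PDF modeled as a Gaussian core plus an assumed flat outlier tail over \(|z-z^{P}|>\Delta\); the tail fraction and \(\Delta\) are illustrative choices, not survey measurements.

```python
# The outlier rate f_cat = integral of p(z|z^P) over |z - z^P| > Delta,
# for a Gaussian core plus an assumed flat tail of fractional weight `tail`.
from scipy.stats import norm

def f_cat(zP, Delta, sigma0=0.05, tail=1e-3, zmax=4.0):
    sig = sigma0 * (1 + zP)
    core = 2.0 * norm.sf(Delta / sig)                 # Gaussian mass outside +/- Delta
    flat = tail * max(0.0, 1.0 - 2.0 * Delta / zmax)  # tail mass outside the window
    return (1.0 - tail) * core + flat

print(f_cat(zP=1.0, Delta=0.3))  # ~3e-3: the 0.1% Stage IV goal is stringent
```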
The catastrophic error also affects the scaling relation. It biases both \(C^{IG}\), through the terms \(W_{j}\) and \(n_{i}\) in Eq. A1, and \(C^{Ig}\), through the term \(n_{i}^{2}\) in Eq. A3. Part of the effect cancels when taking the ratio of the two. The residual error is also of order \(O(f_{\rm cat})\). Hence, from the above order of magnitude estimate, the bias induced by catastrophic error is likely sub-dominant to the major systematical error \(\delta f_{ij}^{(c)}\) in the scaling relation. A more sophisticated analysis is required to robustly quantify its impact.
### Stochastic galaxy bias
A key assumption in Eq. 10 is the deterministic galaxy bias with respect to the matter distribution. In reality there is stochasticity in galaxy distributions, which can both cause random scatters and systematic shift in the scaling relation. Quantifying its impact is beyond our capability, since the galaxy stochasticity, the intrinsic alignment and correlation between the two are not well understood. For example, the galaxy bias is likely correlated with the strength of the intrinsic alignment, since both depend on the type of galaxies. Nonetheless, there are hopes to control its impact. (1) The galaxy stochasticity can in principle be measured (e.g. Pen, 1998; Fan, 2003; Bonoli & Pen, 2008; Zhang, 2008) and modeled (e.g. Baldauf et al., 2009). Measurement and modeling of the intrinsic alignment can be improved too (e.g. Hirata & Seljak, 2004; Okumura et al., 2008; Schneider & Bridle, 2009). (2) Recently Baldauf et al. (2009) showed that, by proper weighting and modeling, the galaxy stochasticity can be suppressed to 1% level to \\(k\\sim 1h/\\)Mpc. Thus there is promise to control the error induced by the galaxy stochasticity in the self-calibration to be \\(\\sim 1\\%\\times f_{ij}^{I}\\). This error is sub-dominant to other systematical errors, especially the one induced by the scaling relation inaccuracy (SS3.3).
### Cosmological uncertainties
The self-calibration techniques require evaluation of \\(W_{ij}\\) in Eq. 10 and \\(Q\\) in Eq. 15. Both evaluations involve the cosmology-dependent lensing kernel \\(W_{L}(z_{L},z_{G})\\propto\\Omega_{m}(1+z_{L})\\chi_{L}(1-\\chi_{L}/\\chi_{G})\\). Fortunately, we do not need strong cosmological priors to evaluate it. \\(\\Omega_{m}\\) has already been measured to 5% accuracy (Komatsu et al., 2010) and will be measured to below 1% accuracy by Planck.5 Stage III BAO and supernova surveys will measure the distance-redshift relation to 1% accuracy (e.g. Albrecht et al., 2006). So if we take these priors, uncertainties in cosmology can at most bias the self-calibration at percent level accuracy, negligible to the identified \\(\\sim 10\\%\\) scaling relation error in SS3.3. We need further investigation to robustly quantify the impact of uncertainties in cosmological parameters.
Footnote 5: [http://www.rssd.esa.int/index.php?project=PLANCK&page=perf_top](http://www.rssd.esa.int/index.php?project=PLANCK&page=perf_top)
## 5. Discussions and Summary
We have proposed a self-calibration technique to eliminate the GI contamination in cosmic shear measurement. It contains two original ingredients. (1) This technique is able to extract the I-g cross correlation from the galaxy density-ellipticity correlation of the same redshift bin in the given lensing survey with photo-z measurement. (2) It then converts this I-g measurement into a measure of the GI correlation through a generic scaling relation. The self-calibration technique has only moderate requirement on the photo-z accuracy and results in little loss of cosmological information. We have performed simple estimation on the performance of this self-calibration technique, which suggests that it can either render the systematical GI contamination into a negligible statistical error, or suppress the GI contamination by a factor of \\(\\sim 10\\), whichever is larger.
The GI self-calibration can be combined with the photo-z self-calibration (Zhang et al., 2009) for a joint self-calibration against both the GI contamination and the photo-z outliers. This combination does not over-use the information in weak lensing surveys. The GI self-calibration mainly use the galaxy ellipticity-density correlation in the same redshift bin. On the other hand, the photo-z self-calibration mainly relies on the cross correlation between galaxy ellipticity-density correlation between different redshift bins.
More robust and self-consistent evaluation of the self-calibration (GI and photo-z) performance requires comprehensive analysis of all relevant errors discussed in SS3 & 4, and possibly more, along with realistic model of galaxy bias and intrinsic alignment. We expect that our self-calibration technique will still work under this more complicated and more realistic situation, since the method to extract the I-g correlation and the scaling relation between I-g and I-G are robust against the complexities mentioned above. Recently, Joachimi & Bridle (2009); Kirk et al. (2010) proposed simultaneous fittings of cosmological parameters, galaxy bias and intrinsic alignment. Our self-calibration technique can be incorporated in a similar scheme.
Our self-calibration technique only uses the shape-density and density-density measurements in the _same_ redshift bin to calibrate the intrinsic alignment. Lensing surveys contain more information on the intrinsic alignment, in the shape-shape correlation of the _same_ and between _different_ redshift bins, and shape-density correlation between _different_ redshift bins. These information has been incorporated by Joachimi & Schneider (2008, 2009); Joachimi & Bridle (2009); Okumura & Jing (2009); Kirk et al. (2010); Shi et al. (2010)) to calibrate the intrinsic alignment. These techniques are complementary to each other and shall be combined together for optimal calibration.
_Acknowledgments_: The author thanks Yipeng Jing and Xiaohu Yang for useful information on galaxy intrinsic ellipticity. The author thanks Gary Bernstein and the anonymous referees for many useful suggestions and comments. This work is supported by the one-hundred-talents program of the Chinese academy of science (CAS), the national science foundation of China (grant No. 10821302 & 10973027) and the 973 program grant No. 2007CB815401.
## References
* The Dark Energy Task Force Report (2009) The Dark Energy Task Force Report. Andreas Albrecht, et al. arXiv:astro-ph/0609591
* Bacon et al. (2000) Bacon, D. J., Refregier, A. R., & Ellis, R. S. 2000, MNRAS, 318, 625
* Baldauf et al. (2009) Baldauf, T., Smith, R. E., Seljak, U., & Mandelbaum, R. 2009, arXiv:0911.4973
* Bartelmann (1995) Bartelmann, M. 1995, A&A, 298, 661
* Benjamin et al. (2010) Benjamin, J., Van Waerbeke, L., Menard, B., & Kilbinger, M. 2010, arXiv:1002.2266
* Bernstein (2009) Bernstein, G. M. 2009, Astrophys. J., 695, 652
* Bernstein & Huterer (2009) Bernstein, G., & Huterer, D. 2009, arXiv:0902.2782
* Bonoli & Pen (2008) Bonoli, S., & Pen, U.-L. 2008, arXiv:0810.0273
* Bridle & King (2007) Bridle, S., & King, L. 2007, New Journal of Physics, 9, 444
* Brown et al. (2002) Brown, M. L., Taylor, A. N., Hambly, N. C., & Dye, S. 2002, MNRAS, 333, 501
* Catelan et al. (2001) Catelan, P., Kamionkowski, M., & Blandford, R. D. 2001, MNRAS, 320, L7
* Crittenden et al. (2001) Crittenden, R. G., Natarajan, P., Pen, U.-L., & Theuns, T. 2001, Astrophys. J., 559, 552
* Crittenden et al. (2002) Crittenden, R. G., Natarajan, P., Pen, U.-L., & Theuns, T. 2002, Astrophys. J., 568, 20
* Croft & Metzler (2000) Croft, R. A. C., & Metzler, C. A. 2000, Astrophys. J., 545, 561
* Fan (2003) Fan, Z. 2003, Astrophys. J., 594, 33
* Fu et al. (2008) Fu, L., et al. 2008, A&A, 479, 9
* Guo & Jing (2009) Guo, H., & Jing, Y. P. 2009, Astrophys. J., 702, 425
* Hamana (2001) Hamana, T. 2001, MNRAS, 326, 326
* Hearin et al. (2010) Hearin, A. P., Zentner, A. R., Ma, Z., & Huterer, D. 2010, arXiv:1002.3383
* Heavens et al. (2000) Heavens, A., Refregier, A., & Heymans, C. 2000, MNRAS, 319, 649
* Heymans & Heavens (2003) Heymans, C., & Heavens, A. 2003, MNRAS, 339, 711
* Heymans et al. (2004) Heymans, C., Brown, M., Heavens, A., Meisenheimer, K., Taylor, A., & Wolf, C. 2004, MNRAS, 347, 895
* Heymans et al. (2006) Heymans, C., White, M., Heavens, A., Vale, C., & van Waerbeke, L. 2006, MNRAS, 371, 750
* Hirata et al. (2004) Hirata, C. M., et al. 2004, MNRAS, 353, 529
* Hirata & Seljak (2004) Hirata, C. M., & Seljak, U. 2004, Phys. Rev. D, 70, 063526
* Hirata et al. (2007) Hirata, C. M., Mandelbaum, R., Ishak, M., Seljak, U., Nichol, R., Pimbblet, K. A., Ross, N. P., & Wake, D. 2007, MNRAS, 381, 1197
* Hoekstra & Jain (2008) Hoekstra, H., & Jain, B. 2008, arXiv:0805.0139
* Hu & Jain (2004) Hu, W., & Jain, B. 2004, Phys. Rev. D, 70, 043009
* Jing (2002) Jing, Y. P. 2002, MNRAS, 335, L89
* Jing et al. (2006) Jing, Y. P., Zhang, P., Lin, W. P., Gao, L., & Springel, V. 2006, ApJ, 640, L119Joachimi, B., & Schneider, P. 2008, A&A, 488, 829
* () Joachimi, B., & Schneider, P. 2009, arXiv:0905.0393
* () Joachimi, B., & Bridle, S. L. 2009, arXiv:0911.2454
* () Joavel, S., et al. 2009, arXiv:0902.0625
* () Kaiser, N., Wilson, G., & Luppino, G. A. 2000, arXiv:astro-ph/0003338
* () Kang, X., van den Bosch, F. C., Yang, X., Mao, S., Mo, H. J., Li, C., & Jing, Y. P. 2007, MNRAS, 378, 1531
* () King, L., & Schneider, P. 2002, A&A, 396, 411
* () King, L. J., & Schneider, P. 2003, A&A, 398, 23
* () Kirk, D., Bridle, S., & Schneider, M. 2010, arXiv:1001.3787
* () Komatsu, E., et al. 2009, ApJS, 180, 330
* () Komatsu, E., et al. 2010, arXiv: 1010.4538
* () Lee, J., & Pen, U.-L. 2000, ApJ, 532, L5
* () Lee, J., & Pen, U.-L. 2001, Astrophys. J., 555, 106
* () Lee, J., & Pen, U.-L. 2002, ApJ, 567, L111
* () Lee, J., & Pen, U.-L. 2007, ApJ, 670, L11
* () Mackey, J., White, M., & Kamionkowski, M. 2002, MNRAS, 332, 788
* () Mandelbaum, R., Hirata, C. M., Ishak, M., Seljak, U., & Brinkmann, J. 2006, MNRAS, 367, 611
* () Munshi, D., Valageas, P., van Waerbeke, L., & Heavens, A. 2008, Physics Reports, 462, 67
* () Newman, J. A. 2008, Astrophys. J., 684, 88
* () Okumura, T., Jing, Y. P., & Li, C. 2008, arXiv:0809.3790
* () Okumura, T., & Jing, Y. P. 2009, ApJ, 694, L83
* () Pen, U.-L. 1998, Astrophys. J., 504, 601
* () Pen, U.-L., Lee, J., & Seljak, U. 2000, ApJ, 543, L107
* () Refregier, A. 2003, Annual Review of Astronomy & Astrophysics, 41, 645
* () Rudd, D. H., Zentner, A. R., & Kravtsov, A. V. 2008, Astrophys. J., 672, 19
* () Schneider, P., King, L., & Erben, T. 2000, A&A, 353, 41
* () Schneider, M., Knox, L., Zhan, H., & Connolly, A. 2006, Astrophys. J., 651, 14
* () Schneider, M. D., & Bridle, S. 2009, arXiv:0903.3870
* () Scranton, R., et al. 2005, Astrophys. J., 633, 589
* () Sheldon, E. S., et al. 2004, Astronomical Journal, 127, 2544
* () Shi, X., Joachimi, B., & Schneider, P. 2010, arXiv:1002.0693
* () Stebbins, A. 1996, arXiv:astro-ph/9609149
* () Szapudi, I., & Pan, J. 2004, Astrophys. J., 602, 26
* () Takada, M., & White, M. 2004, ApJ, 601, L1
* () Van Waerbeke, L., et al. 2000, A&A, 358, 30
* () Wang, Y., Park, C., Yang, X., Choi, Y.-Y., & Chen, X. 2008, arXiv:0810.3359
* () White, M. 2004, Astroparticle Physics, 22, 211
* () Wittman, D. M., Tyson, J. A., Kirkman, D., Dell'Antonio, I., & Bernstein, G. 2000, Nature (London), 405, 143
* () Yang, X., Mo, H. J., & van den Bosch, F. C. 2003, MNRAS, 339, 1057
* () Yang, X., van den Bosch, F. C., Mo, H. J., Mao, S., Kang, X., Weinmann, S. M., Guo, Y., & Jing, Y. P. 2006, MNRAS, 369, 1293
* () Zhan, H., & Knox, L. 2004, ApJ, 616, L75
* () Zhan, H., & Knox, L. 2006, arXiv:astro-ph/0611159
* () Zhang, P. 2008, arXiv:0802.2416
* () Zhang, P., Pen, U.-L., & Bernstein, G. 2010, MNRAS, 405, 359
* () Zha, P. 2010, MNRAS letters, in press. arXiv:1003.5219.
* () Zhang, Y., Yang, X., Faltenbacher, A., Springel, V., Lin, W., & Wang, H. 2009, Astrophys. J., 706, 747
* () Zheng, Z., et al. 2005, Astrophys. J., 633, 791
## Appendix A The Scaling Relation
We derive the scaling relation (Eq. 10) under the Limber approximation. Under this approximation, the 2D GI angular cross correlation power spectrum between the \\(i\\)-th and \\(j\\)-th redshift bins is related to the 3D matter-intrinsic alignment cross correlation power spectrum \\(\\Delta_{mI}^{2}(k,z)\\) by
\\[\\frac{\\ell^{2}}{2\\pi}C_{ij}^{IG}(\\ell)=\\frac{\\pi}{\\ell}\\int_{0}^{\\infty}\\Delta_{ mI}^{2}\\left(k=\\frac{\\ell}{\\chi(z)},z\\right)W_{j}(z)\\chi(z)\\bar{n}_{i}(z)dz\\.\\] (A1)
Here,
\\[W_{j}(z_{L})\\equiv\\int_{0}^{\\infty}W_{L}(z_{L},z_{G})\\bar{n}_{j}(z_{G})dz_{G}\\.\\] (A2)
As a reminder, \\(\\bar{n}_{i}(z)\\) is the true redshift distribution of galaxies in the \\(i\\)-th redshift bin and \\(W_{L}(z_{L},z_{G})\\) is the lensing kernel. The integral limit runs from zero to infinite, to take into account the photo-z errors. On the other hand, the 2D angular power spectrum between the intrinsic alignment and galaxy number density in the \\(i\\)-th redshift bin is
\\[\\frac{\\ell^{2}}{2\\pi}C_{ii}^{Ig}(\\ell)=\\frac{\\pi}{\\ell}\\int_{0}^{\\infty}\\Delta_ {gI}^{2}\\left(k=\\frac{\\ell}{\\chi(z)},z\\right)n_{i}^{2}(z)\\chi(z)\\frac{dz}{d \\chi}dz=b_{i}(\\ell)\\frac{\\pi}{\\ell}\\int_{0}^{\\infty}\\Delta_{mI}^{2}\\left(k= \\frac{\\ell}{\\chi(z)},z\\right)n_{i}^{2}(z)\\chi(z)\\frac{dz}{d\\chi}dz\\.\\] (A3)
\\(\\Delta_{gI}^{2}(k,z)\\) is the 3D galaxy-galaxy intrinsic alignment power spectrum. In the last relation we have adopted a deterministic galaxy bias \\(b_{g}(k,z)\\) with respect to matter distribution and thus \\(\\Delta_{gI}^{2}(k,z)=b_{g}(k,z)\\Delta_{mI}^{2}(k,z)\\). \\(b_{i}(\\ell)\\) is defined by the above equation. It is the average of \\(b_{g}(k=\\ell/\\chi,z)\\) over the redshift bin. As long as \\(b_{g}(k,z)\\) does not change dramatically, we have \\(b_{i}(\\ell)=b_{g}(k=\\ell/\\chi_{i},\\bar{z}_{i})\\), to a good approximation.
In the limit that the true redshift distribution of galaxies in the \\(i\\)-th redshift bin is narrow, \\(\\Delta_{mI}^{2}\\) (\\(\\Delta_{gI}^{2}\\)) changes slowly and can be approximated as \\(\\Delta_{mI}^{2}(k=\\ell/\\chi_{i},\\bar{z}_{i})\\) (\\(\\Delta_{gI}^{2}(k=\\ell/\\chi_{i},\\bar{z}_{i})\\)). We then have the following approximations,
\\[\\frac{\\ell^{2}}{2\\pi}C_{ij}^{IG}(\\ell)\\simeq\\frac{\\pi}{\\ell}\\Delta_{mI}^{2} \\left(\\frac{\\ell}{\\chi_{i}},\\bar{z}_{i}\\right)W_{ij}\\chi_{i}\\,\\] (A4)
and
\\[\\frac{\\ell^{2}}{2\\pi}C_{ii}^{Ig}(\\ell)\\simeq b_{i}(\\ell)\\frac{\\pi}{\\ell} \\Delta_{mI}^{2}\\left(\\frac{\\ell}{\\chi_{i}},\\bar{z}_{i}\\right)\\frac{\\chi_{i}}{ \\Delta_{i}}\\.\\] (A5)
The quantity \\(W_{ij}\\) and \\(\\Delta_{i}\\) are already defined by Eq. 11 & 12. Based on the above two equations, we derive Eq. 10, whose accuracy is quantified in SS3.3.
B: Evaluating the \\(Q\\) parameter
To derive the relation between \\(C_{ii}^{Gg}|_{S}\\) and \\(C_{ii}^{Gg}\\), namely, \\(Q\\equiv C_{ii}^{Gg}|_{S}/C_{ii}^{Gg}\\), we will begin with the real space angular correlation function. We denote the angular correlation function between the shear at \\(z_{G}^{P}\\) and galaxies at \\(z_{g}^{P}\\) as \\(w^{Gg}(\\theta;z_{G}^{P},z_{g}^{P})\\). Its average over the distribution of galaxies in the \\(i\\)-th redshift bin is
\\[w_{ii}^{Gg}(\\theta) = \\int_{\\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}+\\Delta z_{i}/2}dz_{G }^{P}\\int_{\\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}+\\Delta z_{i}/2}dz_{g}^{P}w^ {Gg}(\\theta;z_{G}^{P},z_{g}^{P})n_{i}^{P}(z_{G}^{P})n_{i}^{P}(z_{g}^{P})dz_{G}^{ P}dz_{g}^{P}\\] (B1) \\[= \\int_{\\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}+\\Delta z_{i}/2}dz_{ G}^{P}\\int_{\\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}+\\Delta z_{i}/2}dz_{g}^{P} \\int_{0}^{\\infty}dz_{G}\\int_{0}^{\\infty}dz_{g}\\left[w^{Gg}(\\theta;z_{G},z_{g}) p(z_{G}|z_{G}^{P})p(z_{g}|z_{g}^{P})n_{i}^{P}(z_{G}^{P})n_{i}^{P}(z_{g}^{P}) \\right]\\.\\]
Here, \\(p(z|z^{P})\\) is the photo-z PDF. As a reminder, we have normalized such that
\\[\\int_{\\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}-\\Delta z_{i}/2}dz^{P}n_{i}^{P}( z^{P})=\\int_{0}^{\\infty}n_{i}(z)dz=1\\.\\]
Since
\\[w^{Gg}(\\theta;z_{G},z_{g})=\\int\\langle\\delta_{m}(\\theta^{{}^{\\prime}};z_{L}) \\delta_{g}(\\theta^{{}^{\\prime}}+\\theta;z_{g})\\rangle W_{L}(z_{L},z_{G})dz_{L}\\,\\] (B2)
where \\(\\langle\\cdots\\rangle\\) denotes the ensemble average and in practice denotes equivalently the average over \\(\\theta^{{}^{\\prime}}\\) (the ergodicity assumption), we have
\\[w_{ii}^{Gg}(\\theta) = \\int_{\\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}+\\Delta z_{i}/2}dz_ {G}^{P}\\int_{\\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}+\\Delta z_{i}/2}dz_{g}^{P }\\int_{0}^{\\infty}dz_{G}\\int_{0}^{\\infty}dz_{g}\\int_{0}^{\\infty}dz_{L}\\] \\[\\times\\left[\\langle\\delta_{m}(\\theta^{{}^{\\prime}};z_{L})\\delta_{ g}(\\theta^{{}^{\\prime}}+\\theta;z_{g})\\rangle W_{L}(z_{L},z_{G})p(z_{G}|z_{G}^{P})p(z_{g}|z_{ g}^{P})n_{i}^{P}(z_{G}^{P})n_{i}^{P}(z_{g}^{P})\\right]\\] \\[= \\int_{0}^{\\infty}dz_{L}\\int_{0}^{\\infty}dz_{g}\\left[\\langle\\delta _{m}(\\theta^{{}^{\\prime}};z_{L})\\delta_{g}(\\theta^{{}^{\\prime}}+\\theta;z_{g}) \\rangle W_{i}(z_{L})n_{i}(z_{g})\\right]\\.\\]
Notice that the lensing kernel \\(W_{L}(z_{L},z_{G})=0\\) when \\(z_{L}\\geq z_{G}\\). The average over all pairs with \\(z_{G}^{P}<z_{g}^{P}\\) gives the other correlation function,
\\[w_{ii}^{Gg}|_{S}(\\theta) = \\int\\langle\\delta_{m}(\\theta^{{}^{\\prime}};z_{L})\\delta_{g}( \\theta^{{}^{\\prime}}+\\theta;z_{g})\\rangle W_{i}(z_{L})n_{i}(z_{g})dz_{L}dz_{g} \\eta(z_{L},z_{g})\\.\\] (B4)
Here,
\\[\\eta(z_{L},z_{g})=\\frac{2\\int_{\\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}+\\Delta z _{i}/2}2\\,dz_{G}^{P}\\int_{\\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}+\\Delta z_{i} /2}dz_{g}^{P}\\int_{0}^{\\infty}dz_{G}W_{L}(z_{L},z_{G})p(z_{G}|z_{G}^{P})p(z_{g}| z_{g}^{P})S(z_{G}^{P},z_{g}^{P})n_{i}^{P}(z_{G}^{P})n_{i}^{P}(z_{g}^{P})}{\\int_{ \\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}-\\Delta z_{i}/2}dz_{G}^{P}\\int_{\\bar{ z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}+\\Delta z_{i}/2}dz_{g}^{P}\\int_{0}^{\\infty}dz_{G}W_{L}(z_{L},z_{G})p(z_{G}| z_{G}^{P})p(z_{g}|z_{g}^{P})n_{i}^{P}(z_{G}^{P})n_{i}^{P}(z_{g}^{P})}\\ \\,\\] (B5)
where the selection function \\(S(z_{G}^{P},z_{g}^{P})=1\\) if \\(z_{G}^{P}<z_{g}^{P}\\) and \\(S(z_{G}^{P},z_{g}^{P})=0\\) otherwise. The factor 2 comes from the relation
\\[\\frac{\\int_{\\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}+\\Delta z_{i}/2}^{\\bar{z}_{i }+\\Delta z_{i}/2}d_{G}^{P}\\int_{\\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}+ \\Delta z_{i}/2}d_{G}^{P}p(z_{G}|z_{G}^{P})p(z_{g}|z_{g}^{P})n_{i}^{P}(z_{G}^{P}) n_{i}^{P}(z_{g}^{P})}{\\int_{\\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}+\\Delta z_{i}/2}d_{G}^{P} \\int_{\\bar{z}_{i}-\\Delta z_{i}/2}^{\\bar{z}_{i}+\\Delta z_{i}/2}d_{G}^{P}p(z_{G}| z_{G}^{P})p(z_{g}|z_{g}^{P})S(z_{G}^{P},z_{g}^{P})n_{i}^{P}(z_{G}^{P})n_{i}^{P}(z_{g}^{P})} =2\\.\\] (B6)
The power spectra \\(C_{ii}^{Gg}\\) and \\(C_{ii}^{Gg}|_{S}\\) are the Fourier transform of \\(w_{ii}^{Gg}\\) and \\(w_{ii}^{Gg}|_{S}\\), respectively. To evaluate these power spectra, we again follow the Limber approximation, which states that the dominant correlation signal comes from \\(z_{L}=z_{g}\\). We then have
\\[\\frac{\\ell^{2}C_{ii}^{Gg}(\\ell)}{2\\pi} = \\frac{\\pi}{\\ell}\\int_{0}^{\\infty}\\Delta_{mg}^{2}\\left(k=\\frac{\\ell} {\\chi(z)},z\\right)\\chi(z)W_{i}(z)n_{i}(z)dz\\,\\] (B7) \\[\\frac{\\ell^{2}C_{ii}^{Gg}|_{S}(\\ell)}{2\\pi} = \\frac{\\pi}{\\ell}\\int_{0}^{\\infty}\\Delta_{mg}^{2}\\left(k=\\frac{\\ell} {\\chi(z)},z\\right)W_{i}(z)\\chi(z)n_{i}(z)\\eta(z,z_{g}=z)dz\\.\\] (B8)
The quantity that we want to evaluate is \\(Q(\\ell)\\equiv C^{Gg}|_{S}(\\ell)/C^{Gg}(\\ell)\\). Since it is the ratio of the two power spectra, in which \\(\\Delta_{mg}^{2}\\), \\(W_{i}\\) and \\(n_{i}\\) roughly cancel, to the first order, \\(Q\\simeq\\eta\\). The value of \\(\\eta\\) is determined by the relative contribution to \\(C^{Gg}\\) from pairs with \\(z_{G}^{P}<z_{g}^{P}\\) and pairs with \\(z_{G}^{P}>z_{g}^{P}\\). If the two sets have the same contribution, \\(\\eta=1\\) and \\(Q=1\\). In the limit \\(\\sigma_{P}\\gg\\Delta z\\), the contribution from the pair with \\(z_{G}^{P}<z_{g}^{P}\\) to \\(C_{ii}^{Gg}\\) approaches to that of the pair with \\(z_{G}^{P}>z_{g}^{P}\\). So we have \\(\\eta\\to 1\\) and \\(Q\\to 1\\). In this limiting case, we will be no longer able to use this weighting scheme to separate \\(C^{Gg}\\) and \\(C^{Ig}\\). On the other hand, if \\(\\sigma_{P}\\ll\\Delta z\\), the pair with \\(C: The statistical error in extracting \\(C^{Ig}_{ii}\\)
For the convenience, we will work on the pixel space to derive the statistical error in extracting \\(C^{Ig}_{ii}\\) from the galaxy shape (ellipticity)-density measurement in the \\(i\\)-th photo-z bin. For a given redshift bin, we first pixelize the data into sufficiently fine (and uniform) pixels of photo-z and angular position. Each pixel, with label \\(\\alpha\\), has a corresponding photo-z \\(z^{P}_{\\alpha}\\) and corresponding angular position \\(\\vec{\\theta}_{\\alpha}\\). Each pixel also has a measured overdensity \\(\\delta_{\\alpha}+\\delta^{N}_{\\alpha}\\) and a measured \"shear\", \\(\\kappa_{\\alpha}+I_{\\alpha}+\\kappa^{N}_{\\alpha}\\). Here, the superscript \"N\" denotes the measurement noise, e.g., the shot noise. In total, there are \\(N_{P}\\) pixels. Following the definition of the angular power spectrum, we have
\\[C^{(2)}(\\ell) = N_{P}^{-2}\\sum_{\\alpha\\beta}\\left[\\delta_{\\alpha}+\\delta^{N}_{ \\alpha}\\right]\\left[\\kappa_{\\beta}+I_{\\beta}+\\kappa^{N}_{\\beta}\\right]\\exp \\left[i\\vec{\\ell}\\cdot(\\vec{\\theta}_{\\alpha}-\\vec{\\theta}_{\\beta})\\right]\\,\\] \\[C^{(2)}(\\ell)|_{S} = 2N_{P}^{-2}\\sum_{\\alpha\\beta}\\left[\\delta_{\\alpha}+\\delta^{N}_{ \\alpha}\\right]\\left[\\kappa_{\\beta}+I_{\\beta}+\\kappa^{N}_{\\beta}\\right]\\exp \\left[i\\vec{\\ell}\\cdot(\\vec{\\theta}_{\\alpha}-\\vec{\\theta}_{\\beta})\\right] \\times S_{\\alpha\\beta}\\.\\] (C1)
Here, \\(S_{\\alpha\\beta}=1\\) when \\(z^{P}_{\\alpha}>z^{P}_{\\beta}\\) and vanishes otherwise. In the limit that \\(N_{P}\\gg 1\\), \\(\\sum_{\\alpha\\beta}S_{\\alpha\\beta}=N_{P}^{2}/2\\). Namely, the average \\(\\overline{S_{\\alpha\\beta}}=1/2\\).
The \\(C^{Ig}\\) measurement error, from Eq. 15, is
\\[\\delta C^{Ig} = \\] (C2) \\[= \\]
The last expression has utilized the relation \\(\\overline{S_{\\alpha\\beta}}=1/2\\) and the fact that the density-intrinsic alignment correlation does not depend on the ordering along the line-of-sightof galaxy pairs. The rms error is
\\[(\\Delta C^{Ig})^{2} = \\frac{1}{(1-Q)^{2}}N_{P}^{-4}\\sum_{\\alpha\\beta\\rho\\sigma}\\exp \\left[i\\vec{\\ell}\\cdot(\\vec{\\theta}_{\\alpha}-\\vec{\\theta}_{\\beta})\\right] \\exp\\left[-i\\vec{\\ell}\\cdot(\\vec{\\theta}_{\\rho}-\\vec{\\theta}_{\\sigma})\\right] (2S_{\\alpha\\beta}-Q)(2S_{\\rho\\sigma}-Q)\\]
* The first term \\(C^{gg}C^{GG}\\) is the cosmic variance arising from the lensing and galaxy density fluctuations. The \\(Q\\) dependence drops out, since both \\(C^{(2)}\\) and \\(C^{(2)}|_{S}\\) sample the same cosmic volume and share the identical (fractional) cosmic variance from this term. \\(C^{gg}C^{GG}\\) is a familiar term in the cosmic variance of the ordinary galaxy-galaxy lensing power spectrum. However, the other familiar term, \\(C^{gG,2}\\), does not show up here. This again is caused by the fact that both \\(C^{(2)}\\) and \\(C^{(2)}|_{S}\\) sample the same cosmic volume and the cosmic variances inducing \\(C^{gG,2}\\) cancel in the estimator Eq. 15.
* The last term \\(C^{gg,N}C^{GG,N}[1+1/(1-Q)^{2}]\\) is the contribution from the shot noise in the galaxy distribution and random shape shot noise in shear measurement. The \\(Q\\) dependence can be understood as follows. Such error in \\(C^{(2)}\\) has two contributions, \\(\\delta C_{A}\\) from pairs with \\(z_{g}^{P}>z_{G}^{P}\\) and \\(\\delta C_{B}\\) from pairs with \\(z_{g}^{P}\\leq z_{G}^{P}\\). The total error is \\((\\delta C_{A}+\\delta C_{B})/2\\). Since they come from different pairs, these two errors are uncorrelated (\\((\\delta C_{A}\\delta C_{B}^{*})=0\\)), but they have the same dispersion \\(\\langle|\\delta C_{A}|^{2}\\rangle=\\langle|\\delta C_{B}|^{2}\\rangle=2C^{gg,N}C^{GG,N}\\). The factor 2 here provides the correct rms noise in \\(C^{(2)}\\), which is \\(C^{gg,N}C^{GG,N}\\). Clearly the shot noise error in \\(C^{(2)}|_{S}\\) is \\(\\delta C_{A}\\). Plug the above relations into Eq. 15, we find that the shot noise contribution is indeed the last term in Eq. C4. Unlike the cosmic variance term, which does not rely on \\(Q\\), the shot noise term blows up when \\(Q\\to 1\\). This corresponds to the case that the photo-z error is too large to provide any useful information and thus we are no longer able to separate the Ig contribution form the Gg contribution.
* The middle term is the cross talk between cosmic variance and shot noise. One can find similar terms in usual cross correlation statistical error analysis.
Interestingly, when \\((1-Q)^{2}=1/3\\) and when \\(C^{II}\\ll C^{GG}\\), Eq. C4 reduces to
\\[(\\Delta C^{Ig})^{2}\\!\\simeq\\!(C^{gg}+2C^{gg,N})(C^{GG}+2C^{GG,N}),\\ \\ when\\ Q\\sim 1- \\sqrt{1/3}=0.423. \\tag{15}\\]
This expression is identical to the usual expression of cross correlation statistical error, expect for the factor 2. | The galaxy intrinsic alignment is a severe challenge to precision cosmic shear measurement. We propose to self-calibrate the induced gravitational shear-galaxy intrinsic ellipticity correlation (the GI correlation, Hirata & Seljak, 2004) in weak lensing surveys with photometric redshift measurement. (1) We propose a method to extract the intrinsic ellipticity-galaxy density cross correlation (I-g) from the galaxy ellipticity-density measurement in the same redshift bin. (2) We also find a generic scaling relation to convert the extracted I-g correlation to the demanded GI correlation. We perform concept study under simplified conditions and demonstrate its capability to significantly reduce the GI contamination. We discuss the impact of various complexities on the two key ingredients of the self-calibration technique, namely the method to extract the I-g correlation and the scaling relation between the I-g and the GI correlation. We expect none of them is likely able to completely invalidate the proposed self-calibration technique.
Subject headings:cosmology: gravitational lensing-theory: large scale structure | Give a concise overview of the text below. |
Zhaohuan Zhu1, Lee Hartmann1, and Charles Gammie 23
[email protected], [email protected], [email protected]
Footnote 1: affiliation: Dept. of Astronomy, University of Michigan, 500 Church St., Ann Arbor, MI 48105
Footnote 2: affiliation: Dept. of Astronomy, University of Illinois Urbana-Champaign, 1002 W. Green St., Urbana, IL 61801
Footnote 3: affiliation: Dept. of Physics, University of Illinois Urbana-Champaign
## 1 Introduction
The standard model of low-mass star formation posits the free-fall collapse of a protostellar molecular cloud core to a protostar plus disk during times of a few times \\(10^{5}\\) yr (e.g., Shu, Adams, & Lizano 1987), consistent with the statistics of protostellar objects in Taurus (Kenyon et al. 1990, 1994). To build up a star over these timescales requires a time-averaged infall rate of order \\(2\\times 10^{-6}-10^{-5}{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}\\), rates typically used in calculations of protostellar properties at the end of accretion (Stahler 1988; Hartmann, Cassen, & Kenyon 1997). The numerical simulations of dynamic star cluster formation by Bate et al. (2003) found that stars and brown dwarfs formed in burst lasting \\(\\sim 2\\times 10^{4}\\) years, implying infall rates of \\(\\sim 10^{-4}\\) to \\(10^{-5}{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}\\). However, the accretion luminosity implied by such infall rates is considerably higher than typical observed protostellarluminosities (Kenyon et al., 1990, 1994). This \"luminosity problem\" can be solved temporarily by piling up infalling matter in the circumstellar disk. Most of the mass must eventually be accreted onto the star, however. This requires major accretion events that are sufficiently short-lived that protostars are usually observed in quiescence.
This picture of highly time-dependent accretion is supported by observations. Individual knots in jets and Herbig-Haro objects, thought to be the result of outflows driven by accretion energy, argue for substantial disk variability (e.g., Bally, Reipurth, & Davis, 2007). The FU Ori objects provide direct evidence for short episodes of rapid accretion in early stages of stellar evolution, with accretion rates of \\(10^{-4}{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}\\) or more (Herbig, 1977; Hartmann & Kenyon, 1996), vastly larger than typical infall rates of \\(\\lesssim 10^{-5}{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}\\) for low-mass objects (e.g., Kenyon, Calvet, & Hartmann, 1993; Furlan et al., 2008).
The mechanism driving FU Ori outbursts is not yet clear. A variety of models have been proposed: thermal instability (TI; Lin & Papaloizou, 1985; Bell & Lin, 1994); gravitational instability (GI; Vorobyov & Basu, 2005, 2006); gravitational instability and activation of the magnetorotational instability (MRI; Armitage, Livio, & Pringle, 2001; also Gammie, 1999 and Book & Hartmann, 2005); and even models in which planets act as a dam limiting downstream accretion onto the star (Clarke & Syer, 1996; Lodato & Clarke, 2004). Our recent analysis based on Spitzer IRS data (Zhu et al., 2007) led us to conclude that a pure TI model cannot work for FU Ori.
In view of the complexity of the problem and the physical uncertainties we adopt a schematic approach. We start with the (optimistic) assumption that protostellar accretion can be steady. We then show that the GI is likely to dominate in the outer disk, while the MRI is likely to be important in the inner disk, and that mismatches between the GI and MRI result in non-steady accretion for expected protostellar infall rates.1 Our analysis agrees with the results found in the time-dependent outburst model of Armitage et al. (2001), and is consistent with our empirical analysis of the outbursting system FU Ori (Zhu et al., 2007). Although our results depend on simplified treatments of the GI and MRI, the overall picture is insensitive to parameter choices. We predict that above a critical infall rate protostellar disk accretion can be (relatively) steady; observational confirmation would help constrain mass transport rates by the GI and the MRI.
Footnote 1: GI and MRI here mean turbulent states initiated by gravitational or magnetic instabilities, respectively.
## 2 Overview
A disk with viscosity2 \(\nu\) will evolve at cylindrical radius \(r\) on a timescale
Footnote 2: We use “viscosity” as shorthand for internal, localized transport of angular momentum by turbulence. We will also make the nontrivial assumption that external torques (e.g. MHD winds) can be neglected.
\[t_{\nu}\sim r^{2}/\nu \tag{1}\]

If this is comparable to the timescales over which mass is being added to the disk, then in principle the disk can adjust to an approximate steady state with infall balanced by accretion. To fix ideas, we assume that the disk beyond 1 AU is mostly heated by irradiation from the central protostar of mass \(M_{*}\), so that the temperature \(T\propto r^{-1/2}\). For a fully viscous disk, we adopt the usual parametrization of the viscosity \(\nu=\alpha c_{s}^{2}/\Omega\), where \(c_{s}\) is the sound speed (for a molecular gas) and \(\Omega\) is the (roughly Keplerian) angular velocity. Then
\[t_{\nu}\sim{r^{2}\Omega\over\alpha c_{s}^{2}}\sim 1.3\times 10^{3}\alpha_{-1}^{-1}\,M_{1}\,T_{300}^{-1}\,R_{AU}\,{\rm yr}\,, \tag{2}\]

where \(\alpha_{-1}\equiv\alpha/0.1\) is the viscosity parameter, \(M_{1}\equiv M_{*}/M_{\odot}\), \(R_{AU}\equiv r/{\rm AU}\), and \(T_{300}\) is the temperature at 1 AU in units of 300 K. From this relation we see that a fully viscous disk might be able to keep up with mass infall over typical protostellar lifetimes of \(\sim 10^{5}\) yr if the radius at which matter is being added satisfies \(R_{AU}\lesssim 10^{2}\alpha_{-1}\). In a layered disk picture, the viscosity may need to be modified such that \(\nu=\alpha c_{s}h\), where \(h\) should be the thickness of the active layer instead of the midplane scale height. However, this difference is significant only if the temperatures differ at the active layer and the midplane, which they do not unless there is some midplane viscosity. In any case, Eq. (2) provides an upper limit to \(R_{AU}\). Since typical observational estimates of infall radii are \(\sim\)10-100 AU (Kenyon et al. 1993), protostellar infall to a constant \(\alpha\) disk is likely to pile up unless \(\alpha\) is relatively large.
A more serious problem is that protostellar disks are unlikely to have constant \(\alpha\). The best studied mechanism for angular momentum transport in disks, turbulence driven by the MRI (e.g., Balbus & Hawley 1998), requires a minimum ionization fraction to couple the magnetic fields to the mostly neutral disk. As substantial regions of protostellar disks will generally be too cold for thermal (collisional) ionization, ionization by nonthermal processes becomes important. This led Gammie (1996) to suggest a layered model in which non-thermally ionized surface layers are magnetically coupled while the disk midplane remains inert. We modify Gammie's analysis by assuming that the heating of the outer disk is not determined by local viscous dissipation but by irradiation from the central protostar, as above.
The mass accretion rate in a layered disk is
\\[\\dot{\\rm M}=6\\pi r^{1/2}{\\partial\\over\\partial r}\\left(2\\Sigma_{a}\
u r^{1/ 2}\\right)\\,, \\tag{3}\\]
where \\(\\Sigma_{a}\\) is the (one-sided) surface density of the active layers. Taking \\(\\Sigma_{a}=\\) constant, and assuming that the disk temperature \\(T\\propto r^{-1/2}\\),
\\[\\dot{\\rm M}=5\\times 10^{-7}\\Sigma_{100}T_{300}\\alpha_{-1}R_{AU}\\,, \\tag{4}\\]
where \\(\\Sigma_{100}\\equiv\\Sigma_{a}/100{\\rm g\\,cm^{-2}}\\).
Our nominal value of \(\alpha=0.1\) may be reasonable for well-ionized regions, but it may be an overestimate for the outer regions of T Tauri disks (see §5.3). Also, the fiducial value for \(\Sigma_{a}\) is based upon Gammie's (1996) assumption of cosmic ray ionization, which may be an overestimate due to exclusion of cosmic rays by scattering and advection in the magnetized protostellar wind. X-rays provide a higher ionization rate near the surface of the disk but are attenuated more rapidly than cosmic rays (Glassgold & Igea 1999), yielding similar or smaller \(\Sigma_{a}\). Both calculations assume that absorption of ions and electrons by grains is unimportant, which is only true if small dust is highly depleted in the active layer (e.g., Sano et al. 2000, also Ilgner & Nelson 2006a,b,c). In summary, it is likely that the estimate in equation (4) is an upper limit, and thus it appears unlikely that the MRI can transport mass at \(r\sim\) a few AU at protostellar infall rates \(2\times 10^{-6}-10^{-5}{\rm M}_{\odot}\,{\rm yr}^{-1}\). MRI transport resulting from non-thermal ionization might however move material adequately in response to infall at \(r\gtrsim 10-100\) AU.
On the other hand, if some nonmagnetic angular momentum transport mechanism can get matter in to \\(r\\lesssim 1\\) AU, _thermal_ ionization can occur and activate the MRI. A _minimum_ disk temperature is given by the effective temperature generated solely by local energy dissipation
\\[T>T_{eff}\\sim 1600(M_{1}{\\rm\\dot{M}}_{-5})^{1/4}\\,({\\rm R}/0.2{\\rm AU})^{-3/4} \\,{\\rm K}\\,, \\tag{5}\\]
where \\({\\rm\\dot{M}}_{-5}\\equiv{\\rm\\dot{M}}/10^{-5}{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}\\). Radiative trapping in an optically thick disk will make internal temperatures even higher. If \\(T\\gtrsim 1400\\) K most of the silicate particles will evaporate, thus eliminating a major sink for current-carrying electrons. Therefore, high accretion rates can potentially activate the MRI on distance scales of order 1 AU or less.
If magnetic angular momentum transport is weak then mass will accumulate in the disk until the disk becomes gravitationally unstable, at which point gravitational torques can transfer mass inward. GI alone may cause accretion outbursts (Vorobyov & Basu 2006, 2008), although the details of disk cooling are crucial in determining if such bursts actually occur due to pure GI. Moreover the GI may be unable to drive accretion in the inner disk. GI sets in when the Toomre parameter
\\[Q={c_{s}\\kappa\\over\\pi G\\Sigma}\\simeq{c_{s}\\Omega\\over\\pi G\\Sigma}\\sim 1\\,, \\tag{6}\\]
where we have set the epicyclic frequency \\(\\kappa\\simeq\\Omega\\), appropriate for a near-Keplerian disk. At small radius \\(\\Omega\\) and \\(c_{s}\\) will be large and therefore \\(\\Sigma\\) must also be large if we are to have GI. Since rapid accretion causes significant internal heating (compared to heating by protostellar irradiation), large surface densities imply significant radiative trapping, raising internal disk temperatures above the effective temperature estimate above. Thus, when considering rapid mass transfer by GI, either in a quasi-steady state or in bursts, it is necessary to consider thermal MRI activation in the inner disk.
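The surface density demanded by marginal instability follows directly from Eq. (6), \(\Sigma=c_{s}\Omega/(\pi GQ)\). The sketch below (ours; \(\mu=2.3\) assumed, and \(\kappa\simeq\Omega\) as above) illustrates how steep this requirement becomes at small radii:

```python
import numpy as np

G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24
M_sun, AU = 1.989e33, 1.496e13
mu = 2.3  # assumed

def Sigma_Q(r_AU, T, Q=2.0, M1=1.0):
    """Surface density at a given Toomre Q (Eq. 6, with kappa ~ Omega)."""
    cs = np.sqrt(k_B * T / (mu * m_H))
    Omega = np.sqrt(G * M1 * M_sun / (r_AU * AU)**3)
    return cs * Omega / (np.pi * G * Q)

print(Sigma_Q(1.0, 300.0))   # ~5e4 g/cm^2: GI at 1 AU needs huge Sigma
print(Sigma_Q(100.0, 30.0))  # ~16 g/cm^2: GI is far easier at 100 AU
```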
The above considerations suggest that the only way low-mass protostellar disks can accrete steadily during infall is if a smooth transition can be made from the GI operating on scales of \(\sim 1-10\) AU to the thermally-activated MRI at smaller radii. To test this idea, we have constructed a series of steady-state disk models with realistic opacities. We compute both MRI and GI steady models and then investigate whether a smooth, steady, or quasi-steady transition is likely. Our results indicate that making the optimistic assumptions of steady GI and MRI accretion results in a contradiction for infall rates thought to be typical of low-mass protostars.
## 3 Methods
We compute steady disk models employing cylindrical coordinates \\((r,z)\\), treating radiative energy transport only in the vertical (\\(z\\)) direction. Energy conservation requires that
\[\sigma T_{eff}^{4}=\frac{3GM_{*}\dot{M}}{8\pi r^{3}}\left(1-\left(\frac{r_{in}}{r}\right)^{1/2}\right)\,, \tag{7}\]
where \\(M_{*}\\) is the central star's mass and we have assumed that the disk is not so massive as to make its rotation significantly non-Keplerian. Balance between heating by dissipation of turbulence and radiative cooling requires that
\[\frac{9}{4}\nu\rho\Omega^{2}=\frac{d}{dz}\left(\frac{4\sigma}{3}\frac{dT^{4}}{d\tau}\right)\,, \tag{8}\]
where
\\[\
u=\\alpha c_{s}^{2}/\\Omega \\tag{9}\\]
and
\\[d\\tau=\\rho\\kappa dz\\,, \\tag{10}\\]
and \\(\\kappa\\) is the Rosseland mean opacity. We have updated the fitting formulae provided by Bell & Lin (1994) for the Rosseland mean opacity to include more recent molecular opacities and an improved treatment of the pressure-dependence of dust sublimation (Zhu et al., 2007). The new fit and a comparison with the Bell & Lin (1994) opacity treatment is given in the Appendix.
Convection has not been included in our treatment. Lin & Papaloizou (1980) show that for a power law opacity (\(\kappa=\kappa_{0}T^{\beta}\)), convection will occur when \(\beta\gtrsim 1\). Our opacity calculations show that \(\beta\gtrsim 1\) only occurs for \(T\gtrsim 2000\) K. As our steady-state analysis depends upon disk properties for \(T\lesssim 1400\) K, the neglect of convection will not affect our results (see also Cassen, 1993).
We ignore irradiation of the disk by the central star, as we are assuming high accretion rates and a low central protostellar luminosity. The diffusion approximation (equation 8) is adequate since the disk is optically thick at the high mass accretion rates (\\(\\dot{M}>10^{-7}M_{\\odot}/yr\\)) we are interested in.
We also require hydrostatic equilibrium perpendicular to the disk plane,
\[\frac{dP}{dz}=-\frac{GM_{*}\rho z}{r^{3}}\,, \tag{11}\]
and use the ideal gas equation of state
\\[P=\\frac{k}{\\mu}\\rho T\\,. \\tag{12}\\]Given a viscosity prescription, equations (6) - (12) can be solved iteratively for the vertical structure of the disk at each radius, resulting in self-consistent values of the surface density \\(\\Sigma\\), and the temperature at the disk midplane \\(T_{c}\\).
In detail, we use a shooting method based on a Runge-Kutta integrator rather than a relaxation method (e.g., D'Alessio et al. 1998) to solve the two-point boundary value problem. Given \(\alpha\) and \(\dot{M}\) at \(r\), we fix \(z=z_{i}\) and set \(T=T_{eff}\) and \(\tau=2/3\) (this is adequate in the absence of significant protostellar irradiation), then integrate toward the midplane. We stop when the total radiative flux \(=\sigma T_{eff}^{4}\) at \(z=z_{f}\). In general \(z_{f}\neq 0\); we alter the initial conditions and iterate until \(z_{f}=0\).
For an MRI active disk we fix \(\alpha=\alpha_{M}\), assuming the disk is active through the entire column. We then check to see if thermal ionization is sufficient or if the surface density is low enough that non-thermal ionization is plausible. The exact temperatures above which MRI activity can be sustained are somewhat uncertain; here we assume the transition occurs for a central temperature of 1400 K, when the dust grains that can absorb ions and electrons and thus inactivate the MRI (e.g., Sano et al. 2000) are evaporated. We set \(\alpha_{M}=10^{-2}\) to \(10^{-1}\) to span a reasonable range given current estimates (see §5.3).
For simplicity we neglect the possible presence of an actively accreting, non-thermally-ionized layer. This omission will not affect our results at high accretion rates, for which the layered contribution is unimportant (equation 4); our approximation then breaks down for \\(\\dot{\\rm M}\\leq 10^{-6}{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}\\) for large values of \\(\\Sigma_{a}\\) and \\(\\alpha_{M}\\).
For the steady GI disk models \(\alpha\) is not fixed. Instead we start with a large value of \(\alpha=\alpha_{Q}\) and then vary \(\alpha_{Q}\) until \(Q=2\). The adoption of the local treatment of GI energy dissipation requires some comment. Since gravity is a long-range force a local viscous description is not generally applicable (Balbus & Papaloizou 1999). However, as Gammie (2001) and Gammie & Johnson (2003) argue, a local treatment is adequate if \(\lambda_{c}\equiv 2c_{s}^{2}/(G\Sigma)=2\pi HQ\lesssim r\); here \(\lambda_{c}\) is the characteristic wavelength of the GI. More broadly, our main result involves order-of-magnitude arguments; that is, as long as inner disks must be quite massive to sustain GI transport, and as long as there is _some_ local dissipation of energy as this transport and accretion occurs, steady accretion will not occur for a significant range of infall rates. To change our conclusions dramatically, one would need to show that the GI causes rapid accretion through the inner disk without substantial local heating. We return to this issue in §5.1.
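The outer iteration that defines \(\alpha_{Q}\) can then be sketched as a log-space bisection wrapped around `solve_column()` from the block above, again as our illustration; it assumes, as the steady solutions bear out, that lowering \(\alpha\) raises \(\Sigma\) and therefore lowers \(Q\). The constants (`G`, `k_B`, `m_H`, `M_sun`, `AU`, `mu`) are those defined in that sketch.

```python
import numpy as np

def alpha_Q(r_AU, mdot_msunyr, M1=1.0, Q_target=2.0):
    """Bisect (in log) on alpha until the steady column returned by
    solve_column() has Toomre Q = Q_target; the total surface density
    2*Sigma is used since solve_column returns one side only."""
    lo, hi = 1e-6, 1.0
    for _ in range(40):
        a = np.sqrt(lo * hi)
        T_c, Sigma = solve_column(r_AU, mdot_msunyr, alpha=a, M1=M1)
        cs = np.sqrt(k_B * T_c / (mu * m_H))
        Om = np.sqrt(G * M1 * M_sun / (r_AU * AU)**3)
        Q = cs * Om / (np.pi * G * 2.0 * Sigma)  # Eq. (6)
        if Q > Q_target:
            hi = a   # disk too stable at this alpha: lower it
        else:
            lo = a
    return np.sqrt(lo * hi)
```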
## 4 Results
Figures 1a-d show steady disk results for a central star mass of \(1{\rm M}_{\odot}\) and accretion rates of \(10^{-4}\), \(10^{-5}\), \(10^{-6}\), and \(10^{-7}{\rm M}_{\odot}\,{\rm yr}^{-1}\). Proceeding counterclockwise from upper left, the panels show the central disk temperature, \(\alpha_{Q}\), the one-sided surface density \(\Sigma\equiv\int_{0}^{\infty}dz\rho\), and the viscous timescale \(r^{2}/\nu\) as a function of radius. The solid curves show results for pure-GI models, while the dotted and dashed curves show results for \(\alpha_{M}=0.1\) and \(0.01\), respectively.
The upper left panels show that the central temperatures rise more dramatically toward small radius in the GI models than in the MRI models. The GI models have higher temperatures because their higher surface densities lead to stronger radiative trapping. The GI solutions in these high-temperature regimes are unrealistic because they assume the MRI is absent, when it seems likely the MRI will in fact be active. These high temperature states do, however, suggest the possibility of thermal instability in the inner disk at high accretion rates, especially as the solutions near \(\sim 3000\) K represent unstable equilibria (e.g., Bell & Lin 1994; §5). We consider the MRI models to be inconsistent at \(T\lesssim 1400\) K (collisional ionization would be absent) and when \(\Sigma>\Sigma_{a}\sim 100\,{\rm g\,cm^{-2}}\).
Can a smooth or steady transition between MRI and GI transport occur? The transition region would be the \"plateau\" in the temperature structure which occurs near \\(T\\sim 1400\\) K (see Figure 1). This plateau is a consequence of the thermostatic effects of dust opacity, which vanishes rapidly at slightly higher temperatures. A small increase in temperature past this critical temperature causes a large decrease in the disk opacity and thus the optical depth; this in turn reduces the radiative trapping and decreases the central temperature. Thus disk models tend to hover around the dust destruction temperature over roughly an order of magnitude in radius, with the plateau occurring farther out in the disk for larger accretion rates. Since the plateau is connected with the evaporation of dust, it corresponds to a region where we might expect MRI activity.3
Footnote 3: There will be hysteresis because the dust size spectrum in a parcel of gas will depend on the parcel's thermal history. Heating the parcel destroys the dust and the accumulated effects of grain growth. Cooling it again would presumably condense dust with small mean size (and therefore a strong damping effect on MHD turbulence). The opacity would then vary strongly with time as the grains grow again. These effects are not considered here.
First consider the case \\({\\rm\\dot{M}}=10^{-4}{\\rm M_{\\odot}\\,yr^{-1}}\\) (upper left corner of Figure 1). The plateau region is very similar in extent for all models. More importantly, \\(\\alpha_{Q}\\sim 10^{-2}\\) in this region, and so the surface densities of the GI and \\(\\alpha_{M}=10^{-2}\\) models are nearly the same. This suggests that a steady disk solution is plausible with a transition from GI to MRI at a few AU for these parameters. Depending on the precise thermal activation temperature for the MRI, a smooth transition at around 10 AU might also occur for \\(\\alpha_{M}=0.1\\).
Next consider the case \({\rm\dot{M}}=10^{-5}{\rm M_{\odot}\,yr^{-1}}\) (upper right corner of Figure 1). Here \(\alpha_{Q}\sim 10^{-3}\) in the plateau region, with resulting surface densities much higher than for either of the MRI cases. This discrepancy in \(\alpha\) and \(\Sigma\) between the two solutions makes a steady disk unlikely. A small increase in surface density in a GI model near the transition region, resulting in increased heating and thus thermal activation of the MRI, would suddenly raise the effective transport rates by one or two orders of magnitude, depending upon \(\alpha_{M}\). The result would be an accretion outburst. This is qualitatively the same situation as proposed for outbursts in dwarf novae, where thermal instability is coupled to an increase in \(\alpha\) from the initial low state to the high state (similar to what Bell & Lin 1994 adopted to obtain FU Ori outbursts). Our inference of non-steady accretion also agrees with the time-dependent one-dimensional models of Armitage et al. (2001) and of Gammie (1999) and Book & Hartmann (2005), as discussed further in §5.
A similar situation holds at \\(10^{-6}{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}\\), although the evolutionary (viscous) timescales of the GI model are of order \\(10^{5}\\) yr, comparable to protostellar infall timescales. At this infall rate, the disk would only amass \\(\\sim 0.1{\\rm M}_{\\odot}=0.1{\\rm M}_{*}\\), and so the disk might not need to transfer this mass into the star to avoid GI. At \\(10^{-7}{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}\\), evolutionary timescales become much longer than protostellar lifetimes, and become comparable to T Tauri lifetimes; disk material can pile up without generating GI transport and consequent thermal activation of the MRI. In addition, an \\(\\alpha_{M}=0.1\\) value could result in a steady disk with surface densities low enough to be activated entirely by cosmic ray or X-ray ionization. This does not mean, however, that T Tauri disks do not have layered accretion, as the surface density distribution depends upon the history of mass transport.
The results of our calculations are summarized in the \\(\\dot{\\rm M}-{\\rm r}\\) plane in Figure 2. The solid curves farthest to the lower right, labeled \\(R_{Q}\\), are the radii at which the pure GI-driven disk would have a central temperature of 1400 K (at which temperature the dust starts to sublimate), and thus activate the MRI. Moving up and left, the solid curve labeled \\(R_{M}\\) denotes the radii at which a pure MRI disk of the given \\(\\alpha_{M}\\) would have a central temperature of 1400 K. When these two curves are close together, or cross, \\(\\alpha_{Q}\\) and \\(\\alpha_{M}\\) are similar, making possible a smooth transition between GI and MRI and thus steady accretion. In the (shaded) regions between these two curves the viscosity parameters diverge, making non-steady accretion likely.
The radial regions at which we predict material will pile up, trigger the MRI, and result in rapid accretion lie in the shaded regions. The dotted curve shows \\(R_{Q}\\) and \\(R_{M}\\) where the disk has a central temperature of 1800 K (at which temperature all dust has sublimated). \\(R_{Q}\\) and \\(R_{M}\\) at 1800 K are smaller than they are at 1400 K because of the plateau region discussed above. Thus if the MRI trigger temperature is higher the outbursts are expected to be shorter because the outburst drains the smaller inner disk (\\(r<R_{Q}\\)) on the viscous timescale.
Figure 2 indicates that non-steady accretion, with potential outbursts, is predicted to occur for infall rates \(\lesssim 10^{-5}{\rm M}_{\odot}\,{\rm yr}^{-1}\) for \(\alpha_{M}=0.01\) and \(\lesssim 10^{-4}{\rm M}_{\odot}\,{\rm yr}^{-1}\) for \(\alpha_{M}=0.1\). As described above, for \(\dot{\rm M}<10^{-6}{\rm M}_{\odot}\,{\rm yr}^{-1}\) outbursts are unlikely, simply because the transport timescales are too long. Outbursts are expected to be triggered at \(r\sim 1-10\) AU for protostellar infall rates \(\sim 10^{-5}-10^{-6}{\rm M}_{\odot}\,{\rm yr}^{-1}\). These predictions are relatively insensitive to the precise temperature of MRI activation; the dotted curves in Figure 2 show the results for a critical MRI temperature of 1800 K, which simply shift the regions of instability to slightly smaller radii without changing the qualitative results.
The other shaded band in Figure 2 denotes the region where thermal instability might occur. The two limits correspond to the two limiting values of the "S curve" (e.g., Faulkner, Lin, & Papaloizou 1983) at which transitions up to the high (rapid accretion) state and down to the low (slow accretion) state occur.
Figures 3-6 show results for central star masses of 0.3 and 0.05\({\rm M}_{\odot}\), respectively. The predictions are qualitatively similar to the case of the 1M\({}_{\odot}\) protostar, with the exception that thermal instability is less likely for the brown dwarf. This also implies generally unstable protostellar accretion for more massive protostars during the time that they are increasing substantially in mass.
Much of the overall behavior of our results derives from the general property that disk temperatures rise strongly toward smaller radii. For optically-thick viscous disks, the central temperatures are proportional to
\\[T_{c}\\sim T_{eff}\\tau^{1/4}\\propto\\dot{\\rm M}^{1/4}{\\rm r}^{-3/4}(\\kappa_{\\rm R }\\Sigma)^{1/4}\\,, \\tag{13}\\]
where \\(\\tau\\) is the vertical optical depth. Thus, even changes in surface density for differing values of \\(\\alpha\\) result in modest changes in radii where a specific temperature is achieved. Changing the mass accretion rate has a bigger effect, because \\(\\Sigma\\propto\\dot{\\rm M}\\).
## 5 Discussion
Our prediction of unsteady accretion during protostellar disk evolution is the result of the inefficiency of angular momentum transport of the two mechanisms considered here: the MRI, because of low ionization in the disk; and the GI, because it tends to be inefficient at small radii, where \(\Omega\) and \(c_{s}\) will be large, forcing \(\Sigma\) to be large. To provide a feeling of just how large the surface density must be for \(Q=2\) in the inner disk, at accretion rates of \(10^{-4}\) and \(10^{-5}\)M\({}_{\odot}\,\)yr\({}^{-1}\) for the 1M\({}_{\odot}\) star the disk mass interior to 1 AU would have to be \(\sim 0.6\)M\({}_{\odot}\) and \(\sim 0.5\)M\({}_{\odot}\), respectively (Fig. 7), which are implausibly large. At some point the disk must accrete most of its mass into the star, forcing the inner disk temperatures to be very large and thermally activating the MRI, resulting in outbursts of accretion. Here we consider whether the assumptions leading to this picture are reasonable, then discuss applications to outbursting systems.
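The scale of these masses can be checked with a crude integral of the \(Q=2\) surface density of Eq. (6). The snippet below is our estimate, not the calculation behind Fig. 7: it assumes a constant midplane temperature of 1400 K (roughly the plateau value) and \(\mu=2.3\); since the actual GI solutions are hotter still at small radii, the true mass requirement is larger.

```python
import numpy as np

G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24
M_sun, AU = 1.989e33, 1.496e13
mu = 2.3  # assumed

def M_disk_Q(r_out_AU=1.0, T=1400.0, Q=2.0, M1=1.0):
    """Mass inside r_out for a disk held at Toomre Q with an assumed
    constant midplane temperature T: Sigma = cs*Omega/(pi*G*Q)."""
    r = np.linspace(1e-3, r_out_AU, 2000) * AU
    cs = np.sqrt(k_B * T / (mu * m_H))
    Omega = np.sqrt(G * M1 * M_sun / r**3)
    Sigma = cs * Omega / (np.pi * G * Q)
    return np.trapz(2.0 * np.pi * r * Sigma, r) / M_sun

print(M_disk_Q())  # ~0.15 Msun even at T = 1400 K; the hotter GI
                   # midplanes of Fig. 1 push this toward ~0.5 Msun
```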
### Outbursts?
Our inference of cycles of outbursts of accretion - piling up of mass by GI transport, followed by thermal triggering of the MRI - was found in the models of Armitage et al. (2001), as well as in the calculations of Gammie (1999) and Book & Hartmann (2005). We have also found outbursting behavior in time-dependent two-dimensional disk models, to be reported in a subsequent paper (Zhu, Hartmann, & Gammie, 2009). Here we compare our results with those of Armitage et al.
Figure 8 shows the results of our stability calculations for parameters and opacities adopted by Armitage et al.: a central star mass of 1M\({}_{\odot}\), \(\alpha_{M}=0.01\), and an assumed triggering temperature for the MRI of 800 K. Armitage et al. found steady accretion at an infall rate of \(\dot{\rm M}=3\times 10^{-6}\)M\({}_{\odot}\,\)yr\({}^{-1}\) but outbursting behavior at \(1.5\times 10^{-6}\)M\({}_{\odot}\,\)yr\({}^{-1}\). This is reasonably consistent with our calculations; \(R_{M}\) and \(R_{Q}\) are close together at \(\dot{\rm M}=3\times 10^{-6}\)M\({}_{\odot}\,\)yr\({}^{-1}\) and cross near \(\dot{\rm M}=1\times 10^{-5}\)M\({}_{\odot}\,\)yr\({}^{-1}\), suggesting stable accretion somewhere in this range. Armitage et al. find that the MRI is triggered at about 2 AU, whereas our analysis (for \(\dot{\rm M}\sim 10^{-6}{\rm M}_{\odot}\,{\rm yr}^{-1}\)) would suggest a triggering radius of about 3 AU. Our ability to reproduce the results of Armitage et al. is adequate, considering that steady models do not precisely reproduce the behavior of time-dependent models, and that the form of \(\alpha_{Q}\) used by Armitage et al. is somewhat different from ours, though it still retains the feature of non-negligible GI only for small \(Q\).
Our finding of non-steady accretion is the result of assuming no significant angular momentum transport other than that due to the GI or the thermally activated MRI. Terquem (2008) has shown that steady accretion is possible for a layered disk accreting at \(\dot{M}=10^{-8}{\rm M}_{\odot}/{\rm yr}\) if there is a non-zero (non-gravitational) viscosity in disk regions below the surface active layers. Simulations have indicated that active layers can have an effect on non-magnetically active regions below, producing a Reynolds stress promoting accretion in the lower regions (Fleming & Stone, 2003; Turner & Sano, 2008; Ilgner & Nelson, 2008). We argue that this effect is unlikely to be important for the much higher accretion rates considered here, simply because the amount of mass transfer that needs to occur is much higher than what is sustainable by a non-thermally ionized surface layer. It seems implausible that a small amount of surface energy and turbulence generation can activate a very large amount of turbulence and energy dissipation in a much more massive region.
### Local vs. non-local GI transport
We have adopted a local formalism for GI whereas it has non-local properties. Furthermore, we have adopted azimuthal symmetry in calculating the dissipation of energy whereas energy will be deposited in nonaxisymmetric spiral shocks. Neither of these assumptions is strictly correct.
Boley et al. (2006) performed a careful analysis of the torques in a three-dimensional model of a self-gravitating disk, including radiative transfer. They found that the mass transfer was dominated by global modes, but could be consistent with a locally-defined \\(\\alpha(r,t)\\). This result did not hold near the inner and outer edge of their disk, although this is not surprising as these regions were characterized by \\(Q>2\\) and thus one would not expect the GI to be operating. Boley et al. were unable to address whether energy dissipation was localized. Nevertheless it is difficult to imagine that gravitational instability could avoid some heating in regions with \\(Q\\sim 1\\), and only relatively small amounts of heating are required to activate the MRI at small radii.
The details of the disk temperature structure near 1 AU must be found by three dimensional simulations of the GI with realistic cooling. The analysis presented here suggests that pure GI in the absence of MRI tends to lead to very long transport times in the inner disk, as required by our low values of \\(\\alpha_{Q}\\). This presents two potential technical problems for a numerical investigation: first, numerical viscosity must be smaller than \\(\\alpha_{Q}\\) to follow the evolution; and second, the disk must be followed over long, evolutionary timescales. It will be challenging to follow the GI near 1 AU numerically.
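To make the scale of this difficulty concrete, a rough order-of-magnitude estimate of the viscous transport time \(t_{\nu}\sim r^{2}/\nu\), with \(\nu=\alpha c_{s}H\), can be sketched as below; the midplane temperature and the \(\alpha\) values used are illustrative assumptions, not results of this paper.

```python
import numpy as np

# Order-of-magnitude viscous transport time t_nu ~ r^2 / nu, nu = alpha*c_s*H,
# for a Keplerian disk around a 1 Msun star (cgs units throughout).
G, MSUN, AU, YR = 6.674e-8, 1.989e33, 1.496e13, 3.156e7
KB, MU, MH = 1.381e-16, 2.3, 1.673e-24   # Boltzmann const., mean mol. weight, H mass

def t_visc_yr(alpha, r_au=1.0, temp=300.0, mstar=1.0):
    r = r_au * AU
    omega = np.sqrt(G * mstar * MSUN / r**3)   # Keplerian angular frequency
    cs = np.sqrt(KB * temp / (MU * MH))        # isothermal sound speed
    h = cs / omega                             # disk scale height
    return r**2 / (alpha * cs * h) / YR

# alpha ~ 1e-4 at 1 AU gives t_nu of order 1e6 yr -- comparable to or longer
# than evolutionary timescales -- while alpha ~ 1e-2 gives only ~1e4 yr.
print(t_visc_yr(1e-4), t_visc_yr(1e-2))
```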
### What is \\(\\alpha_{M}\\)?
The magnetic transport rate \\(\\alpha_{M}\\) is constrained by both observations and theory. A recent review of the observational evidence by King, Pringle, & Livio (2007) argues that \\(\\alpha_{M}\\) must be large, of the order 0.1-0.4, based in part on observations of dwarf novae and X-ray binaries where there is no question of gravitational instability. Our own analysis required \\(\\alpha\\sim 0.1\\) in FU Ori (Zhu et al., 2007).
On the theoretical side the situation is murky. Early calculations (Hawley et al., 1996) suggested that for \"shearing box\" models with zero mean azimuthal and vertical field \\(\\alpha_{M}\\simeq 0.01\\). Recent work (Fromang & Papaloizou, 2007), however, shows that \\(\\alpha_{M}\\) does not converge in the sense that \\(\\alpha_{M}\\to 0\\) as the numerical resolution increases.
But are the zero mean field models relevant to astrophysical disks? Global disk simulations (Hirose et al., 2004; McKinney & Narayan, 2007; Beckwith et al., 2008), local disk simulations in which the mean field is allowed to evolve because of the boundary conditions (Brandenburg et al., 1995), and observations of the galactic disk (Vallee, 2004) all exhibit a \"mean\" azimuthal field when an average is taken over areas of \\(\\gtrsim H^{2}\\) in the plane of the disk. This suggests that the zero mean field local models are a singular case, and that mean azimuthal field models are most relevant to real disks (strong vertical fields would appear to be easily removed from disks according to the plausible phenomenological argument originally advanced by van Ballegooijen (1989)).
So what do numerical simulations tell us about disks with mean azimuthal field? Recent work shows that in this case the outcome depends on the magnetic Prandtl number \(Pr_{M}\equiv\nu/\eta\) (Fromang et al., 2007; Lesur & Longaretti, 2007) (\(\nu\equiv\) viscosity and \(\eta\equiv\) resistivity) and that \(\alpha_{M}\) is a monotonically increasing function of \(Pr_{M}\). This intriguing result, and the fact that YSO disks have \(Pr_{M}\ll 1\) throughout (although more dimensionless parameters are required to characterize YSO disks, where the Hall effect and ambipolar diffusion can also be important), might suggest that \(\alpha_{M}\) should be small. But the numerical evidence also shows that \(\alpha_{M}\) depends on \(\nu\) in the sense that the dependence on \(Pr_{M}\) weakens as \(\nu\) decreases. In sum, the outcome is not known as \(\nu\) drops toward astrophysically plausible values. Mean azimuthal field models with effective \(Pr_{M}\sim 1\) (Guan et al., 2008) are also not fully converged; they show that \(\alpha_{M}\) _increases_, albeit slightly, as the resolution is increased. For a mean field with plasma \(\beta=400\), Guan et al. (2008) find \(\alpha_{M}=0.03\) at their highest resolution. In disks with an initial strong azimuthal magnetic field in equipartition with thermal pressure, Johansen & Levin (2008) find \(\alpha=0.1\) resulting from a combination of the Parker instability and an MRI-driven dynamo.
Very small \\(\\alpha_{M}\\) would pose a problem for T Tauri accretion. In the layered disk model, Gammie estimated the accretion rate to be
\\[\\dot{\\rm M}\\ \\sim\\ 2\\times 10^{-8}\\left(\\frac{\\alpha_{\\rm M}}{0.01}\\right)^{2 }\\,\\left(\\frac{\\Sigma_{\\rm a}}{100{\\rm g\\,cm^{-2}}}\\right)^{3}{\\rm M}_{\\odot} \\,{\\rm yr}^{-1}\\,, \\tag{14}\\]
where \(\Sigma_{a}\) is the surface density of the layer which is non-thermally ionized. Thus, with very small \(\alpha_{M}\), it would be difficult to explain typical T Tauri accretion rates.
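As a quick check of this scaling, Eq. (14) can be evaluated directly; the function below simply encodes the quoted fit.

```python
def layered_mdot(alpha_m, sigma_a):
    """Layered-disk accretion rate of Eq. (14), in Msun/yr, for magnetic
    viscosity parameter alpha_m and active-layer surface density sigma_a
    in g cm^-2."""
    return 2e-8 * (alpha_m / 0.01)**2 * (sigma_a / 100.0)**3

# With alpha_M = 1e-3 and Sigma_a = 100 g cm^-2 the rate falls to ~2e-10
# Msun/yr, two orders of magnitude below typical T Tauri rates of ~1e-8.
print(layered_mdot(1e-3, 100.0))
```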
On the other hand \\(\\alpha_{M}\\sim 0.1\\) could cause the outer disks of T Tauri stars to expand to radii of 1000 AU or more in 1 Myr (Hartmann et al., 1998). There is no particular reason why the \\(\\alpha_{M}\\sim 0.1\\) that we estimated for the thermally-ionized inner disk region in FU Ori should be the same as the effective \\(\\alpha\\) in the outer disks of T Tauri stars, which cannot be thermally ionized.
### Protostellar accretion
Our models predict that most low-mass protostars will be accreting more slowly than matter is falling onto their disks. This is consistent with observational results, as outlined in the Introduction. The results of Armitage et al. (2001) suggested that steady accretion might be possible at \(\sim 3\times 10^{-6}{\rm M}_{\odot}\,{\rm yr}^{-1}\) and above (for \(1{\rm M}_{\odot}\)). We find a different result because we adopt a significantly higher temperature for thermal MRI activation, closer to that required for dust evaporation. This means that our MRI triggering occurs at smaller radii, where the GI is less effective. Higher activation temperatures than the 800 K adopted by Armitage et al. do seem more plausible. Even if thermal ionization in the absence of dust is sufficient at around 1000 K in statistical equilibrium, ionization rates are so low that equilibrium is unlikely (e.g. Desch, 1998). We also note that Armitage et al. were unable to obtain the high accretion rates and short outburst durations characteristic of FU Ori objects, but Book & Hartmann (2005) were able to reproduce the FU Ori characteristics better with a higher MRI activation temperature.
At infall rates \\(\\gtrsim 10^{-4}{\\rm M}_{\\odot}\\,{\\rm yr}^{-1}\\), our models predict (quasi-) steady accretion (also Armitage et al., 2001); but such high rates are not expected to last long, perhaps only during an initial rapid phase of infall (Foster & Chevalier, 1993; Hartmann et al., 1994; Henriksen, Andre, & Bontemps, 1997). Testing this prediction may be difficult as relatively few objects will be caught in this phase and they will likely be heavily embedded.
At lower infall rates, GI-driven accretion timescales are longer than evolutionary times and/or layered MRI turbulence may produce sufficient mass transport. Thus, we would not expect outbursts for Class II (T Tauri) stars.
### FU Ori outbursts
In our radiative transfer modeling of the outbursting disk system FU Ori (Zhu et al., 2007), we found that to fit the _Spitzer Space Telescope_ IRS spectrum the rapidly-accreting, hot inner disk must extend out to \\(\\sim 1\\) AU, inconsistent with a pure thermal instability model. In contrast, the results of this paper suggest thermal MRI triggering can occur at a few AU, in much better agreement with observation.
Our recent analysis of the silicate emission features of FU Ori (Zhu et al., 2008) also suggests that the disk becomes dominated by irradiation rather than internal heating at distances of \(\gtrsim 1\) AU, but this is consistent with the results of this paper, as irradiation from the central disk can dominate local viscous dissipation if the disk is sufficiently flared.
We also found that the decay timescales of FU Ori suggest \\(\\alpha_{M}\\sim 10^{-1}\\); large values of \\(\\alpha_{M}\\) are more likely to lead to outbursting behavior. High inner disk accretion rates also make thermal instability more likely very close to the central star; the presence or absence of this instability may account for the difference in rise times seen in some FU Ori objects (Hartmann & Kenyon, 1996).
## 6 Conclusions
Our study predicts that the disk accretion of low-mass protostars will generally be unsteady for typical infall rates. During the protostellar phase, GI is likely to dominate at radii beyond 1 AU but not at smaller radii; in contrast, rapid accretion should drive thermal activation of the MRI in the inner disk. Because of the differing transport rates, accretion at rates comparable to typical infall values results in high inner disk temperatures sufficient to trigger the MRI. This is a general conclusion, though if the external disk accretion is driven by GI, the radius at which the MRI can be triggered thermally is much larger, because of the high surface density needed to produce a low value of \(Q\). Furthermore, GI-driving in the inner disk results in a low value of \(\alpha_{Q}\), much lower than the expected \(\alpha_{M}\), for a wide range of \(\dot{\rm M}\). The feature of mass accumulation at low external \(\alpha\) followed by a change to a high inner viscosity is similar to thermal instability models (and also Armitage et al., 2001). Thermal instabilities may also occur in the inner disk at very high accretion rates, enhancing the potential for non-steady protostellar accretion.
## Appendix A Appendix: Rosseland mean opacity
The Bell & Lin (1994) Rosseland mean opacity fit has been widely used to study high temperature accretion disks (CV systems, FU Ori objects, etc.) for more than a decade, with opacities generated almost two decades ago. Our understanding of opacity sources (especially dust and molecular line spectra) has improved both observationally and theoretically since then (Alexander & Ferguson, 1994; Ferguson et al., 2005; D'Alessio et al., 1998, 2001; Zhu et al., 2007).
We have generated Rosseland mean opacity assuming LTE for a wide range of temperature and pressure during our study of FU Orionis objects (Zhu et al., 2007, 2008). The molecular, atomic, and ionized gas opacities have been calculated using the Opacity Distribution Function (ODF) method (Castelli & Kurucz, 2004; Sbordone et al., 2004; Castelli, 2005; Zhu et al., 2007) which is a statistical approach to handling line blanketing when millions of lines are present in a small wavelength range (Kurucz et al., 1974). The dust opacity was derived by the prescription in D'Alessio et al. (2001) (Zhu et al., 2008). Our opacity has been used not only to study FU Orionis objects but also to fit the gas opacity for Herbig Ae star disks constrained by interferometric observations (Tannirkulam et al.
2008). The opacities are shown in Figure 9. Compared with Alexander & Ferguson (1994) or Zhu et al. (2007, 2008), the Bell & Lin opacity lacks water vapor and TiO opacity around 2000 K and has a lower dust sublimation temperature.
We have made a piecewise power-law fit to the Zhu et al. (2007, 2008) opacity (analogous to the Bell & Lin fit) to enhance computational efficiency (Table 1; see also Figure 9). This speedup has been useful in performing the calculations of this paper, and is essential for our forthcoming two-dimensional hydrodynamic simulations of FU Ori outbursts (Zhu, Hartmann, & Gammie 2009).
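For reference, the piecewise fit of Table 1 can be transcribed directly into code. The sketch below assumes \(T\) in K, \(P\) in cgs units, and \(\kappa\) in cm\({}^{2}\) g\({}^{-1}\), and evaluates the regimes sequentially; the electron-scattering constant is not given in this excerpt, so the value used in the final branch is a placeholder assumption.

```python
import numpy as np

# Piecewise power-law fit to the Zhu et al. (2007, 2008) Rosseland mean
# opacity, transcribed from Table 1.  T in K, P in cgs; kappa in cm^2 g^-1.
def rosseland_kappa(T, P):
    lT, lP = np.log10(T), np.log10(P)
    if   lT < 0.03   * lP + 3.12:  lk = 0.738 * lT - 1.277                  # grains
    elif lT < 0.0281 * lP + 3.19:  lk = -42.98 * lT + 1.312 * lP + 135.1   # grain evap.
    elif lT < 0.03   * lP + 3.28:  lk = 4.063 * lT - 15.013                # water vapor
    elif lT < 0.00832* lP + 3.41:  lk = -18.48 * lT + 0.676 * lP + 58.93
    elif lT < 0.015  * lP + 3.7:   lk = 2.905 * lT + 0.498 * lP - 13.995   # molecules
    elif lT < 0.04   * lP + 3.91:  lk = 10.19 * lT + 0.382 * lP - 40.936   # H scattering
    elif lT < 0.28   * lP + 3.69:  lk = -3.36 * lT + 0.928 * lP + 12.026   # bf/ff
    else:                          lk = np.log10(0.34)  # e- scattering (assumed const.)
    return 10.0 ** lk
```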
We acknowledge useful conversations with Ken Rice and Dick Durisen. This work was supported in part by NASA grant NNX08A139G, by the University of Michigan, and by a Sony Faculty Fellowship, a Richard and Margaret Romano Professorial Scholarship, and a University Scholar appointment to CG.
## References
* Alexander & Ferguson (1994) Alexander, D. R., & Ferguson, J. W. 1994, ApJ, 437, 879
* Andre et al. (2000) Andre, P., Ward-Thompson, D., & Barsony, M. 2000, Protostars and Planets IV, 59
* Armitage et al. (2001) Armitage, P. J., Livio, M., & Pringle, J. E. 2001, MNRAS, 324, 705
* Balbus & Papaloizou (1999) Balbus, S. A., & Papaloizou, J. C. B. 1999, ApJ, 521, 650
* Bally et al. (2007) Bally, J., Reipurth, B., & Davis, C. J. 2007, Protostars and Planets V, 215
* Bate et al. (2003) Bate, M. R., Bonnell, I. A., & Bromm, V. 2003, MNRAS, 339, 577
* Beckwith et al. (2008) Beckwith, K., Hawley, J. F., & Krolik, J. H. 2008, ApJ, 678, 1180
* Bell & Lin (1994) Bell, K. R., & Lin, D. N. C. 1994, ApJ, 427, 987
* Boley et al. (2006) Boley, A. C., Mejia, A. C., Durisen, R. H., Cai, K., Pickett, M. K., & D'Alessio, P. 2006, ApJ, 651, 517
* Book & Hartmann (2005) Book, L. G., & Hartmann, L. 2005, BAAS, 37, 1287
* Brandenburg et al. (1995) Brandenburg, A., Nordlund, A., Stein, R., & Torkelsson, U. 1995, ApJ, 446, 741
* Cai et al. (2008) Cai, K., Durisen, R. H., Boley, A. C., Pickett, M. K., & Mejia, A. C. 2008, ApJ, 673, 1138
* Cassen (1993) Cassen, P. 1993, Lunar and Planetary Institute Conference Abstracts, 24, 261
* Castelli (2005) Castelli, F. 2005, Memorie della Societa Astronomica Italiana Supplement, 8, 34
* Castelli & Kurucz (2004) Castelli, F., & Kurucz, R. L. 2004, ArXiv Astrophysics e-prints, arXiv:astro-ph/0405087
* Clarke & Syer (1996) Clarke, C. J., & Syer, D. 1996, MNRAS, 278, L23
* () D'Alessio, P., Calvet, N., & Hartmann, L. 2001, ApJ, 553, 321
* () D'Alessio, P., Canto, J., Calvet, N., & Lizano, S. 1998, ApJ, 500, 411
* () Desch, S. J. 1998, Ph.D. Thesis, U. Illinois.
* () Faulkner, J., Lin, D. N. C., & Papaloizou, J. 1983, MNRAS, 205, 359
* () Ferguson, J. W., Alexander, D. R., Allard, F., Barman, T., Bodnarik, J. G., Hauschildt, P. H., Heffner-Wong, A., & Tamanai, A. 2005, ApJ, 623, 585
* () Fleming, T., & Stone, J. M. 2003, ApJ, 585, 908
* () Foster, P. N., & Chevalier, R. A. 1993, ApJ, 416, 303
* () Fromang, S., & Papaloizou, J. 2007a, A&A, 476, 1113 (FP07)
* () Fromang, S., Papaloizou, J., Lesur, G., & Heinemann, T. 2007, A&A, 476
* () Furlan, E., et al. 2008, ApJS, 176, 184
* () Gammie, C. F. 1996, ApJ, 457, 355
* () Gammie, C. F. 2001, ApJ, 553, 174
* () Johnson, B. M., & Gammie, C. F. 2003, ApJ, 597, 131
* () Gammie, C. F. 1999, ASP Conference series 160, 122
* () Guan, X., Gammie, C. F., Simon, J., and Johnson, B. M. 2008, ApJ, in prep.
* () Glassgold, A. E., Najita, J., & Igea, J. 1997, ApJ, 480, 344
* () Hartmann, L., Boss, A., Calvet, N., & Whitney, B. 1994, ApJ, 430, L49
* () Hartmann, L., Cassen, P., & Kenyon, S. J. 1997, ApJ, 475, 770
* () Hartmann, L., & Kenyon, S. J. 1996, ARA&A, 34, 207
* () Hawley, J. F., Gammie, C. F., & Balbus, S. A. 1995, ApJ, 440, 742
* () Hawley, J. F., Gammie, C. F., & Balbus, S. A. 1996, ApJ, 464, 690
* () Henriksen, R., Andre, P., & Bontemps, S. 1997, A&A, 323, 549
* () Herbig, G. H. 1977, ApJ, 217, 693
* Hirose et al. (2004) Hirose, S., Krolik, J., De Villiers, J.-P., & Hawley, J. 2004, ApJ, 606, 1083
* Ilgner & Nelson (2006a) Ilgner, M., & Nelson, R. P. 2006a, A&A, 445, 205
* Ilgner & Nelson (2006b) Ilgner, M., & Nelson, R. P. 2006b, A&A, 445, 223
* Ilgner & Nelson (2006c) Ilgner, M., & Nelson, R. P. 2006c, A&A, 455, 731
* Ilgner & Nelson (2008) Ilgner, M., & Nelson, R. P. 2008, A&A, 483, 815
* Johansen & Levin (2008) Johansen, A., & Levin, Y. 2008, A&A, 490, 501
* Kenyon et al. (1993) Kenyon, S. J., Calvet, N., & Hartmann, L. 1993, ApJ, 414, 676
* Kenyon et al. (1990) Kenyon, S. J., Hartmann, L. W., Strom, K. M., & Strom, S. E. 1990, AJ, 99, 869
* Kenyon et al. (1994) Kenyon, S. J., Gomez, M., Marzke, R. O., & Hartmann, L. 1994, AJ, 108, 251
* King et al. (2007) King, A. R., Pringle, J. E., & Livio, M. 2007, MNRAS, 376, 1740
* Kurucz et al. (1974) Kurucz, R. L., Peytremann, E., & Avrett, E. H. 1974, Washington : Smithsonian Institution : for sale by the Supt. of Docs., U.S. Govt. Print. Off., 1974., 37
* Lesur & Longaretti (2007) Lesur, G., & Longaretti, P.-Y. 2007, MNRAS, 378, 1471
* Lin et al. (1985) Lin, D. N. C., Faulkner, J., & Papaloizou, J. 1985, MNRAS, 212, 105
* Lin & Papaloizou (1980) Lin, D. N. C., & Papaloizou, J. 1980, MNRAS, 191, 37
* Lin & Papaloizou (1985) Lin, D. N. C., & Papaloizou, J. 1985, Protostars and Planets II, 981
* Lodato & Clarke (2004) Lodato, G., & Clarke, C. J. 2004, MNRAS, 353, 841
* McKinney & Narayan (2007) McKinney, J. C., & Narayan, R. 2007, MNRAS, 375, 513
* Muzerolle et al. (1998) Muzerolle, J., Hartmann, L., & Calvet, N. 1998, AJ, 116, 2965
* Myers et al. (1998) Myers, P. C., Adams, F. C., Chen, H., & Schaff, E. 1998, ApJ, 492, 703
* Tannirkulam et al. (2008) Tannirkulam, A., et al. 2008, ArXiv e-prints, 808, arXiv:0808.1728
* Sano et al. (2000) Sano, T., Miyama, S. M., Umebayashi, T., & Nakano, T. 2000, ApJ, 543, 486
* Sbordone et al. (2004) Sbordone, L., Bonifacio, P., Castelli, F., & Kurucz, R. L. 2004, Memorie della Societa Astronomica Italiana Supplement, 5, 93
* Shu et al. (1987) Shu, F. H., Adams, F. C., & Lizano, S. 1987, ARA&A, 25, 23
* Stahler (1988) Stahler, S. W. 1988, ApJ, 332, 804
* Terquem (2008) Terquem, C. E. J. M. L. J. 2008, arXiv:0808.3897
* ()Turner, N. J., & Sano, T. 2008, ApJ, 679, L131
* ()Vallee, J. P. 2004, NewA Rev., 48, 763
* ()van Ballegooijen, A. A. 1989, Accretion Disks and Magnetic Fields in Astrophysics, 156, 99
* ()Vorobyov, E. I., & Basu, S. 2005, ApJ, 633, L137
* ()Vorobyov, E. I., & Basu, S. 2006, ApJ, 650, 956
* ()Vorobyov, E. I., & Basu, S. arXiv:0802.2242v1
* () White, R. J., Greene, T. P., Doppmann, G. W., Covey, K. R., & Hillenbrand, L. A. 2007, Protostars and Planets V, 117
* ()White, R. J., & Hillenbrand, L. A. 2004, ApJ, 616, 998
* ()Zhu, Z., Hartmann, L., Calvet, N., Hernandez, J., Muzerolle, J., & Tannirkulam, A.-K. 2007, ApJ, 669, 483
* () Zhu, Z., Hartmann, L., Calvet, N., Hernandez, J., Tannirkulam, A.-K., & D'Alessio, P. 2008, ArXiv e-prints, 806, arXiv:0806.3715 (ApJ, in press)

Figure 1: Steady-state disk calculations for four accretion rates - \(10^{-4}\), \(10^{-5}\), \(10^{-6}\), and \(10^{-7}\)M\({}_{\odot}\) yr\({}^{-1}\), assuming a central star of mass 1M\({}_{\odot}\). The solid curves show solutions for GI-driven accretion, as described in the text. The dashed and dotted curves yield results for steady disk models with a constant \(\alpha=10^{-2}\) and \(10^{-1}\), respectively (see text)
Figure 2: Unstable regions in the \\(r-\\dot{M}\\) plane for a 1M\\({}_{\\odot}\\) central star. The shaded region in the lower right shows where the central temperature of steady GI models exceeds an assumed MRI trigger temperature of 1400 K. The dotted curves show \\(R_{M}\\) and \\(R_{Q}\\) (the boundaries of the shaded region; see text for definition) for an MRI trigger temperature of 1800 K. The shaded region in the upper left shows the region subject to classical thermal instability.
Figure 3: Same as in Figure 1 but for a central star mass of 0.3M\\({}_{\\odot}\\).
Figure 4: Same as in Figure 1 but for a central star mass of 0.05M\\({}_{\\odot}\\).
Figure 5: Same as figure 2 for 0.3M\\({}_{\\odot}\\) central star.
Figure 6: Same as Figure 2 for the 0.05M\\({}_{\\odot}\\) central star.
Figure 8: Same as Figure 2 for the parameters of Armitage et al. (2001) (see text)
Figure 9: Rosseland mean opacities: the dotted lines show the Bell & Lin (1994) fit, the solid curves show the detailed opacity calculation of Zhu et al. (2007, 2008), and the dashed lines show the simple fit to the Zhu et al. opacities (Table 1).
\\begin{table}
\\begin{tabular}{c l c} \\hline \\hline \\(\\log_{10}T\\) & \\(\\log_{10}\\kappa\\) & comments \\\\ \\hline \\(<0.03\\log_{10}P+3.12\\) & \\(0.738\\log_{10}T-1.277\\) & grain opacity \\\\ \\(<0.0281\\log_{10}P+3.19\\) & \\(-42.98\\log_{10}T+1.312\\log_{10}P+135.1\\) & grain evaporation \\\\ \\(<0.03\\log_{10}P+3.28\\) & \\(4.063\\log_{10}T-15.013\\) & water vapor \\\\ \\(<0.00832\\log_{10}P+3.41\\) & \\(-18.48\\log_{10}T+0.676\\log_{10}P+58.93\\) & \\\\ \\(<0.015\\log_{10}P+3.7\\) & \\(2.905\\log_{10}T+0.498\\log_{10}P-13.995\\) & molecular opacities \\\\ \\(<0.04\\log_{10}P+3.91\\) & \\(10.19\\log_{10}T+0.382\\log_{10}P-40.936\\) & H scattering \\\\ \\(<0.28\\log_{10}P+3.69\\) & \\(-3.36\\log_{10}T+0.928\\log_{10}P+12.026\\) & bound-free,free-free \\\\ else a & electron scattering \\\\ \\hline \\end{tabular}
\\end{table}
Table 1: Fit to Zhu et al. (2007, 2008) opacity
**Land Cover Mapping Using Ensemble Feature Selection Methods**
Gidudu, A.*, Abe, B.* and Marwala, T.*
*School of Electrical and Information Engineering
University of the Witwatersrand, 2050, South Africa
### Introduction
Increasingly, Earth observation has become a prime source of data in the geosciences and many related disciplines, permitting research into the distant past, the present and the future (Kramer, 2002). This has resulted in new clarity and better awareness of the earth's dynamic nature. Earth observation is based on the premise that information is available from the electromagnetic energy field arising from the earth's surface (or atmosphere or both) and in particular from the spatial, spectral and temporal variations in that field (Kramer, 2002). One of the areas of research interest has always been how to relate Earth observation output, e.g. aerial photographs and satellite images, to known features (e.g. land cover). The leap from manual aerial photographic interpretation to 'automatic' classification was inspired by the availability of experimental data in various bands in the mid 1960's as a prelude to the launch of the Earth Resources Technology Satellite (ERTS - which was later renamed Landsat 1). This necessitated the adoption of digital multivariate statistical methods for the extraction of land cover information (Landgrebe, 1997). Some of the earliest image classifiers at the time included the maximum likelihood and minimum distance to means classifiers (Landgrebe, 2005; Wacker and Landgrebe, 1972). Artificial Neural Network analysis was popular at the time; however, the computational capacity available then inhibited its widespread use (Landgrebe, 1997). To date, image classification has benefitted from advancements in computational power and algorithm development. Examples of the subsequent algorithms that have taken root in image classification include k-Nearest Neighbours, Support Vector Machines, Self Organising Maps, Neural Networks, k-means clustering and object oriented classification.
In light of the improved computational power, the variety of classification algorithms, and datasets with increasing numbers of bands, one of the growing areas of interest has been how to 'combine' classifiers in a process better known as ensemble classification. Ensemble classification is premised on combining the outputs of a given number of classifiers in order to derive an accurate classification (Foody et al., 2007). In fields like computational intelligence, combining classifiers is now an established research area (Kuncheva and Whitaker, 2003) and goes by a variety of names such as multiple classifier systems, mixture of experts, committee of classifiers, and ensemble based systems (Polikar, 2006). One of the main prerequisites in building an ensemble system is ensuring that there is diversity among the base (constituent) classifiers (Yu and Cho, 2006). Diversity in ensemble systems may be ensured through the use of different training datasets, classifiers, features or training parameters (Polikar, 2006). Previous work relating ensemble classification to land cover mapping has focused on investigating how combining different classifiers impacts on classification accuracy (Foody et al., 2007), how different types of ensembles can be applied to land cover mapping (Pal, 2007), and also on enforcing diversity through bagging for land cover mapping (Steele and Patterson, 2001). In this paper, ensemble feature selection is investigated as a means of image classification whereby diversity is enforced through using different features. Another key aspect of this paper is to establish whether there is any correlation between classification accuracy and one of the common diversity measures. The paper is arranged as follows: section 2 gives an overview of ensemble classification and diversity measures, section 3 presents the methodology developed to carry out the research, and section 4 presents and discusses the results accruing thereof.
### Overview of Ensemble Classification
The main idea behind ensemble classification is that one is interested in taking advantage of the various classifiers at one's disposal to come up with a 'consensus' result. The challenge at hand involves deciding which classifiers to consider and how to combine their results. From the literature (e.g. Polikar, 2006) it is recommended that the constituent classifiers in the ensemble have different decision boundaries, because if they are identical there will be no gain in combining the classifiers (Shipp and Kuncheva, 2002). Such a set is considered to be diverse (Polikar, 2006). Diversity in ensemble systems has been more commonly explored by considering different classifiers; training a given classifier on different portions of the data; using a classifier with different parameter specifications; and using different features. Two methods which have gained prominence in ensemble classification research include bagging or bootstrap aggregating (Breiman, 1996) and Adaboost or reweighting boosting (Freund and Schapire, 1996), which principally involve training a classifier on different training data.
The focus of this paper is ensemble feature selection which entails ensuring diversity through training a given classifier on different features, which in remote sensing would be the different sensor bands. By varying the feature subsets used to generate the ensemble classifier, diversity is ensured since the base classifiers tend to err in different subspaces of the instance space (Oza and Tumer, 2008; Tsymbal et al. 2005) as illustrated in Figure 1. Some of the techniques used to select features to be used in ensemble systems include genetic algorithms (Opitz, 1999), exhaustive search methods and random selection of feature subsets (Ho, 1998).
Of equal importance to ensemble classification is how to combine the results of the base classifiers (Foody et al., 2007). A number of approaches exist to combine information from multiple classifiers (Huang and Lees, 2004; Valentini and Masulli, 2003; Giacinto and Roli, 2001), such as majority voting (Chan and Paelinckx, 2008), weighted majority voting (Polikar, 2006), or more sophisticated methods like consensus theory (Benediksson and Swain, 1992) and stacking (Dzeroski and Zenko, 2004).

Figure 1: Graphical illustration of an Ensemble classifier system (Adopted from Parikh and Polikar, 2007)
One of the emerging areas of research interest has been how to quantify diversity, as a result of which numerous diversity measures are under investigation in the literature. The main focus of investigation has centered on finding measures which can be used as a basis upon which to build diverse ensemble systems. In the literature (e.g. Polikar, 2006; Kuncheva and Whitaker, 2003), there are two categorizations of diversity measures, namely pair-wise and non-pair-wise diversity measures. Examples of pair-wise measures include the Q statistic, the correlation coefficient, the agreement measure, the disagreement measure and the double-fault measure. The diversity measure for the ensemble is derived by calculating the average of the pair-wise measures of the constituent classifiers (Tsymbal et al., 2005; Shipp and Kuncheva, 2002). Non-pair-wise diversity measures include the entropy measure, the Kohavi-Wolpert variance and the measurement of inter-rater agreement.
### Methodology
The study area for this research was Kampala, the capital of Uganda. The optical bands of a 2001 Landsat image (column 171 and row 60) formed the dataset from which ensembles were created and investigated. Five land cover classes of interest were considered: water, built up areas, thick swamps, light swamps and other vegetation. Ten ensembles were created, each with five base classifiers, the number five having been arbitrarily chosen. For each ensemble, the base classifiers were made up of the bands which yielded the best separability indices (the best five band combinations in this case). Three separability indices were used, namely the Bhattacharyya distance, divergence and transformed divergence.
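As an illustration of how such a band-subset search can be set up, the sketch below scores every subset of a given size by the average pairwise Bhattacharyya distance between class-conditional Gaussians. The `stats` structure (class means and covariances estimated from training pixels) is a hypothetical input, and the closed-form Gaussian expression is the standard one, not code from this study.

```python
import numpy as np
from itertools import combinations

def bhattacharyya(m1, c1, m2, c2):
    """Bhattacharyya distance between two Gaussians N(m1, c1) and N(m2, c2)."""
    c = 0.5 * (c1 + c2)
    dm = m1 - m2
    t1 = 0.125 * dm @ np.linalg.solve(c, dm)
    t2 = 0.5 * np.log(np.linalg.det(c) /
                      np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return t1 + t2

def best_band_subsets(stats, n_bands, size, top=5):
    """Rank band subsets by mean pairwise class separability.
    stats: {class: (mean vector, covariance matrix)} over all bands."""
    scored = []
    for subset in combinations(range(n_bands), size):
        idx = np.array(subset)
        d = np.mean([bhattacharyya(stats[a][0][idx], stats[a][1][np.ix_(idx, idx)],
                                   stats[b][0][idx], stats[b][1][np.ix_(idx, idx)])
                     for a, b in combinations(stats, 2)])
        scored.append((d, subset))
    return sorted(scored, reverse=True)[:top]
```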
For each base classifier and corresponding ensemble, a land cover map was derived using Gaussian Support Vector Machines. The land cover map for each ensemble was consequently derived through majority voting, primarily due to its simplicity (Valentini and Masulli, 2002). Each of the derived land cover maps was compared with ground truth data to ascertain its classification accuracy. In order to determine the diversity of each ensemble, kappa analysis was used to give the measure of agreement between the constituent base maps and ultimately the overall ensemble diversity. The influence of diversity on land cover classification accuracy for each ensemble was evaluated by comparing the derived land cover classification accuracies with the derived diversity measures.
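A minimal sketch of the majority-vote combination, assuming the five base land cover maps are integer label arrays of identical shape with classes coded 0..4:

```python
import numpy as np

def majority_vote(maps, n_classes=5):
    """Combine base classifier label maps by per-pixel majority vote."""
    stack = np.stack(maps)                       # (n_maps, rows, cols)
    votes = np.stack([(stack == c).sum(axis=0)   # votes received by each class
                      for c in range(n_classes)])
    return votes.argmax(axis=0)                  # winning label per pixel
```

Ties are broken here in favor of the lowest class index; with five voters and five classes such a tie-breaking rule is occasionally needed.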
### Results, Discussion and Conclusions
Table 1 gives a summary of the results depicting the ensembles constituted depending on the separability index used, the respective base classifier classification accuracy assessment and the consequent ensemble classification accuracies.
\begin{table}
[The tabular data for this table were garbled in extraction and are not recoverable.]
\end{table}
Table 1: Base classifier and ensemble classification accuracies for the ensembles built with each separability index, together with the in-ensemble diversity measures (degree of agreement and variance).

It also gives the diversity measure for each ensemble according to the degree of agreement and the variance. From Table 1 it can be observed that, for all ensembles, whereas the ensemble classification accuracy was better than that of many of the base classifiers, in no case was it better than the best classifier within the ensemble. It is, however, critical to note, and the possibility is indicated here and reported elsewhere (e.g. Bruzzone and Cossu, 2004), that whereas the ensemble classification may not be more accurate than all of the base classifiers used in its construction (Foody et al., 2007), it certainly reduces the risk of making a particularly poor selection (Polikar, 2006). Table 1 also shows that across all ensembles, the respective classification accuracy increased as the size of the base classifiers increased. This is further confirmed by Table 2, which depicts the binomial tests of significance of the between-ensemble classification accuracies. In the simple case of determining if there is a difference between two classifications (2-sided test), the null hypothesis (H\({}_{\text{o}}\)) that there is no significant difference will be rejected if \(|Z|>1.96\) (Congalton and Green, 1998). For each separability index used, increasing the number of features in the base classifiers in general significantly increased the ensemble classification accuracy. The ensemble (E) with five features per base classifier was seen to be significantly better than all the other ensembles apart from D3, where the difference was deemed insignificant. From the results, nothing conclusive can be deduced regarding which of the separability indices used is best suited as a basis upon which to build ensembles.
The relationship between ensemble classification accuracy and diversity was investigated by determining the correlation between ensemble classification accuracy and the agreement measure which in this case was the Kappa value. This was computed by averaging the in-ensemble pair-wise kappa values of the base classifiers measured against each other. In order to get a better appreciation on the in-ensemble diversity, the variance was also computed for the computed pair-wise kappa values. Intuitively, the more diverse the ensemble, the lower the agreement between the classifiers and consequently the lower the consequent kappa values. By extension, the more diverse the ensemble, the bigger the variance between the in-ensemble pair-wise kappa values. Figure 2 depicts the interplay between the ensemble accuracy and diversity measure, which in this case is the mean of the in-ensemble measure of agreement computed from the in-ensemble pair-wise kappa values.
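A sketch of this diversity computation, assuming each base classifier output is a flattened array of integer labels: Cohen's kappa is computed for every pair of base classifiers, and the mean and variance of the pairwise values are returned.

```python
import numpy as np
from itertools import combinations

def cohen_kappa(a, b, n_classes=5):
    cm = np.zeros((n_classes, n_classes))
    np.add.at(cm, (a, b), 1)                      # confusion matrix of the pair
    n = cm.sum()
    po = np.trace(cm) / n                         # observed agreement
    pe = cm.sum(axis=0) @ cm.sum(axis=1) / n**2   # chance agreement
    return (po - pe) / (1.0 - pe)

def ensemble_diversity(label_maps):
    ks = [cohen_kappa(a, b) for a, b in combinations(label_maps, 2)]
    return np.mean(ks), np.var(ks)
```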
Figure 3 gives a modification of the ensemble measure of agreement whereby, instead of considering the mean of the in-ensemble pair-wise kappa values, their variances are considered. The coefficient of correlation of the line of best fit is 0.83 in Figure 2 and -0.72 in Figure 3. From Figures 2 and 3, respectively, it can be deduced that ensemble accuracy increases as the agreement between the base classifiers increases and as the variance between the base classifier outputs decreases. In effect, this would imply that the ensemble classification accuracy would increase if there were more agreement between the base classifier outputs. The contradiction this imputes is that to get higher ensemble classification accuracy there is need for less diversity among the base classifiers.

\begin{table}
\begin{tabular}{l l l l l l l l l l l} & B1 & B2 & B3 & D1 & D2 & D3 & T1 & T2 & T3 & E \\ B1 & - & & & & & & & & & \\ B2 & 0.44 & - & & & & & & & & \\ B3 & 6.06 & 5.62 & - & & & & & & & \\ D1 & 8.20 & 8.64 & 14.22 & - & & & & & & \\ D2 & 2.09 & 1.65 & 3.97 & 10.28 & - & & & & & \\ D3 & 8.17 & 7.74 & 2.12 & 16.31 & 6.09 & - & & & & \\ T1 & 5.58 & 6.02 & 11.61 & 2.63 & 7.67 & 13.71 & - & & & \\ T2 & 1.47 & 1.03 & 4.59 & 9.67 & 0.62 & 6.71 & 7.05 & - & & \\ T3 & 3.15 & 2.71 & 2.91 & 11.34 & 1.06 & 50.3 & 8.72 & 1.68 & - & \\ E & 9.65 & 9.22 & 3.61 & 17.76 & 7.57 & 1.49 & 15.17 & 8.19 & 6.52 & - \\ \end{tabular}
\end{table}
Table 2: Binomial test of significance of between-ensemble classification accuracies

Figure 2: Correlation between Diversity (agreement) and Ensemble classification accuracy
The results bring to the fore the challenge that comes with including diversity measures in ensemble classification research. Clearly their use in determining diversity for land cover mapping is counterintuitive. The problem may stem from using classifier output as the basis upon which to measure diversity. Whereas diversity, as defined in ensemble classification research, is premised on having decision boundaries which err differently, using outputs to determine the measure of diversity presupposes that using different decision boundaries would yield different results. In the case of ensemble feature selection, base classifiers built from different features certainly result in decision boundaries which err differently (and hence exhibit diversity); however, their final classification outputs are similar, as the high coefficients of correlation depict. Hence, basing the measure of diversity on the outputs clearly gives a poor reflection of how diverse the ensemble is. In their concluding remarks, Shipp and Kuncheva (2002) posit that the quantification of diversity and its use in determining diversity in ensembles will only be possible when a more precise formulation of the notion of diversity is obtained. Until then, different heuristics will have to be employed. Whereas ensemble classification presents a unique approach to land cover mapping, the quantification of diversity and its consequent influence in determining the type of ensembles is clearly still open for research.
## Acknowledgements
The authors would like to acknowledge the support of the University of the Witwatersrand, Department of Science and Technology and reviewers.
### References
Chan, J., C., and Paelinckx, D. 2008. Evaluation of Random Forest and Adaboost tree-based ensemble classification and spectral band selection for ecotope mapping using airborne hyperspectral imagery. Remote Sensing of Environment 112, pp 2999 - 3011
Congalton, R. G., and Green, K. 1998. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices. (Boca Raton, Florida: Lewis Publishers)
Dzeroski, S. and Zenko, B. 2004. Is Combining Classifiers with Stacking Better than Selecting the Best One? Machine Learning. 54(3) pp 255 - 273 (Hingham, MA, USA: Kluwer Academic Publishers )
Foody, G.M., Boyd, D.S. and Sanchez-Hernandez, C. 2007. Mapping a specific class with an ensemble of classifiers. International Journal of Remote Sensing, 28(8), pp 1733 - 1746
Freund, Y., and Schapire, R. 1996. Experiments with a new boosting algorithm. In Proceedings of the 13\\({}^{\\text{th}}\\) International Conference on Machine Learning, Bari, Italy, pp 148 - 156. (Morgan Kaufmann)
Giacinto, G., and Roli, F. 2001. Design of effective neural network ensembles for image classification processes. Image Vision and Computing Journal, 19:9/10, pp 699 - 707
Huang, Z and Lees, B.G., (2004). Combining non-parametric models for multisource predictive forest mapping. Photogrammetric Engineering and Remote Sensing, vol.70, pp 415 - 425
Ho, T. K. 1998. The random subspace method for constructing decision forests. IEEE Transactions of Pattern Analysis and Machine Intelligence, 20 (8), pp 832 - 844
Kramer J. H., 2002. Observation of the earth and its environment: Survey of missions and sensors (4\\({}^{\\text{th}}\\) Edition). (Berlin: Springer)
Kuncheva, L., and Whitaker, C., J. 2003. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning 51 pp 181 - 207 (Hingham, MA, USA: Kluwer Academic Publishers )
Landgrebe, D. 2005. Multispectral Land Sensing: Where From, Where to? IEEE Transactions on Geoscience and Remote Sensing, 43, pp 433 - 440.
Landgrebe, D. 1997. The evolution of Landsat Data Analysis. Photogrammetric Engineering and Remote Sensing, 63, pp 859 - 867
Opitz, D. 1999. Feature Selection for Ensembles. In Proceedings of the 16\\({}^{\\text{th}}\\) National Conference on Artificial/ Intelligence (AAAI), Orlando-Florida, USA, pp 379 - 384
Oza, C. N. and Tumer, K. 2008. Classifier ensemble: Select real - world applications. Information Fusion, vol. 9, pp 4 - 20
Pal, M., 2007. Ensemble Learning with Decision Tree for Remote Sensing Classification. In Proceedings of the World Academy of Science, Engineering and Technology Vol 26 pp 735 - 737
Parikh, D., and Polikar, R. 2007. An Ensemble-Based Incremental Learning Approach to Data Fusion. IEEE Transactions On Systems, Man, And Cybernetics--Part B: Cybernetics, Vol. 37, No. 2
Polikar, R. 2006. Ensemble based systems in decision making. IEEE Circuits and Systems Magazine, pp 21 - 44
Steele, B. M. and Patterson, D. A. 2001. Land Cover Mapping using Combination and Ensemble Classifiers. Computing Science and Statistics vol 33, pp 236 - 247
Shipp, C., A. and Kuncheva, L., 2002. Relationship between combination methods and measures of diversity in combining classifiers. Information Fusion 3 pp 135 - 148
Tsymbal, A., Pechenizkiy, M., and Cunningham, P. 2005. Diversity in search strategies for ensemble feature selection. Information Fusion, 6(1), pp 83 - 98
Valentini, G., and Masulli, F. 2002. Ensembles of learning machines. In: Neural Nets WIRN Vietri, Lecture Notes in Computer Sciences, edited by; Tagliaferri, R and Marinaro, M., vol. 2486, pp 3 - 19
Wacker, A. G., and Langrebe, D. 1972. Minimum Distance Classification in Remote Sensing. In Proceedings of the 1\\({}^{\\text{st}}\\) Canadian Symposium for Remote Sensing February. 7\\({}^{\\text{th}}\\) - 9\\({}^{\\text{th}}\\)Feb 1972.
Yu, E., and Cho, S. 2006. Ensemble Based in GA Wrapper Feature Selection. Computers and Industrial Engineering 51 pp 111 - 116
\\({}^{\\ast}\\)Corresponding Author: Tel.: +27117177261; Fax: +27114031929
Email: [email protected]; [email protected] | Ensemble classification is an emerging approach to land cover mapping whereby the final classification output is a result of a 'consensus' of classifiers. Intuitively, an ensemble system should consist of base classifiers which are diverse i.e. classifiers whose decision boundaries err differently. In this paper ensemble feature selection is used to impose diversity in ensembles. The features of the constituent base classifiers for each ensemble were created through an exhaustive search algorithm using different separability indices. For each ensemble, the classification accuracy was derived as well as a diversity measure purported to give a measure of the in-ensemble diversity. The correlation between ensemble classification accuracy and diversity measure was determined to establish the interplay between the two variables. From the findings of this paper, diversity measures as currently formulated do not provide an adequate means upon which to constitute ensembles for land cover mapping.
A model of underground ridership during the severe outbreaks of the SARS epidemic in a modern city
Kuo-Ying Wang
Department of Atmospheric Sciences
National Central University
Chung-Li, Taiwan
**1. Introduction**
The 2003 SARS epidemic is a recent vivid example demonstrating the deep impact that an infectious disease can have on human society. For example, TIME magazine called Taiwan a SARS island (16), reported that SARS had sunk Taiwan (17), and described China as a SARS nation (18). One of the best ways to understand the response of people living through an emerging disease is to examine the change in people's daily activity with respect to the variations in the reported SARS cases during the epidemic. However, there is a lack of studies showing the dynamics of the public's response to the perceived risk associated with the daily reported SARS cases. Previous works studied the dynamics of the daily accumulated infected cases during the SARS outbreaks in Beijing, Canada, Hong Kong, Singapore, and Taiwan (1, 2, 3, 12), respectively. Another study examined the impact of influenza on death rates in tropical Singapore (4). Since a confined environment with short separation distances is conducive to infectious transmission between species (5, 6), the perceived risk associated with staying in any confined space during the SARS epidemic alters people's daily activity. In this work, we use the Taipei underground mass transportation system, a typical confined space, together with the reported SARS cases in Taiwan, to show that the dynamics of underground usage can be modeled by the daily variations in the reported SARS cases.
**2. Methods**
**2.1. Daily Underground Ridership Data**
The Taipei underground system transports about 1 million people per day (7). The daily ridership exhibits a strong weekly cycle: fewer people travel on Wednesday (a short weekend), there is a weekly peak on Friday (before the long weekend), the fewest people travel on Saturday and Sunday, and ridership on the remaining weekdays is about the same. Except for occasional events such as typhoons and the Chinese New Year (8), the weekly pattern is roughly the same throughout the year. This stability in the daily ridership provides a good quantifiable measure of public reaction when an unprecedented risk occurs in the society.
Since the weekly patterns of the passengers are less perturbed during the weeks from early spring to early summer than in other periods, we can determine the mean daily underground ridership \(\overline{p}\) in a week based on the average of the twelve weeks starting from the week with the first Monday in March, for the years 2001, 2002, 2004, and 2005, respectively. The statistical daily ridership \(\overline{p}\) for 2003 is calculated as the mean of the daily ridership from 2001, 2002, 2004, and 2005. Hence, without the disturbance of significant factors such as an approaching typhoon, a long holiday, festivals, or epidemics, the daily ridership normally maintains a constant pattern throughout the week.
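A minimal sketch of this baseline construction, assuming for each year an array of daily ridership aligned so that index 0 is the first Monday in March (the data layout here is hypothetical):

```python
import numpy as np

def weekly_baseline(daily, years=(2001, 2002, 2004, 2005), n_weeks=12):
    """Mean Monday-to-Sunday ridership pattern, averaged first over the
    twelve spring weeks of each year and then across the four years."""
    patterns = []
    for y in years:
        weeks = np.asarray(daily[y][:7 * n_weeks], float).reshape(n_weeks, 7)
        patterns.append(weeks.mean(axis=0))
    return np.mean(patterns, axis=0)   # statistical p_bar used for 2003
```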
**2.2. A Dynamic Model**
In order to model the daily variations of the underground ridership with respect to the daily reported SARS cases, a dynamical model was developed to simulate day-to-day variations of the underground ridership in the periods before, during, and after the SARS epidemic. Since the model is dynamic in nature, the model variables and the external forcing that governs their time evolution must be established so that the model is able to make predictions based on changes in the external forcing.
During the 2003 SARS period, we observed two significant relationships between the daily underground ridership and the daily reported SARS cases. Firstly, there was a quick response of the underground ridership to the daily reported SARS cases, which made headlines almost every day in the mass media during the SARS period. The overwhelming reports from these public media appear to have had a big impact on the willingness of the public to use the underground as a means of going to schools and offices (both schools and offices remained open during the SARS period). The public perception of the risk associated with using the underground system was vividly reflected in the significant drops in underground ridership (9). Secondly, there were gradual increases in underground ridership during the final stage of the SARS epidemic. This indicates the return of public confidence in using the underground system as a means of daily transportation to offices and schools. These two observations indicate that a dynamic model should represent both effects, i.e., the sharp drops in ridership associated with increases in the reported SARS cases, and the gradual return of underground ridership as the reported SARS cases faded from the headlines.
Based on the daily ridership from 2001 to 2005 and daily reported SARS cases in 2003, we can write a model of daily ridership with respect to the daily reported SARS cases as
\[p(t)=\overline{p}-\sum_{i=1}^{t}L(i)\exp\left(-(t-i)/\tau\right)\]
Here, on a given day \(t\), \(\overline{p}\) is the daily normal underground usage predicted by the statistics as described above; \(L(i)=kc(i)\) is the loss of underground ridership due to the perceived risk of traveling on the underground system; \(k\) is the loss rate of underground ridership per reported SARS case; \(c\) is the daily number of reported SARS cases; \(i\) is the day number since the simulation started (1 March 2003); and \(\tau\) is the \(e\)-folding time.
Notice that the real-time reported probable SARS cases (9) were used in this work; this was the information affecting people's decisions during the height of the epidemic. The SARS case counts published after the epidemic are slightly different (10, 11, 12). The instantaneous passenger loss on day \(i\) propagates exponentially to the following days with an \(e\)-folding time of \(\tau\) days. Here the \(e\)-folding time measures the damping of the perceived risk due to each newly reported SARS case, reflecting the public expectation of the risk of contracting the disease arising from each newly reported case. Hence, the number of passengers on each day \(t\) is determined by the daily normal ridership \(\overline{p}\), minus the loss of ridership due to the number of reported SARS cases on that day and the accumulated impacts from the days before.
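The model can be written down directly. The sketch below uses the parameter values inferred in Section 4 (\(k\approx 1200\) riders per reported case and \(\tau\approx 28\) days) as defaults, with `p_bar` the statistical baseline extended over the simulation period and `cases` the daily reported SARS cases from 1 March 2003 (both hypothetical inputs here):

```python
import numpy as np

def ridership(p_bar, cases, k=1200.0, tau=28.0):
    """Daily ridership p(t) = p_bar(t) - sum_{i<=t} k*c(i)*exp(-(t-i)/tau)."""
    cases = np.asarray(cases, float)
    p = np.array(p_bar, float)              # copy; leave the baseline intact
    for t in range(len(cases)):
        i = np.arange(t + 1)
        p[t] -= k * np.sum(cases[:t + 1] * np.exp(-(t - i) / tau))
    return p
```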
**3. Results**
**3.1. The Underground Ridership During Non-SARS Years**
Figure 1 shows time-series plots of the recorded and modeled daily underground ridership during the non-SARS years of 2001, 2002, 2004, and 2005, respectively. During these years, the daily underground ridership can be approximated by the statistical average daily ridership (\(\overline{p}\)) of each year. On most days the actual number of people traveling on the underground is close to the statistical prediction, indicating that underground usage during normal days is very regular. However, there are days when the model and the actual underground ridership show big discrepancies. These are the days when special events (e.g., Chinese New Year, spring holiday, and typhoons) occurred. For example, Figure 1B shows a typical example of the time-series plot of daily ridership on the Taipei underground system in 2002. We note that the first big drop in ridership after day 31 corresponds to the Chinese New Year holiday, the second big drop after day 91 is due to the students' spring holiday, and the drop after day 241 (Julian day) is due to a typhoon. Similar situations apply to the other years as well. We note that the big drop in ridership during days 241-271 in 2001 is due to the closure of two main lines of the underground system, which were flooded by the severe rainfall during the passage of Typhoon Nari (8).
Figure 2 further compares the discrepancies in ridership between the statistical predictions and the actual numbers. During the spring months (days 61-150) of each year, the statistical model predicts daily ridership that is, on most days, within 10% of the actual number of people taking the underground. Exceptions occur during rare events (e.g., long holidays, festivals, typhoons, etc.). We find that the underground flooding in 2001, caused by Typhoon Nari, resulted in a loss of about 80% of daily ridership; the Chinese New Year holiday causes a loss of 60-70% of daily ridership; and the closure of the governmental offices and schools during typhoon periods caused a loss of 50-80% of daily ridership. The increases in the number of people taking the underground toward the end of 2002, 2004, and 2005, respectively, could be due to the facts that underground ridership was steadily growing and that more people tend to take the underground during the winter months.

Figure 1: Daily underground passengers (red curve), and the passengers predicted by the statistical model (blue line) in 2001 (**A**), 2002 (**B**), 2004 (**C**), and 2005 (**D**).

Figure 2: Difference in daily ridership between the model prediction and the actual daily ridership in 2001 (**A**), 2002 (**B**), 2004 (**C**), and 2005 (**D**).
**3.2. The Underground Ridership During the 2003 SARS Year**
In sharp contrast with the normal underground usage during 2001, 2002, 2004, and 2005, the daily ridership in 2003 shows an anomalously high loss of ridership from about day 60 to days 120-150, when a maximum reduction in daily ridership of half a million occurred (Figure 3). About 50% of daily ridership was lost during the peak of the 2003 SARS period. This period coincides with the SARS outbreaks in Taiwan (10, 11). The reason for the drop in daily underground passengers is clearly related to the rising number of reported probable SARS cases (9) during this period. Figure 3A compares the time evolution of the SARS cases with the wax and wane of the daily underground ridership. The peak of the reduction in daily ridership occurred after the peak of the reported probable SARS cases. While the reported SARS cases dropped sharply during days 151-181, the return of riders to the underground proceeded at a slow pace during days 151-271. The predicted loss of daily underground ridership and its comparison with the actual ridership are shown in Figure 3B. The sharp response of daily ridership to increases in the reported SARS cases, and the slow return of ridership after the peak of the SARS cases, are well reproduced by the model (Figure 3B). The close agreement between the modeled and actual underground ridership indicates that the model can successfully reproduce the daily underground ridership during the 2003 SARS epidemic in Taiwan. Though the fear of SARS still lingered in 2004, with no reported SARS cases in that year the underground system returned to normal use, as seen from the modeled and actual daily ridership (Figure 2C).
**3.3. Sensitivity of Underground Ridership to Reported SARS Cases**
Two parameters are key to the predicted underground ridership with respect to the daily reported SARS cases: the instantaneous ridership loss rate (\(k\)) per reported SARS case, and the \(e\)-folding time (\(\tau\)) governing the propagation of the loss to subsequent days. Figure 4 shows tests of various values of these two parameters. For the same \(e\)-folding time (the period over which the perceived risk lasts), for example \(\tau\)=14 days (Figures 4A-C), the larger the daily ridership loss rate \(k\) per reported SARS case (the degree of shock to the public), the deeper the resulting reduction in underground ridership. But the time to return to the normal daily ridership is similar for different loss rates once the peaks in the SARS cases have passed. These results indicate that, if the time scales of public perception of each reported SARS case are the same, then the impact on the loss of underground passengers will be limited to the days close to the peak of the reported SARS cases.
Figure 3: Predicted (blue curve) and actual (red curve) daily ridership (**A**). Difference in the actual daily ridership (red curve) and the predicted ridership (blue curve) (**B**).Green curve shows daily reported SARS cases. Both data are for 2003.
Figure 4: Difference in the actual daily ridership (red curves) and the predicted daily ridership (blue curves) with respect to a combination of three instantaneous passenger reduction rates (\\(k\\)) and three passenger \\(e\\)-folding time (\\(\\tau\\)) scales. For plots in the columns from the left to the right showing \\(\\tau\\)=14, 21, and 28 days, respectively. For plots in the rows from the bottom to the top showing \\(k\\)=1200, 1600, and 2000 ridership loss rates per reported SARS case, respectively. Green curves show daily reported SARS cases. These data are shown for 2003.
On the other hand, if the passenger loss rates are the same, for example \(k\)=1200 (Figures 4A, 4D, and 4G), then the longer the \(e\)-folding time scale \(\tau\), the slower the return of the underground ridership to normal. A long \(e\)-folding time scale also results in a large accumulated loss because of the contributions carried over from previous days (Figure 4G). Hence, a long period of public perception of the risk associated with the reported SARS cases is likely to cause a long-lasting impact on the behavior of people and their willingness to use the underground.
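The parameter sweep behind Figure 4 is then a simple loop over \((k,\tau)\) pairs. With the `ridership` function sketched earlier, the peak loss and its timing can be tabulated as follows (`p_bar` and `cases` are again hypothetical inputs):

```python
import itertools
import numpy as np

def peak_loss(p_bar, cases, k, tau):
    loss = np.asarray(p_bar) - ridership(p_bar, cases, k, tau)
    return loss.max(), int(loss.argmax())   # deepest loss and the day it occurs

# for k, tau in itertools.product((1200, 1600, 2000), (14, 21, 28)):
#     print(k, tau, peak_loss(p_bar, cases, k, tau))
```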
**4. Summary**
In this work we show that the dynamics of Taipei underground usage during the 2003 SARS epidemic in Taiwan are closely linked to the daily wax and wane of the reported probable SARS cases. Our model shows that each reported SARS case results in an immediate loss of about 1200 underground riders, reflecting the public perception of immediate risk associated with the intense reporting of the SARS outbreaks and the resulting reluctance to use the underground system. The public perception of the risk propagates and decays exponentially over the following days with an \(e\)-folding time of about 28 days. This duration reflects the perception of the risk by normal underground passengers. Our study shows that the longer the \(e\)-folding time (perception of the risk), the slower the return of the underground ridership. A huge loss of underground ridership combined with a short \(e\)-folding time would have predicted passengers returning to the underground system sooner than actually occurred. The combination of the immediate passenger loss and its propagation to the following days results in the peak of the ridership loss occurring later than the peak of the reported SARS cases. About 50% of daily ridership was lost during the peak of the 2003 SARS period, compared with the loss of 80% of daily ridership during the closure of the underground system after Typhoon Nari, the loss of 50-70% of ridership due to the closure of the governmental offices and schools during typhoon periods, and the loss of 60% of daily ridership during Chinese New Year holidays.
Since social distancing measures have been shown to be important for containing an emerging disease (6, 13, 14, 15, 22), our results could usefully be incorporated into disease-spreading models in which underground usage is an important connection node for social behavior. Other major cities such as Hong Kong, Singapore, and Beijing also contain massive underground systems and were impacted by the 2003 epidemic (19, 20, 21). The model developed here could be used to test whether the ridership behaviors found in Taipei also apply to these major Asian cities. In the context of avian flu, the underground ridership observed during the SARS epidemic may provide a glimpse of how the general public will respond in the wake of the next epidemic.
**Acknowledgement**
The author dedicates this work to those who suffered from SARS; to the US CDC, which helped Taiwan fight the SARS war; and to the doctors, nurses, voluntary workers, and public officers who stayed on duty during the 2003 SARS epidemic. The author thanks P. Hadjinicolaou, O. Wild, A. Polli, and H.-C. Lee for their comments on the manuscript.
## References
* [1] Hsieh YH, Cheng YS, Real-time forecast of multiphase outbreak, Emerg Infect Dis. 2006; 12:114-121.
* [2] Cauchemez S, Boelle PY, Donnelly CA, Ferguson NM, Thomas G, Leung GM, Hedley AJ, Anderson RM, Valleron AJ, Real-time estimates in early detection of SARS, Emerg Infect Dis. 2006; 12:110-113.
* [3] Zhou G, and Yan G, Severe Acute Respiratory Syndrome epidemic in Asia, Emerg Infect Dis. 2003; 9:1608-1610.
* [4] Chow A, Ma S, Ling AE, Chew SK, Influenza-associated deaths in tropical Singapore, Emerg Infect Dis. 2006; 12:114-121.
* [5] Lowen AC, Mubareka S, Tumpey, TM, Garcia-Sastre A, Palese P, The guinea pig as a transmission model for human influenza viruses, PNAS 2006; 103:9988-9992.
* [6] Ferguson NM, Cummings DAT, Cauchemez S, Fraser C, Riley S, Meeyai A, Iamsirithaworn S, Burke DS, Strategies for containing an emerging influenza pandemic in southeast Asia, Nature 2005; 437:209-214.
* [7] Taipei Rapid Transport Corporation (TRTC). Available at [http://www.trtc.com.tw](http://www.trtc.com.tw). Statistics for daily passengers have been published online since March 1996.
* [8] Wang KY, Shallcross DE, Hadjinicolaou P, Giannakopoulos C, Ambient vehicular pollutants in the urban area of Taipei: Comparing normal with anomalous vehicle emissions, Water Air Soil Pollu. 2004; 156:29-55.
* [9] Department of Health, Taipei City Government. Available at [http://sars.health.gov.tw/INDEX.ASP](http://sars.health.gov.tw/INDEX.ASP). The probable SARS cases during the SARS outbreak were published and updated real-time at [http://sars.health.gov.tw/article.asp?channelid=C&serial=262&click=](http://sars.health.gov.tw/article.asp?channelid=C&serial=262&click=).
* [10] Centers for Disease Control and Prevention, Severe acute respiratory syndrome - Taiwan, 2003, MMWR Morb. Mortal Wkly Rep. 2003; 52: 461-466.
* [11] Chen YC, et al., SARS in hospital emergency room, Emerg Infect Dis. 2004; 10:782-788.
* [12] Hsieh YH, Chen CWS, Hsu SB, SARS outbreak, Taiwan, 2003, Emerg Infect Dis. 2004; 10:201-206.
* [13] Fraser C, Riley S, Anderson RM, Ferguson NM, Factors that make an infectious disease outbreak controllable, PNAS 2004; 101:6146-6151.
* [14] Germann TC, Kadau K, Longini IM Jr., Macken CA, Mitigation strategies for pandemic influenza in the United States, PNAS 2006; 103:5935-5940.
* [15] Longini IM Jr., Nizam A, Xu S, Ungchusak K, Hanshaoworakul W, Cummings DA, Halloran ME, Containing pandemic influenza at the source, Science 2005; 309:1083-1087.
* [16] Time, 26 May 2003, page 3.
* [17] Time, 2 June 2003, page 3.
* [18] Time, 5 May 2003. Cover story.
* [19] Donnelly CA, et al., Epidemiological determinants of spread of causal agent of severe acute respiratory syndrome in Hong Kong, Lancet 2003; 361:1761-1766.
* [20] Zhou G, Yan G, Severe acute respiratory syndrome epidemic in Asia, Emerg Infect Dis. 2003; 9:1608-1610.
* [21] Dye C, Gay N, Modeling the SARS epidemic, Science 2003; 300:1884-1885.
* [22] Eubank S, Guclu H, Anil Kumar VS, Marathe MV, Srinivasan A, Toroczkai Z, Wang N, Modelling disease outbreaks in realistic urban social networks, Nature 2004; 429:180-184.
**Keywords:** SARS, modeling, risk, ridership
B. E. Wood1, R. A. Howard1, S. P. Plunkett1, D. G. Socker1
Footnote 1: affiliation: Naval Research Laboratory, Space Sciences Division, Washington, DC 20375; [email protected]
## 1 Introduction
The _Solar Terrestrial Relations Observatory_ (STEREO) mission is designed to improve our understanding of coronal mass ejections (CMEs) and their interplanetary counterparts ICMEs ("interplanetary CMEs") in many different ways. Consisting of two spacecraft observing the Sun from very different locations, STEREO simultaneously observes the Sun and interplanetary medium (IPM) from two vantage points, allowing a much better assessment of a CME's true three-dimensional structure from the two-dimensional images. STEREO has the capability of observing CMEs far into the IPM thanks to its two Heliospheric Imagers, HI1 and HI2, which can track CMEs all the way to 1 AU. The only other instrument with comparable capabilities is the Solar Mass Ejection Imager (SMEI) on the _Coriolis_ spacecraft, which is still in operation (Eyles et al., 2003; Jackson et al., 2004; Webb et al., 2006; Howard et al., 2008). Finally, the two spacecraft possess particle and field instruments that can study ICME properties in situ. The ability to continuously follow a CME from the Sun into the IPM actually blurs the distinction between the CME and ICME terms. Since most of this paper will be focused on white-light images of the CME, we will generally use only the CME acronym.
Harrison et al. (2008) reported on the first CMEs observed by STEREO that could be continuously tracked into the IPM by HI1 and HI2. They tracked the events over 40\\({}^{\\circ}\\) from the Sun. We extend this work further by presenting observations of a CME that can be tracked all the way to 1 AU, where the event is then detected by particle and field detectors on one of the two spacecraft. It is also detected by the _Advanced Composition Explorer_ (ACE), and by the Charge, Element, and Isotope Analysis System (CELIAS) on board the _Solar and Heliospheric Observatory_ (SOHO). Both ACE and SOHO are at Earth's L1 Lagrangian point, so the CME's detection there means that it qualifies as an Earth-directed event. This CME is therefore useful for illustrating how STEREO's unique perspective can provide a much better assessment of the kinematics and structure of potentially geoeffective CMEs. This will become more important as we move away from the 2008 solar minimum and strong Earth-directed CMEs become more frequent.
## 2 The STEREO instruments
The two STEREO spacecraft were launched on 2006 October 26, one into an orbit slightly inside that of Earth (STEREO-A), which means that it moves ahead of the Earth in its orbit, and one into an orbit slightly outside that of Earth (STEREO-B), which means that it trails behind the Earth. Since launch the separation of the A and B spacecraft has been gradually growing. Figure 1 shows their locations on 2008 February 4, which is the initiation date of the CME of interest here. At this point STEREO A and B had achieved a separation angle of 45.3\\({}^{\\circ}\\) relative to the Sun.
The two STEREO spacecraft contain identical sets of instruments. The imaging instruments are contained in a package called the Sun-Earth Connection Coronal and Heliospheric Investigation (SECCHI), which will be described in more detail below. There are two in situ instruments on board, the Plasma and Suprathermal Ion Composition (PLASTIC) instrument (Galvin et al., 2008), and the In-situ Measurements of Particles and CME Transients (IMPACT) package (Acuna et al., 2008; Luhmann et al., 2008). The former studies the properties of ions in the bulk solar wind, and the latter studies electrons, energetic particles, and magnetic fields in the IPM. Finally, there is a radio wave detector aboard each spacecraft called STEREO/WAVES, or SWAVES (Bougeret et al., 2008), but SWAVES did not see any activity relevant to our particular CME.
Most of the data presented in this paper will be from the five telescopes that constitute SECCHI, which are fully described by Howard et al. (2008). Moving from the Sun outwards, these consist firstly of an Extreme Ultraviolet Imager (EUVI), which observes the Sun in several extreme ultraviolet bandpasses. There are then two coronagraphs, COR1 and COR2, which observe the white light corona at elongation angles from the Sun of \(0.37^{\circ}-1.07^{\circ}\) and \(0.7^{\circ}-4.2^{\circ}\), respectively. These angles correspond to distances in the plane of the sky of \(1.4-4.0\) R\({}_{\odot}\) for COR1 and \(2.5-15.6\) R\({}_{\odot}\) for COR2. Finally, there are the two Heliospheric Imagers, HI1 and HI2, mentioned in §1, which observe the white light IPM in between the Sun and Earth at elongation angles from the Sun of \(3.9^{\circ}-24.1^{\circ}\) and \(19^{\circ}-89^{\circ}\), respectively. At these large angles, plane-of-sky distances become very misleading, so we do not quote any here. Figure 1 shows explicitly the overlapping COR2, HI1, and HI2 fields of view for STEREO-A and STEREO-B on 2008 February 4.
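The quoted plane-of-sky distances follow from simple trigonometry: a feature at elongation \(\epsilon\), projected onto the plane of the sky through the Sun, lies a distance \(d\sin\epsilon\) from Sun center for an observer at distance \(d\) from the Sun. A minimal sketch, assuming \(d\approx 1\) AU \(\approx 215\) R\({}_{\odot}\) (the actual spacecraft distances differ slightly, which accounts for the small differences from the quoted ranges):

```python
import numpy as np

AU_RSUN = 215.0  # 1 AU in solar radii (approximate)

def plane_of_sky_distance(elong_deg, d_au=1.0):
    """Plane-of-sky distance (in R_sun) of a feature seen at elongation
    angle elong_deg by an observer d_au from the Sun."""
    return d_au * AU_RSUN * np.sin(np.radians(elong_deg))

for name, (lo, hi) in {"COR1": (0.37, 1.07), "COR2": (0.7, 4.2)}.items():
    print(f"{name}: {plane_of_sky_distance(lo):.1f}"
          f" - {plane_of_sky_distance(hi):.1f} R_sun")
```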
## 3 The 2008 February 4 CME
### 3.1 Imaging the Event
Figures 2-6 provide examples of images of the February 4 event from all five of the SECCHI telescopes. Figure 2 shows sequences of images in two of the four bandpasses monitored by EUVI: the He II \\(\\lambda 304\\) bandpass and the Fe XII \\(\\lambda 195\\) bandpass. Note that the actual time cadence is 10 minutes in both of these bandpasses, rather than the 30 minute time separation of the chosen images in Figure 2.
At about 8:16 UT a prominence is observed to be gradually expanding off the southeast limb of the Sun in the EUVI-A \\(\\lambda 304\\) images. This expansion then accelerates into a full prominence eruption, as a small flare begins at about 8:36 in the He II and Fe XII images some distance northwest of the prominence. The flaring site is indicated by an arrow in Figure 2. This is not a strong flare in EUVI, and there is no GOES X-ray event recorded at all at this time, so the flare is apparently too weak to produce sufficient high temperature plasma to yield a GOES detection.
Figures 3 and 4 show white-light COR1 and COR2 images of a CME that emerges
shortly after the EUVI flare begins. As was the case for EUVI, the actual time cadence is three times faster than implied by the selected images: 10 minutes for COR1 and 30 minutes for COR2. The synoptic COR1 and COR2 programs actually involve the acquisition of 3 separate images at three different polarization angles, which we combine into a single total-brightness image for our purposes. [Technically, the COR2 cadence is actually 15 minutes, alternating between the acquisition of full polarization triplets, which we use here, and total brightness images computed from polarization doublets combined onboard, which we do not use (Howard et al., 2008).] The coronagraph images are all displayed in running-difference mode in Figures 3 and 4, where the previous image is subtracted from each image. This is a simple way to subtract static coronal structures and emphasize the dynamic CME material.
From the perspective of STEREO-A the CME is first seen by COR1-A off the southeast limb, as expected based on the location of the flare and prominence eruption. However, the strong southern component to the CME motion seen in the COR1-A images in Figure 3 disappears by the time the CME leaves the COR2-A field of view. In the final COR2-A image in Figure 4 the CME is roughly symmetric about the ecliptic plane, in contrast to its COR1-A appearance. Cremades & Bothmer (2004) have noted that near solar minimum, CMEs appear to be deflected towards the ecliptic plane, presumably due to the presence of high speed wind and open magnetic field lines emanating from polar coronal holes. The February 4 CME may be another example of this.
The CME's appearance is radically different from the point of view of STEREO-B, illustrating the value of the multiple-viewpoint STEREO mission concept. The EUVI-B flare is only 10\\({}^{\\circ}\\) from disk center, so the expectation is that any CME observed by STEREO-B will be a halo event directed at the spacecraft. However, the COR1-B and COR2-B images show only a rather faint front expanding slowly in a southwesterly direction, though the COR2-B movies do provide hints of expansion at other position angles, meaning that this might qualify as a partial halo CME. It is possible that if the CME was much brighter it might have been a full halo event. It is impressive how much fainter the event is from STEREO-B than from STEREO-A, possibly due to the CME subtending a larger solid angle from STEREO-B's perspective, with some of the CME blocked by the occulter. The visibility of a CME as a function of viewing angle can also be affected by various Thomson scattering effects (Andrews, 2002; Vourlidas & Howard, 2006).
Both COR1-A and COR1-B (see Fig. 3) show a CME directed more to the south than would be expected based on the flare site (see Fig. 2), and COR1-B shows more of a westward direction than would be expected considering how close to disk center the flare is from the point of view of STEREO-B. We speculate that perhaps the coronal hole just east of the flare site (see EUVI-B Fe XII \\(\\lambda\\)195 images in Fig. 2) plays a role in deflecting the CME into the more southwesterly trajectory seen by STEREO-B. Thus, this CME seems to show evidence for two separate deflections from coronal holes: the initial deflection to the southwest from the low latitude hole adjacent to the flare site, and the more gradual deflection back towards the ecliptic plane seen in COR2-A (see Fig. 4).
Figures 5 and 6 show HI1 and HI2 images of the CME as it propagates through the IPM to 1 AU. As was the case for COR1 and COR2, the HI1 and HI2 data are displayed in running difference mode. The time cadence of HI1 and HI2 data acquisition are 40 minutes and 2 hours, respectively.
The large fields of view of the HI telescopes and the increasing faintness of CME fronts as they move further from the Sun make subtraction of the stellar background a very important issue. For HI1 we first subtract an average image computed from about 2 days worth of data encompassing the Feb. 4 event. This removes the static F-corona emission, which eliminates the large brightness gradient in the raw HI1 images. We then use a simple median filtering technique to subtract the stars before the running difference subtraction of the previous image is made. Artifacts from some of the brightest stars are still discernible in Figure 5, including vertical streaks due to exposure during the readout of the detector. Median filtering does not work well for the diffuse background produced by the Milky Way, so the Milky Way's presence on the right side of the HI1-B images is still readily apparent. A somewhat more complicated procedure is used for HI2, which involves the shifting of the previous image before it is subtracted to make the running-difference sequence, in an effort to better eliminate the stellar background. This method should be effective for both the diffuse Milky Way and stellar point source background, but median filtering is also used to further improve the stellar subtraction. The HI2 image processing procedure is described in more detail by Sheeley et al. (2008b).
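As a rough sketch of the HI1 background-removal sequence just described (our own minimal implementation with an arbitrary filter size, not the SECCHI pipeline): subtract a multi-day average image to remove the static F-corona, median-filter to suppress stellar point sources, then difference consecutive frames.

```python
import numpy as np
from scipy.ndimage import median_filter

def clean_hi1_sequence(frames, star_kernel=5):
    """Sketch of the HI1 processing described in the text.

    frames : (n_t, ny, nx) array of co-aligned HI1 images.
    Returns running-difference images with the static F-corona removed
    and most stellar point sources suppressed.
    """
    frames = np.asarray(frames, dtype=float)
    # 1. remove the static F-corona: subtract the multi-day average
    resid = frames - frames.mean(axis=0)
    # 2. a spatial median filter removes point sources (stars) while
    #    preserving diffuse structure such as the CME front
    smooth = np.stack([median_filter(f, size=star_kernel) for f in resid])
    # 3. running difference: emphasize the moving CME material
    return smooth[1:] - smooth[:-1]
```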
The bright CME front is readily apparent in the HI1-A images, but in HI1-B the CME can only be clearly discerned in the lower left corner of the last two images in Figure 5. This is consistent with expectations from the appearance of the CME in the COR2 data. The situation becomes more complicated in the HI2 field of view (FOV). Figure 6 shows two HI2-A images, and also shows the positions of Earth, SOHO, and STEREO-B in the FOV. Earth and SOHO are behind a trapezoidal occulter, which is used to prevent the image from being contaminated by a very overexposed image of Earth. The first image shows that the CME front is initially a bright, semicircular front, consistent with its appearance in HI1-A. But it quickly fades, becoming much harder to follow. There are other fronts in the FOV (see Fig. 6) associated with a corotating interaction region (CIR), which confuses things further. The CME front appears to overtake the CIR material and the second image in Figure 6 shows the CME as it approaches the position of STEREO-B. At this point the front is much more well defined in the southern hemisphere than in the north.
Given the potential confusion between our CME front and the CIR material, it is worthwhile to briefly review what CIRs are and how they are perceived by STEREO. Sheeley et al. (2008a,b) have already described CIR fronts seen by HI2 in some detail, which have been the most prominent structures regularly seen by HI2 in STEREO's first year of operation. The CIRs are basically standing waves of compressed solar wind, where high speed wind coming from low latitude coronal holes is running into low speed wind. The CIRs stretch outwards from the Sun in a spiral shape due to the solar rotation, and have a substantial density enhancement that HI2-A sees as a gradually outward propagating front (or series of fronts) as the CIR rotates into view. The HI2-B imager does _not_ see the approach of the CIR in the distance like HI2-A does because HI2-B is looking at the west side of the Sun rotating away from the spacecraft instead of the east side rotating towards it, where HI2-A is looking (see Fig. 1). Instead, when the CIR reaches STEREO-B, HI2-B sees a very broad front pass very rapidly through the foreground of the FOV as the CIR passes over and past the spacecraft.
Since our CME front appears to overtake a CIR in the HI2-A images, it is tempting to look for evidence of interaction between the two. However, we believe that the leading edge of the CME is actually always ahead of the CIR. The appearance of \"overtaking\" is due to a projection effect, where the faster moving CME is in the foreground while the apparently slower CIR material seen in Figure 6 is in the background. Support for this interpretation is provided by Figure 2. The EUVI-B Fe XII \\(\\lambda\\)195 images show the coronal hole that is the probable source of the high speed wind responsible for the CIR. The coronal hole is just east of the flare region that represents the CME initiation site, so with respect to the Sun's westward rotation the CME leads the high speed wind that yields the CIR. This is the same coronal hole that we suppose to have deflected the CME into a more southwesterly direction, but the leading edge of the CME is always ahead of the CIR. Nevertheless, it is quite possible that the sides and trailing parts of the CME may be interacting with the CIR structure. Trying to find clear evidence for this in the HI2 data ideally requires guidance from models of CME/CIR interaction. Such an investigation is clearly a worthwhile endeavor, but it is outside the scope of our purely empirical analysis here.
Returning to the CME, just as HI2-B does not see CIRs until they engulf STEREO-B, HI2-B does not perceive the February 4 CME until it is practically on top of the spacecraft (as seen from STEREO-A). As the CME approaches and passes over STEREO-B, HI2-B sees a very broad, faint front pass rapidly through the foreground of the FOV, similar in appearance to the CIR fronts described above. Though the rapid front is apparent in HI2-B movies, its faintness combined with its very broad and diffuse nature makes it practically impossible to discern in still images, so we have not attempted to show it in any HI2-B images here.
### 3.2 In Situ Observations
Figures 2-6 demonstrate STEREO's ability to track a CME continuously from its origin all the way out to 1 AU using the SECCHI telescopes, even for a modest event like the February 4 CME. Figure 7 demonstrates STEREO's ability to study the properties of the CME when it gets to 1 AU. The upper two panels of Figure 7 show the solar wind proton density and velocity sampled by the PLASTIC experiments on both STEREO spacecraft from February 5-17, and the bottom panel shows the magnetic field strength observed by IMPACT. For comparison, we have also added measurements made at Earth's L1 Lagrangian point by ACE and SOHO/CELIAS. Including both ACE and SOHO/CELIAS data provides us with two independent measurements at L1. (The CELIAS instrument does not provide magnetic field measurements, though.) The CME is detected by STEREO-B, and more weakly by ACE and SOHO, but it is not detected at all by STEREO-A.
Given that the CME's initiation site is near Sun-center as seen by STEREO-B (see Fig. 2), it is not surprising that the CME eventually hits that spacecraft. STEREO-B sees a density and magnetic field increase on February 7 at the same time that HI2-A sees the CME front reach STEREO-B, so there is good reason to believe that this is the expected ICME corresponding to the February 4 CME. However, the particle and field response are not characteristic of a typical ICME or magnetic cloud (see, e.g., Jian et al., 2006), and it is difficult to tell exactly where the ICME begins and ends. The wind velocity increases from an ambient slow solar wind speed of about 360 km s\\({}^{-1}\\) to the CME's propagation speed of 450 km s\\({}^{-1}\\), but the velocity increase trails the density and magnetic field increase by at least 12 hours. Perhaps much of the field and density excess associated with the CME may be slow solar wind that has been overtaken and piled up in front of the original CME front, but we cannot rule out the possibility that the CME may be mixed up with some other magnetic structure, confusing the ICME signature in Figure 7. Another possibility is that the central axis of the CME passed to the south of the spacecraft, leading to a more muddled magnetic field signature.
The ICME is also detected by ACE and CELIAS. The velocity profiles seen at L1 are practically identical to that seen by PLASTIC-B. The density profiles seen by ACE and CELIAS are somewhat discrepant. The ACE data show a weak, narrow density spike at the time of maximum density at STEREO-B, while the CELIAS data only show a broad, weak density enhancement. In any case, the density enhancement at L1 is weaker than at STEREO-B. The magnetic field enhancement seen by ACE is also weaker than at STEREO-B, and shorter in duration. Thus, STEREO-B receives a more direct hit from the CME than ACE and SOHO, which are 23.6\\({}^{\\circ}\\) away from STEREO-B (see Fig. 1). This is once again consistent with the CME's direction inferred from the SECCHI images. At an angle from STEREO-B of 45.3\\({}^{\\circ}\\), the PLASTIC and IMPACT instruments on STEREO-A do not see the CME at all, providing a hard upper limit for the angular extent of the CME in the ecliptic plane.
It is worthwhile to compare and contrast how HI2 and the in situ instruments perceive the CME. The CME front seen by HI2-A (see Fig. 6) appears to reach the location of STEREO-B at about 18:00 UT on February 7. This corresponds roughly to the time when the densest part of the CME is passing by STEREO-B (see Fig. 7). However, both the density and magnetic field data indicate that less dense parts of the CME front reach STEREO-B much earlier. This demonstrates that much of the CME structure is unseen by HI2-A. The HI2-A front displayed in Figure 6 is only the densest part of the CME. For ACE and SOHO there is an even greater disconnect between the HI2-A CME front and the in situ observations of it. Movies of the fading CME front allow it to just barely be tracked out to the position of ACE and SOHO, which it reaches at about 6:00 UT on February 8, well after the weak density enhancement seen by these instruments is over. This means that HI2-A does not see the part of the CME that hits the spacecraft at L1, only seeing the denser parts of the structure that are farther away than L1, in the general direction of STEREO-B.
This leads to the schematic picture of the CME geometry shown in Figure 1. Based on the velocity curves in Figure 7, there is essentially no velocity difference along the CME front, and no time delay between the CME arrival at STEREO-B and L1, so the CME front is presumably roughly spherical as it approaches 1 AU, as shown in Figure 1. However, we have argued above that HI2-A only sees the densest part of the CME, which hits STEREO-B, and HI2-A does not see the foreground part that hits L1 and Earth at all. The dotted purple line in Figure 1 crudely estimates the full extent of the CME, which we know hits STEREO-B, ACE, and SOHO but not STEREO-A, while the shorter solid line arc is an estimate of the part of the CME that HI2-A actually sees. It is difficult to know how far the CME extends to the right of STEREO-B in Figure 1. There is little if any emission apparent to the east of the Sun in COR2-B movies (see Fig. 4), which is why we have not extended the CME arc very far to the right of STEREO-B in Figure 1. Thus, the final picture is that of a CME that has a total angular extent of no more than 60\\({}^{\\circ}\\), with the visible part of the CME constituting less than half of that total.
Besides showing the CME signatures observed by PLASTIC, IMPACT, ACE, and CELIAS, Figure 7 also shows these instruments' observations of the CIR that follows, the presence of which is also apparent in the HI2-A images in Figure 6, as noted in SS3.1. All these spacecraft see a strong density and magnetic field enhancement, which is accompanied by a big jump in wind velocity as the spacecraft passes from the slow solar wind in front of the CIR to the high speed wind that trails it. Such signatures are typical of CIRs seen by in situ instruments (e.g., Sheeley et al. 2008a,b). There is a significant time delay between when the CIR hits STEREO-B, then ACE and SOHO, and finally STEREO-A. This time delay illustrates the rotating nature of the CIR structure. It is curious that the time delay is significantly longer between ACE/SOHO and STEREO-A than it is between STEREO-B and ACE/SOHO despite the angular separation being about the same (see Fig. 1). It is also interesting that the density, velocity, and magnetic field profiles seen by the three spacecraft are rather different. Though outside the scope of this paper, a more in-depth analysis of this and other STEREO-observed CIRs is certainly worthwhile, especially given the substantial number of these structures observed by STEREO in the past year.
### 3.3 Kinematic Analysis
Possibly the simplest scientifically useful measurements one can make from a sequence of CME images are measurements of the velocity of the CME as a function of time. However, even these seemingly simple measurements are complicated by uncertainties in how to translate apparent 2-dimensional motion into actual 3-dimensional velocity. We now do a kinematic analysis of the February 4 CME, and in the process we show how comprehensive STEREO observations can improve confidence in such an analysis.
In order to measure the velocity and acceleration of a CME's leading edge, positional measurements must first be made from the SECCHI images. What we actually measure is not distance but an elongation angle, \\(\\epsilon\\), from Sun center. Many previous authors have discussed methods of inferring distance from Sun-center, \\(r\\), from \\(\\epsilon\\) (Kahler & Webb 2007; Howard et al. 2007, 2008; Sheeley et al. 2008b). One approach, sometimes referred to as the \"Point-P Method,\" assumes the CME leading edge is an intrinsically very broad, uniform, spherical front centered on the Sun, in which case (Howard et al. 2007)
\\[r=d\\sin\\epsilon. \\tag{1}\\]
Here \\(d\\) is the distance of the observer to the Sun, which is close to 1 AU for the STEREO spacecraft, but not exactly (see Fig. 1).
Another approach, which Kahler & Webb (2007) call the \"Fixed-\\(\\phi\\) Method,\" assumes that the CME is a relatively narrow, compact structure traveling on a fixed, radial trajectory at an angle, \\(\\phi\\), relative to the observer's line of sight to the Sun, in which case
\\[r=\\frac{d\\sin\\epsilon}{\\sin(\\epsilon+\\phi)}. \\tag{2}\\]
[Note that this is a more compact version of equation (A2) in Kahler & Webb (2007).] In the top panel of Figure 8, we show plots of \\(r\\) versus time, as seen from STEREO-A, using both equations (1) and (2). In the latter case we have assumed the CME trajectory is radial from the flare site, meaning \\(\\phi=46^{\\circ}\\). There is a time gap in the HI2 measurements, corresponding to when the CME front is too confused with CIR material to make a reliable measurement.
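Both conversions are straightforward to compute; a sketch (distances in AU, with \(d\) the observer-Sun distance, here defaulted to 1 AU although the STEREO spacecraft are not at exactly that distance):

```python
import numpy as np

def point_p(eps_deg, d_au=1.0):
    """Eq. (1): r = d sin(eps); broad spherical front centered on the Sun."""
    return d_au * np.sin(np.radians(eps_deg))

def fixed_phi(eps_deg, phi_deg, d_au=1.0):
    """Eq. (2): r = d sin(eps)/sin(eps + phi); compact feature on a fixed
    radial trajectory at angle phi from the observer's line of sight."""
    eps, phi = np.radians(eps_deg), np.radians(phi_deg)
    return d_au * np.sin(eps) / np.sin(eps + phi)

eps = 40.0  # an elongation within the HI2 range
print(point_p(eps), fixed_phi(eps, phi_deg=46.0))
```

Since \(\sin(\epsilon+\phi)\leq 1\), the Point-P distance can never exceed the Fixed-\(\phi\) distance for the same measurement, which is why the two sets of data points in Figure 8 are ordered the way they are.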
The bottom panel of Figure 8 shows velocities computed from the distance measurements in the top panel. Velocities computed strictly from adjacent distance data points often have huge error bars and vary wildly in time in a very misleading fashion. For this reason, as we compute velocities from the distances point-by-point we actually skip distance points until the uncertainty in the computed velocity ends up under some assumed threshold value (70 km s\({}^{-1}\) in this case), similar to what we have done in past analyses of SOHO data (Wood et al., 1999). The velocity uncertainties are computed assuming the following estimates for the uncertainties in the distance measurements: 1% fractional errors for the COR1 and COR2 distances, and 2% and 3% uncertainties for HI1 and HI2, respectively.
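The point-skipping procedure can be written compactly. The sketch below is our reading of it (names and structure ours): step forward through the distance measurements, widening the baseline of the finite difference until the propagated uncertainty drops below the threshold.

```python
import numpy as np

def velocities(t, r, frac_err, v_err_max=70.0):
    """Point-by-point velocities from distances r(t) (t in s, r in km),
    skipping measurements until the velocity uncertainty, propagated
    from the fractional distance errors, falls below v_err_max (km/s)."""
    out, i = [], 0
    while i < len(t) - 1:
        for j in range(i + 1, len(t)):
            v = (r[j] - r[i]) / (t[j] - t[i])
            # propagate the two distance errors in quadrature
            dv = np.hypot(frac_err[i] * r[i], frac_err[j] * r[j]) / (t[j] - t[i])
            if dv < v_err_max:
                out.append((0.5 * (t[i] + t[j]), v, dv))
                break
        i = j  # resume from the last point used
    return out
```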
There are significant differences in the distance and velocity measurements that result from the use of equations (1) and (2). In order to explore the reasons behind the distance differences, first note that a point in an image represents a direction vector in 3D space. If this vector has a closest approach to the Sun at some point P, the geometry assumed by the Point-P method always assumes that this point P represents the real 3D location of the apparent leading edge seen by the observer. Thus, distances estimated using equation (1) by definition represent a lower bound on the actual distance (Howard et al., 2007), explaining why the Point-P data points are always at or below the Fixed-\\(\\phi\\) data points in Figure 8.
The two methods lead to different inferences about the CME's kinematic behavior. The Fixed-\\(\\phi\\) method implies an acceleration up to a maximum velocity of about 700 km s\\({}^{-1}\\) in the COR2 FOV, followed by a gradual deceleration through HI1 and into HI2. In contrast, the Point-P method suggests that the CME accelerates to about 500 km s\\({}^{-1}\\) in the COR2 FOV and then continues to accelerate more gradually through HI1 and into HI2, before decelerating precipitously in HI2. However, this last precipitous deceleration is clearly an erroneous artifact of the Point-P geometry, which assumes that the CME has a very broad angular extent, encompassing all potentially observed position angles relative to the Sun, and implicitly assuming that the CME engulfs the observer when it reaches 1 AU. That is why equation (1) does not even allow the possibility of measuring \\(r\\) greater than 1 AU.
However, we know that the Feb. 4 CME does _not_ hit the observer (i.e., STEREO-A).
We have argued near the end of §3.2 that the Feb. 4 CME does not have a very large angular extent, and that the extent of the observed part of the CME is even more limited (see Fig. 1). Thus, the Fixed-\(\phi\) geometry is a much better approximation for this particular event. It is important to note that this conclusion will not be the case for broader, brighter CMEs, where the Point P approach might work better. The Fixed-\(\phi\) method does have the disadvantage that it requires a reasonably accurate knowledge of \(\phi\), though the known flare location provides a good estimate. And as the CME travels outwards, there will still be some degree of uncertainty introduced by the likelihood that the observed leading edge is not following precisely the same part of the CME structure at all times.
The effects of these uncertainties can be explored by comparing the CME velocities measured in the HI2-A FOV with the in situ velocity observed by PLASTIC-B. If the uncertainties are low, the SECCHI image-derived velocities should agree well with the PLASTIC-B velocity. The Fixed-\(\phi\) velocities measured in the HI2 FOV in Figure 8 (at times of \(t\gtrsim 30\) hr) average around 530 km s\({}^{-1}\), somewhat higher than the 450 km s\({}^{-1}\) velocity seen by PLASTIC-B (see Fig. 7). This is presumably indicative of the aforementioned systematic uncertainties. Figure 9 illustrates how the discrepancy can be addressed by lowering the assumed CME trajectory angle, \(\phi\). Figure 9 plots \(r\) versus \(\epsilon\) for many values of \(\phi\), computed using equation (2). The curves steepen in the HI2 FOV (\(\epsilon=19^{\circ}-89^{\circ}\)) as \(\phi\) increases, meaning that velocities inferred from these distances will also increase. Thus, lowering \(\phi\) below the \(\phi=46^{\circ}\) value assumed in Figure 8 will lower the inferred HI2 velocities.
In order to determine which \(\phi\) value works best, we perform a somewhat more sophisticated kinematic analysis than that in Figure 8. Compared to the point-by-point analysis used in Figure 8, a cleaner and smoother velocity profile can be derived from the data if the distance measurements are fitted with some functional form, which in essence assumes that the timescale of velocity variation is long compared to the time difference between adjacent distance measurements. Polynomial or spline fits are examples of such functional forms that can be used for these purposes. However, we ultimately decide on a different approach, relying on a very simple physical model of the CME's motion. This model assumes an initial acceleration for the CME, \(a_{1}\), which persists until a time, \(t_{1}\), followed by a second acceleration (or deceleration), \(a_{2}\), lasting until time \(t_{2}\), followed finally by constant velocity. This model also has two additional free parameters: a starting height, and a time shift of the model distance-time profile to match the data. The two-phase model bears some resemblance to the "main" and "residual" acceleration phases of a CME argued for by Zhang & Dere (2006). But to us the appeal of this simple model is that not only are its parameters physical ones of interest, it also seems to fit the data as well or better than more complex functional forms, despite having only six free parameters.
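The six-parameter model is straightforward to code. The sketch below is one possible implementation (parameter names, units, and the choice of optimizer are ours, not the authors'): a piecewise-constant-acceleration distance profile with accelerations \(a_1\) and \(a_2\), break times \(t_1\) and \(t_2\), a starting height, and a time shift, fitted by weighted least squares.

```python
import numpy as np
from scipy.optimize import least_squares

def model_distance(t, a1, t1, a2, t2, r0, t_shift):
    """Leading-edge distance for the two-phase kinematic model:
    acceleration a1 until t1, acceleration a2 until t2 (with t2 > t1),
    then constant velocity. r0 is the starting height and t_shift
    aligns the model clock with the measurement times."""
    t = np.clip(np.asarray(t, dtype=float) - t_shift, 0.0, None)
    ta = np.minimum(t, t1)              # time spent in phase 1
    tb = np.clip(t - t1, 0.0, t2 - t1)  # time spent in phase 2
    tc = np.clip(t - t2, 0.0, None)     # time spent coasting
    v1 = a1 * t1                        # speed at the end of phase 1
    v2 = v1 + a2 * (t2 - t1)            # final coast speed
    return r0 + 0.5 * a1 * ta**2 + v1 * tb + 0.5 * a2 * tb**2 + v2 * tc

def fit_kinematics(t_obs, r_obs, sigma, p0):
    """Chi-squared (weighted least-squares) fit of the six parameters."""
    resid = lambda p: (model_distance(t_obs, *p) - r_obs) / sigma
    return least_squares(resid, p0)
```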
Figure 10 shows our best fit to the data using this model. The top panel shows the leading edge distances computed assuming \(\phi=38^{\circ}\), which turns out to be the value that leads to the observed PLASTIC-B velocity of 450 km s\({}^{-1}\) in the HI2-A FOV. The solid line shows our best fit to the data, determined using a chi-squared minimization routine, where we have assumed the same fractional errors in the distance measurements as we did in Figure 8 (see above). With these assumed uncertainties, the best fit ends up with a reduced chi-squared of \(\chi^{2}_{\nu}=1.33\). This agrees well with the \(\chi^{2}_{\nu}\approx 1\) value expected for a good fit (Bevington & Robinson 1992), which implies that the error bars assumed for our measurements are neither unrealistically small nor unreasonably large.
The bottom two panels of the figure show the velocity and acceleration profiles implied by this fit. The velocity at 1 AU (214 \(R_{\odot}\)) in the HI2-A FOV ends up at 450 km s\({}^{-1}\) as promised. It should be emphasized that in forcing the HI2-A velocity to be consistent with the PLASTIC-B measurement, we are implicitly assuming that the part of the CME front being observed by HI2-A has the same velocity as the part of the CME front that hits STEREO-B. Essentially, this amounts to assuming that the CME front is roughly spherical and centered on the Sun at 1 AU, as pictured in Figure 1 and argued for in §3.2. The excellent agreement between the CME velocity seen by PLASTIC-B and that seen at L1 by ACE and SOHO/CELIAS also implies that this assumption is a very good one for this event. But this may not be the case for all events, so comparing HI2-A and PLASTIC-B velocities may not always be appropriate.
The \\(\\phi=38^{\\circ}\\) value assumed in Figure 10 is \\(8^{\\circ}\\) less than the \\(\\phi=46^{\\circ}\\) value that radial outflow from the observed flare site would suggest. This result could indicate that the CME's overall center-of-mass trajectory is truly at least \\(8^{\\circ}\\) closer to the STEREO-A direction than the flare site would predict. (More if there is a component of deflection perpendicular to the ecliptic plane.) In SS3.1 we noted that the COR1-B and COR2-B images imply a deflection of the CME into a more southwesterly trajectory than suggested by the flare site, possibly due to the adjacent coronal hole. The western component of this deflection would indeed predict a CME trajectory less than \\(\\phi=46^{\\circ}\\) angle suggested by the flare. Thus, interpreting the \\(8^{\\circ}\\) shift as due to this deflection is quite plausible.
However, this interpretation comes with two major caveats. One is that the part of the CME seen as the leading edge by HI2-A is not necessarily representative of either the geometric center of the CME, or its center-of-mass. Measurements from a location different from that of STEREO-A could in principle see a different part of the CME front as being the leading edge, thereby leading to a different trajectory measurement. The second caveat is the aforementioned issue of the observed leading edge not necessarily faithfully following the same part of the CME front at all times, which could yield velocity measurement errors and therefore an erroneous \\(\\phi\\) measurement.
Figure 10 represents our best kinematic model of the February 4 CME, which can be described as follows. The model suggests that the CME's leading edge has an initial acceleration of \\(a_{1}=159\\) m s\\({}^{-2}\\) for its first \\(t_{1}=1.1\\) hours, reaching a maximum velocity of \\(689\\) km s\\({}^{-1}\\) shortly after entering the COR2 FOV. Until \\(t_{2}=33\\) hours the CME then gradually decelerates at a rate of \\(a_{2}=-2.1\\) m s\\({}^{-2}\\) during its journey through the COR2 and HI1 fields of view, eventually reaching its final coast velocity of \\(451\\) km s\\({}^{-1}\\) shortly after reaching the HI2 FOV, this velocity being consistent with the PLASTIC-B measurement.
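As a quick arithmetic check of the deceleration phase (our own back-of-envelope calculation using the rounded values quoted above, so small mismatches are expected):

```python
# Coast speed implied by the quoted maximum speed and deceleration:
v_max = 689.0                # km/s, maximum speed from the fit
a2 = -2.1e-3                 # km/s^2, second-phase deceleration
dt = (33.0 - 1.1) * 3600.0   # s, duration of the deceleration phase
print(v_max + a2 * dt)       # ~448 km/s, consistent with the quoted
                             # ~451 km/s coast speed given the rounding
```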
Interaction with the ambient solar wind is presumably responsible for the \(a_{2}\) deceleration inferred between 0.024 and 0.47 AU, as the PLASTIC-B data make it clear that the CME is traveling through slower solar wind plasma. Note that the Point-P measurements in Figure 8 are not only inconsistent with this \(a_{2}\) IPM deceleration, but they would actually imply an _acceleration_ of the CME at that time. This emphasizes the importance of the issue of how to compute distances from elongation angles. Even basic qualitative aspects of a CME's IPM motion, such as whether it accelerates or decelerates, depend sensitively on this issue. Given that the CME is plowing through slower solar wind material, an IPM deceleration seems far more plausible than an acceleration. This is yet another argument that the Fixed-\(\phi\) geometry is better for this particular event than the Point-P geometry.
### 3.4 Implications for Space Weather Prediction
An event like the February 4 CME is perfect for assessing the degree to which the unique viewpoint of the STEREO spacecraft can yield better estimates of arrival times for Earth-directed CMEs. The February 4 CME is directed at STEREO-B, so STEREO-B's in situ instruments tell us exactly when the CME reaches 1 AU, but the STEREO-B images of the event close to the Sun provide very poor velocity estimates by themselves because of the lack of knowledge of the CME's precise trajectory. For full halo CMEs, Schwenn et al. (2005) provide a prescription to determine the true expansion velocity from its lateral expansion (see also Schwenn, 2006). But the uncertainties remain large, and in any case this prescription is not helpful for our February 4 event, which is barely perceived as a partial halo, let alone a full halo. Therefore, it is STEREO-A that by far provides the best assessment of the CME's kinematic behavior thanks to its location away from the CME's path. And it is STEREO-A that is therefore in a much better position to predict ahead of time when the event should reach STEREO-B.
To better quantify this, we imagine a situation where only STEREO-B data is available. The CME has just taken place and has been observed by COR1-B and COR2-B as shown in Figures 3 and 4. We can then ask the question, what would our estimated CME velocity be from the STEREO-B data alone, and what would be the predicted arrival time at 1 AU? The apparent plane-of-sky velocity of the CME in the COR2-B images is about 240 km s\({}^{-1}\) (assuming \(\phi=90^{\circ}\)). The total travel time to Earth at this speed is about 174 hours, leading to a predicted arrival time of roughly February 11, 15:00 UT. This is 4 days after the actual arrival time on February 7, so this prediction is obviously very poor!
If the EUVI-B flare location is used to provide an estimated trajectory of \\(\\phi=10^{\\circ}\\), the velocity estimated from the COR2-B data increases dramatically to about 1000 km s\\({}^{-1}\\). In this case the 1 AU travel time decreases to only 42 hours, corresponding to a predicted arrival time of February 6, 3:00 UT. This is well over a day _before_ the actual arrival time. For events directed at the observing spacecraft, CME velocity measurements are particularly sensitive to uncertainties in the exact trajectory angle. There is also the problem that the observed leading edge motion in the COR2-B images may have more to do with the lateral expansion of the CME rather than the motion outwards from the Sun.
If a similar thought experiment is done for the STEREO-A data, the COR2-A images alone and an assumption of the \(\phi=46^{\circ}\) trajectory suggested by the EUVI-A flare site lead to a CME velocity in COR2-A of about 590 km s\({}^{-1}\). This corresponds to a 1 AU travel time of about 71 hours, leading to a predicted arrival time at 1 AU of February 7, 8:00 UT. This is only a few hours after the arrival of the CME suggested by the IMPACT-B magnetic field data (see Fig. 7), though it is about 13 hours before the peak density seen by PLASTIC-B, which is what the CME front observed by the SECCHI imagers actually corresponds to (see §3.3). Improving the arrival time prediction of the peak CME density would require taking into account the deceleration of the CME during its travel through the IPM (see Fig. 10).
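These three predictions amount to dividing 1 AU by a constant speed and adding the travel time to the launch time; a sketch reproducing them (the small offsets from the quoted times reflect the rounding of the quoted speeds and travel times):

```python
from datetime import datetime, timedelta

AU_KM = 1.496e8
launch = datetime(2008, 2, 4, 8, 36)  # approximate flare/CME onset time

for label, v in [("COR2-B, phi=90 deg", 240.0),
                 ("COR2-B, phi=10 deg", 1000.0),
                 ("COR2-A, phi=46 deg", 590.0)]:
    hours = AU_KM / v / 3600.0
    eta = launch + timedelta(hours=hours)
    print(f"{label}: {hours:5.0f} h -> {eta:%b %d, %H:%M} UT")
```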
It is clear that STEREO-A's perspective provides a dramatic improvement in our ability to predict when the February 4 CME reaches STEREO-B. An analysis of multiple events like this one would allow this improvement to be better quantified and, in the spirit of previous analyses such as Gopalswamy et al. (2001), could also provide empirical guidance on how to predict the deceleration during IPM travel (or perhaps acceleration in some cases), which is clearly necessary to achieve arrival time estimates that are good to within a few hours. The ability of SECCHI to provide continuous tracking information on CMEs could in principle allow time-of-arrival estimates to be continuously improved during a CME's journey to 1 AU.
## 4 Summary
We have presented STEREO observations of a CME that occurred in the depths of the 2008 solar minimum, when there were not many of these events taking place. The February 4 CME is not a particularly dramatic event, but it has an advantageous trajectory. It is directed at STEREO-B, so that it eventually hits that spacecraft and is detected by its in situ instruments. A different part of the CME hits the ACE and SOHO spacecraft at Earth's L1 Lagrangian point. The CME's trajectory is far enough away from the STEREO-A direction that STEREO-A images can provide an accurate assessment of the CME's kinematic behavior, which is not possible from STEREO-B's location. This event illustrates just how much the appearance of a CME can differ between the two STEREO spacecraft, which at the time had an angular separation relative to the Sun of 45.3\\({}^{\\circ}\\), a separation that continues to increase with time by about 44\\({}^{\\circ}\\) per year.
Despite the relative faintness of the event, the SECCHI imagers are still able to track it continuously all the way from the Sun to 1 AU, which provides hope that as the Sun moves towards solar maximum, STEREO will be able to provide similarly comprehensive observations of many more such CMEs. The kinematic analysis presented here is the first based on such a comprehensive STEREO data set, involving both SECCHI images and in situ data, but hopefully many others will follow. We have used two different methods of computing CME leading edge distances from measured elongation angles: 1. The Point-P method, which assumes the CME is a broad, uniform, spherical front; and 2. The Fixed-\\(\\phi\\) method, which assumes a narrow, compact CME structure traveling radially from the Sun. Our analysis illustrates just how sensitive conclusions about the kinematic behavior of a CME are to the method used. The first method suggests continued acceleration in the IPM, while the second implies a deceleration. Fortunately, the comprehensive nature of observations of the February 4 CME has provided us with an abundance of evidence that the observable part of this CME has a very limited angular extent. Therefore, the Fixed-\\(\\phi\\) method is clearly best in this case, leading to our best kinematic model for the CME in Figure 10. But we do not expect the Fixed-\\(\\phi\\) method to necessarily be the best option for all STEREO-observed CMEs.
Finally, the geometry of the event allows us to use the two spacecrafts' observations to quantify just how much more accurately the CME's arrival time at 1 AU can be predicted using images taken away from the CME's path (from STEREO-A in this case), compared to images taken from directly within it (from STEREO-B in this case). The STEREO-A prediction proves to be dramatically better than STEREO-B's. Thus, STEREO should in principle be able to improve space weather forecasting for Earth-directed events in the coming years.
We would like to thank Neil Sheeley and Peter Schroeder for helpful discussions and assistance in this project. The STEREO/SECCHI data are produced by a consortium of NRL (US), LMSAL (US), NASA/GSFC (US), RAL (UK), UBHAM (UK), MPS (Germany), CSL (Belgium), IOTA (France), and IAS (France). In addition to funding by NASA, NRL also received support from the USAF Space Test Program and ONR. In addition to SECCHI, this work has also made use of data provided by the STEREO IMPACT and PLASTIC teams, supported by NASA contracts NAS5-00132 and NAS5-00133. We have also made use of data provided by the CELIAS/MTOF experiment on SOHO, which is a joint ESA and NASA mission. We thank the ACE SWEPAM and MAG instrument teams and the ACE Science Center for providing the ACE data.
## References
* Acuna et al. (2008) Acuna, M. H., Curtis, D., Scheifele, J. L., Russell, C. T., Schroeder, P., Szabo, A., & Luhmann, J. G. 2008, Space Sci. Rev., 136, 203
* Andrews (2002) Andrews, M. D. 2002, Sol. Phys., 208, 317
* Bevington & Robinson (1992) Bevington, P. R., & Robinson, D. K. 1992, Data Reduction and Error Analysis for the Physical Sciences (New York: McGraw-Hill)
* Bougeret et al. (2008) Bougeret, J. L., et al. 2008, Space Sci. Rev., 136, 487
* Cremades & Bothmer (2004) Cremades, H., & Bothmer, V. 2004, A&A, 422, 307
* Eyles et al. (2003) Eyles, C. J., et al. 2003, Sol. Phys., 217, 319
* Galvin et al. (2008) Galvin, A. B., et al. 2008, Space Sci. Rev., 136, 437
* Gopalswamy et al. (2001) Gopalswamy, N., Lara, A., Yashiro, S., Kaiser, M. L., & Howard, R. A. 2001, J. Geophys. Res., 106, 29207
* Harrison et al. (2008) Harrison, R. A., et al. 2008, Sol. Phys., 247, 171
* Howard et al. (2008) Howard, R. A, et al. 2008, Space Sci. Rev., 136, 67
* Howard et al. (2007) Howard, T. A., Fry, C. D., Johnston, J. C., & Webb, D. F. 2007, ApJ, 667, 610
* Howard et al. (2008) Howard, T. A., Nandy, D., & Koepke, A. C. 2008, J. Geophys. Res., 113, A01104
* Jackson et al. (2004) Jackson, B. V., et al. 2004, Sol. Phys., 225, 177
* Jian et al. (2006) Jian, L., Russell, C. T., Luhmann, J. G., & Skoug, R. M. 2006, Sol. Phys., 239, 393
* Kahler & Webb (2007) Kahler, S. W., & Webb, D. F. 2007, J. Geophys. Res., 112, A09103
* Luhmann et al. (2008) Luhmann, J. G., et al. 2008, Space Sci. Rev., 136, 117
* Schwenn (2006) Schwenn, R. 2006, Living Rev. Solar Phys., 3, 2, URL: [http://www.livingreviews.org/lrsp-2006-2](http://www.livingreviews.org/lrsp-2006-2)
* Schwenn et al. (2005) Schwenn, R., dal Lago, A., Huttunen, E., & Gonzalez, W. D. 2005, Ann. Geophys., 23, 1033
* Sheeley et al. (2008a) Sheeley, N. R., Jr., et al. 2008a, ApJ, 674, L109
* Sheeley et al. (2008b) Sheeley, N. R., Jr., et al. 2008b, ApJ, 675, 853
* Vourlidas & Howard (2006) Vourlidas, A., & Howard, R. A. 2006, ApJ, 642, 1216
* Webb et al. (2006) Webb, D. F., et al. 2006, J. Geophys. Res., 111, A12101
* Wood et al. (1999) Wood, B. E., Karovska, M., Chen, J., Brueckner, G. E., Cook, J. W., & Howard, R. A. 1999, ApJ, 512, 484
* Zhang & Dere (2006) Zhang, J., & Dere, K. P. 2006, ApJ, 649, 1100

Figure 1: The locations of Earth, STEREO-A, STEREO-B, and the Sun (at the origin) on 2008 February 4 in heliocentric Aries ecliptic coordinates. The red, green, and blue dotted lines indicate the fields of view of the COR2, HI1, and HI2 telescopes on board STEREO-A and B. The purple arc is an estimated location for the Feb. 4 CME's leading edge as it approaches 1 AU, where the part of the arc represented as a solid line is the part of the CME that we detect in SECCHI images from STEREO-A, and the dotted line indicates the parts of the CME front that we do not see (see §3.2).
Figure 2: Sequences of EUVI images taken near the beginning of the 2008 Feb. 4 CME. The upper 2 sequences are He II \\(\\lambda\\)304 and Fe XII \\(\\lambda\\)195 images from STEREO-A and the bottom 2 sequences are He II and Fe XII images from STEREO-B. The arrows point to a region that flares weakly during the event. The EUVI-A He II \\(\\lambda\\)304 images also show a prominence eruption off the southeast limb.
Figure 3: Running-difference COR1 images from STEREO-A (top) and STEREO-B (bottom) showing the 2008 Feb. 4 CME erupting off the southeast limb in COR1-A, but primarily off the southwest limb in COR1-B. The white circle indicates the location of the solar disk.
Figure 4: Running-difference COR2 images from STEREO-A (top) and STEREO-B (bottom) showing the 2008 Feb. 4 CME. The CME is clearly seen off the east limb in COR2-A, but it is much fainter and primarily off the southwest limb in COR2-B (arrows). The white circle indicates the location of the solar disk. Beyond the occulter there is some additional masking for COR2-B to hide blooming caused by a slight miscentering of the Sun behind the occulting disk.
Figure 5: Running-difference HI1 images from STEREO-A (top) and STEREO-B (bottom) showing the 2008 Feb. 4 CME. The Sun is to the right in the HI1-A images and to the left for HI1-B (see Fig. 1). The CME front is obvious in HI1-A, but is only faintly visible in the lower left corner of the last two HI1-B images (arrows).
Figure 6: Running-difference HI2 images from STEREO-A showing the 2008 Feb. 4 CME (arrows). The positions of the Earth, SOHO, and STEREO-B are also shown. The first image shows the bright CME front as it enters the field of view on Feb. 5, but the CME front quickly fades and becomes confused with a CIR structure in the background, which is gradually rotating towards the observer. The second image shows the CME front just before it crosses the apparent position of STEREO-B on Feb. 7.
Figure 7: Proton density, solar wind velocity, and magnetic field strength are plotted versus time using data from the PLASTIC and IMPACT instruments on STEREO A and B. Also included are data from ACE and the CELIAS instrument on SOHO, both residing at Earth’s L1 Lagrangian point. The 2008 Feb. 4 CME is observed by STEREO-B on Feb. 7, and much more weakly by ACE and SOHO/CELIAS. It is not seen at all by STEREO-A. A CIR is observed a couple days after the CME by STEREO-B, at a later time by SOHO/CELIAS and ACE, and later still by STEREO-A as the structure rotates past the various spacecraft.
Figure 8: The top panel shows two different versions of the distance-vs.-time plot for the leading edge of the Feb. 4 CME, computed using two different methods to get from measured elongation angle to physical distance from Sun-center. The green measurements assume the “Point-P” method (equation 1), and the red data points assume the “Fixed-\\(\\phi\\)” method (equation 2). The symbols indicate which SECCHI imager on STEREO-A is responsible for the measurement. The bottom panel shows velocities computed from the distance measurements.
Figure 9: A plot of inferred distance from Sun-center (\\(r\\)) as a function of measured elongation angle \\(\\epsilon\\), for seven values of the CME trajectory angle \\(\\phi\\), using equation (2). The \\(\\phi=46^{\\circ}\\) curve is emphasized since that is the trajectory angle suggested by the flare location (see Fig. 2).
Figure 10: The top panel shows the distance from Sun-center of the leading edge of the 2008 Feb. 4 CME as a function of time, assuming the CME trajectory angle is \(\phi=38^{\circ}\) from the line of sight. The \(t=0\) time is 8:36 UT, roughly when the flare associated with this CME begins. The symbols indicate which SECCHI imager on STEREO-A is responsible for the measurement. The data points are fitted with a simple kinematic model assuming an initial acceleration phase, a second deceleration phase, and then a constant velocity phase. The best fit is shown as a solid line in the top panel. The bottom two panels show the velocity and acceleration profiles suggested by this fit.
Sun: activity -- Sun: coronal mass ejections (CMEs) -- solar wind -- interplanetary medium
# Constraints on nuclear matter parameters of an Effective Chiral Model
T. K. Jha\\({}^{1}\\)
[email protected]
H. Mishra\\({}^{1,2,3}\\)
[email protected] \\(1\\) Theoretical Physics Division, Physical Research Laboratory, Navrangura, Ahmedabad, India - 380 009 \\(2\\) Institut fur Theoretische Physik, J. W. Goethe Universitat, Max-von-Laue-Str. 1, 60438 Frankfurt am Main, Germany \\(3\\) School of Physical Sciences, Jawaharlal Nehru University, New Delhi, India 110067
November 4, 2021
Within an effective non-linear chiral model, we evaluate nuclear matter parameters exploiting the uncertainties in the nuclear saturation properties. The model is sternly constrained, with minimal free parameters, and displays the interlink between the nuclear incompressibility (\(K\)), the nucleon effective mass (\(m^{*}\)), the pion decay constant (\(f_{\pi}\)) and the \(\sigma-\)meson mass (\(m_{\sigma}\)). The best fit among the various parameter sets is then extracted and employed to study the resulting equation of state (EOS). Further, we also discuss the consequences of imposing constraints on the nuclear EOS from heavy-ion collisions and other phenomenological model predictions.
## I Introduction
The framework of Quantum Hadrodynamics [1; 2], as an elegant and consistent theoretical treatment of finite nuclei as well as infinite nuclear matter, laid down the pillars of the relativistic theories that seem to provide a solution to the so-called "_Coester band_" problem [3; 4]. However, our present knowledge of nuclear matter is confined to the vicinity of the nuclear saturation density (\(\rho_{0}\approx 3\times 10^{14}\,g\,cm^{-3}\)), and therefore, in order to draw meaningful correlations while extrapolating to higher densities, the nuclear equation of state (EOS) must satisfy certain minimum criteria quantified as the "_nuclear saturation properties_", which are physical constants of nature. Basically, it is understood that the uncertainty inherited at \(\rho_{0}\) gets more pronounced at the higher densities (\(3-10\)\(\rho_{0}\)) relevant to the astrophysical context, such as the modeling of neutron stars. In this context, the two most important quantities which play a vital role and are known to have a substantial impact on the EOS are the nucleon effective mass and the nuclear incompressibility [5; 6]. Ironically, these two properties are not very well determined and possess large uncertainties. The nuclear incompressibility derived from nuclear measurements and astrophysical observations exhibits a broad range of values, \(K=(180-800)\) MeV [7]. Further, the non-relativistic and the relativistic models fail to reach a common consensus: non-relativistic calculations predict the compression modulus in the range \(K=(210-240)\) MeV [8; 9; 10], whereas relativistic calculations predict it in the range \((200-300)\) MeV [11; 12]. Apart from that, we are inevitably marred by the uncertainty in the determination of the mass of the scalar meson (\(\sigma\)-meson). The attractive force resulting from the scalar sector is responsible for the intermediate-range attraction which, along with the repulsive vector forces, provides the saturation mechanism for nuclear matter [2]. The estimate from the Particle Data Group quotes the mass of this scalar meson, '\(f_{0}(600)\)' or the \(\sigma-\)meson, in the range \((400-1200)\) MeV [13]. A recent estimate, however, puts the sigma meson mass at \(513\pm 32\) MeV [14].
Phenomenologically, parallel to the well known \(\sigma-\omega\) model, popularly known as the Walecka model [1; 2; 15], chiral models [16; 17; 18; 19; 20; 21; 22] have been developed and applied to nuclear matter studies. Chiral symmetry is a symmetry of the strong interactions in the limit of vanishing quark masses and is desirable in any relativistic theory. However, because the current quark masses are small but finite, this symmetry can be considered an approximate symmetry. It is spontaneously broken in the ground state. In the context of \(\sigma-\)models, the \(\sigma-\)field (which carries the quantum numbers of the vacuum) attains a finite vacuum expectation value \(\langle\sigma\rangle=\sigma_{0}=f_{\pi}\). Equivalently, the potential for the \(\sigma-\)field attains a minimum at \(f_{\pi}\) [23; 24]. The value of \(f_{\pi}\) reflects the strength of the symmetry breaking, and experimentally it is found to be \(f_{\pi}\approx 131\) MeV [13].
Time and again, the aforesaid facts and figures emphasize the need to impose constraints on the EOS to narrow down the uncertainties, both experimentally and theoretically. Arguably, to address these issues one needs a model that has the desired attributes of the relativistic framework and that can be successfully applied to various nuclear force problems, both in the vicinity of \(\rho_{0}\) and at higher densities, with the same set of parameters. With this motivation, we choose a model [25] which embodies chiral symmetry and has a minimum number of free parameters (five in total) to reproduce the saturation properties. The spontaneous breaking of chiral symmetry relates the mass of the hadrons to the vacuum expectation value of the scalar field and thus naturally restricts the parameters of the model. Therefore the present study, apart from testing the reliability of the model, puts a valuable constraint on the EOS based on the pion decay constant and brings out the correlations between the pion decay constant (\(f_{\pi}\)), the \(\sigma-\)meson mass (\(m_{\sigma}\)), the nuclear incompressibility (\(K\)) and the nucleon effective mass (\(m^{\star}\)).
In Section II, we briefly describe the basic ingredients of the hadronic model, and the energy and pressure of the many-baryon system are computed following the mean-field ansatz. The subsequent section (Section III) describes the methodology used to evaluate the model parameters. In Section IV, we discuss the consequences of imposing various constraints on the model parameters and extract the best fit among the various parameter sets, which is then applied in Section V to study the resulting EOS of symmetric nuclear matter. Finally, we conclude with some important findings of this work.
## II The effective chiral model
Using the chiral sigma model with a dynamically generated mass for the vector meson, Glendenning studied finite temperature aspects of nuclear matter and their application to neutron stars [26]. However, the \(\rho-\)meson and the influence of isospin symmetry were not considered there. Although it was a nice framework respecting chiral symmetry, a drawback was its unacceptably high incompressibility, and in the subsequent extension of the model [27] the mass of the vector meson is not generated dynamically. The model that we consider [25] in our present analysis embodies higher orders of the scalar field in addition to the dynamically generated mass of the vector meson. Without higher order scalar field interactions, the model was first employed to study high density matter [28]. To bring down the resulting high incompressibility, a non-linear interaction in the scalar field was included in later work [29], and the model was subsequently applied to study nuclear matter at finite temperature [30]. The success of the model then motivated us to generalize it to include the octet of baryons and to study hyperon rich matter and the properties of neutron stars [25; 31]. However, in earlier works the parameter sets that were employed were not studied and analyzed in detail with respect to the inherent vacuum properties of chiral symmetry. Moreover, rather than being a phenomenological fit, the parameters must be constrained meaningfully, so that the resulting EOS is more realistic and purposeful. Motivated by this, we presently explore the consequences of imposing stringent constraints on the model parameters, not only with the properties known at saturation density but also by confronting the resulting EOS with other phenomenological model predictions and experimental data at high density. In addition, the correlation of various quantities with the vacuum value of the scalar field naturally spells out a definite interlink between them. We now proceed to describe the salient features of the present model. The effective Lagrangian of the model, interacting through the exchange of the pseudo-scalar meson \(\pi\), the scalar meson \(\sigma\), the vector meson \(\omega\) and the iso-vector \(\rho-\)meson, is given by:
\[{\cal L} = \bar{\psi}_{B}\left[\left(i\gamma_{\mu}\partial^{\mu}-g_{\omega}\gamma_{\mu}\omega^{\mu}-\frac{1}{2}g_{\rho}\vec{\rho}_{\mu}\cdot\vec{\tau}\gamma^{\mu}\right)-g_{\sigma}\left(\sigma+i\gamma_{5}\vec{\tau}\cdot\vec{\pi}\right)\right]\psi_{B}\] \[+\frac{1}{2}\big{(}\partial_{\mu}\vec{\pi}\cdot\partial^{\mu}\vec{\pi}+\partial_{\mu}\sigma\,\partial^{\mu}\sigma\big{)}-\frac{\lambda}{4}\big{(}x^{2}-x_{0}^{2}\big{)}^{2}-\frac{\lambda b}{6m^{2}}\big{(}x^{2}-x_{0}^{2}\big{)}^{3}-\frac{\lambda c}{8m^{4}}\big{(}x^{2}-x_{0}^{2}\big{)}^{4}\] \[-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{1}{2}g_{\omega}^{2}x^{2}\omega_{\mu}\omega^{\mu}-\frac{1}{4}\vec{R}_{\mu\nu}\cdot\vec{R}^{\mu\nu}+\frac{1}{2}m_{\rho}^{2}\vec{\rho}_{\mu}\cdot\vec{\rho}^{\mu}. \tag{1}\]
The first line of the above Lagrangian represents the interaction of the nucleon isospin doublet \(\psi_{B}\) with the aforesaid mesons. In the second line we have the kinetic and the non-linear terms in the pseudo-scalar-isovector pion field '\(\vec{\pi}\)' and the scalar field '\(\sigma\)', together with higher order terms of the scalar field written in terms of the chirally invariant combination of the two, i.e., \(x^{2}=\vec{\pi}^{2}+\sigma^{2}\). Finally, in the last line, we have the field strength and mass terms for the vector field '\(\omega\)' and the iso-vector field '\(\vec{\rho}\)'. Here \(g_{\sigma},g_{\omega}\) and \(g_{\rho}\) are the usual meson-nucleon coupling strengths of the scalar, vector and iso-vector fields, respectively. We shall be concerned only with the normal, non-pion-condensed state of matter, so we take \(\langle\vec{\pi}\rangle=0\) and also \(m_{\pi}=0\).
The interaction of the scalar and the pseudoscalar mesons with the vector boson generates a dynamical mass for the vector bosons through the spontaneous breaking of chiral symmetry, with the scalar field attaining the vacuum expectation value \(x_{0}\). The masses of the nucleon (\(m\)), the scalar meson (\(m_{\sigma}\)) and the vector meson (\(m_{\omega}\)) are then related to \(x_{0}\) through
\\[m=g_{\\sigma}x_{0},\\ \\ m_{\\sigma}=\\sqrt{2\\lambda}x_{0},\\ \\ m_{\\omega}=g_{\\omega}x_{0}. \\tag{2}\\]
To obtain the equation of state, we resort to the mean-field procedure, in which one assumes the mesonic fields to be uniform, i.e., without any quantum fluctuations. We recall here that this approach has been extensively used to obtain field-theoretical EOS for high density matter [27], and it becomes increasingly valid when the source terms are large [2]. The details of the present model and its attributes, such as the derivation of the equations of motion of the meson fields and its equation of state (\(\varepsilon\) & \(P\)), can be found in our preceding work [25; 31]. For the sake of completeness, however, we write down the meson field equations in the mean-field ansatz. The vector field (\(\omega\)), the scalar field (\(\sigma\)) (in terms of \(Y=x/x_{0}=m^{\star}/m\)) and the isovector field (\(\rho\)) are respectively given by
\\[\\omega_{0}=\\sum_{B}\\frac{\\rho_{B}}{g_{\\omega}x^{2}}, \\tag{3}\\]
\\[(1-Y^{2})-\\frac{b}{m^{2}c_{\\omega}}(1-Y^{2})^{2}+\\frac{c}{m^{4}c_{\\omega}^{2} }(1-Y^{2})^{3}+\\frac{2c_{\\sigma}c_{\\omega}\\rho_{B}^{2}}{m^{2}Y^{4}}-\\frac{2c_ {\\sigma}\\rho_{S}}{mY}=0 \\tag{4}\\]
\\[\\rho_{03}=\\sum_{B}\\frac{g_{\\rho}}{m_{\\rho}^{2}}I_{3}\\ \\rho_{B}. \\tag{5}\\]
The quantities \(\rho_{B}\) and \(\rho_{S}\) are the vector (baryon) and scalar densities, defined as
\\[\\rho_{B}=\\frac{\\gamma}{(2\\pi)^{3}}\\int_{o}^{k_{F}}d^{3}k, \\tag{6}\\]
\\[\\rho_{S}=\\frac{\\gamma}{(2\\pi)^{3}}\\int_{o}^{k_{F}}\\frac{m^{\\star}d^{3}k}{ \\sqrt{k^{2}+{m^{\\star}}^{2}}}. \\tag{7}\\]
In the above, '\(k_{F}\)' is the Fermi momentum of the baryon and \(\gamma=4\) (symmetric matter) is the spin-isospin degeneracy factor. For symmetric nuclear matter (\(N=Z\)), we neglect the contribution from the \(\rho-\)meson. The nucleon effective mass is then \(m^{\star}\equiv Ym\), and \(c_{\sigma}\equiv g_{\sigma}^{2}/m_{\sigma}^{2}\) and \(c_{\omega}\equiv g_{\omega}^{2}/m_{\omega}^{2}\) are the scalar and vector coupling parameters that enter our calculations.
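Where useful, these densities can be evaluated directly. The following minimal Python sketch (the function and variable names are ours, not from the original work) uses natural units throughout, with masses and momenta in \(fm^{-1}\) and \(\hbar c=197.327\) MeV fm:

```python
import numpy as np

HBARC = 197.327  # MeV*fm; converts between MeV and fm^-1

def kf_from_rho(rho_b, gamma=4):
    """Fermi momentum k_F (fm^-1) for baryon density rho_b (fm^-3), inverting eqn. (6)."""
    return (6.0 * np.pi**2 * rho_b / gamma) ** (1.0 / 3.0)

def rho_scalar(kf, m_star, gamma=4):
    """Scalar density (fm^-3): closed form of the integral in eqn. (7),
    which reduces to eqn. (18) below for gamma = 4."""
    ef = np.sqrt(kf**2 + m_star**2)
    return (gamma / (4.0 * np.pi**2)) * m_star * (
        kf * ef - m_star**2 * np.log((kf + ef) / m_star))
```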
The total energy density '\(\varepsilon\)' and pressure '\(P\)' of symmetric nuclear matter at a given baryon density are:
\\[\\varepsilon = \\frac{\\gamma}{2\\pi^{2}}\\int_{o}^{k_{F}}k^{2}dk\\sqrt{k^{2}+{m^{ \\star}}^{2}}+\\frac{m^{2}(1-Y^{2})^{2}}{8c_{\\sigma}}-\\frac{b}{12c_{\\omega}c_{ \\sigma}}(1-Y^{2})^{3}+\\frac{c}{16m^{2}c_{\\omega}^{2}c_{\\sigma}}(1-Y^{2})^{4}+ \\frac{c_{\\omega}\\rho_{B}^{2}}{2Y^{2}} \\tag{8}\\] \\[P = \\frac{\\gamma}{6\\pi^{2}}\\int_{o}^{k_{F}}\\frac{k^{4}dk}{\\sqrt{k^{2 }+{m^{\\star}}^{2}}}-\\frac{m^{2}(1-Y^{2})^{2}}{8c_{\\sigma}}+\\frac{b}{12c_{ \\omega}c_{\\sigma}}(1-Y^{2})^{3}-\\frac{c}{16m^{2}c_{\\omega}^{2}c_{\\sigma}}(1-Y ^{2})^{4}+\\frac{c_{\\omega}\\rho_{B}^{2}}{2Y^{2}} \\tag{9}\\]
The meson field equations for the \(\omega\) (eqn. 3) and \(\sigma\)-meson (eqn. 4) are solved self-consistently at a fixed baryon density to obtain the respective field strengths, and the corresponding energy density and pressure are then calculated.
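This self-consistency loop can be sketched as follows (Python with SciPy, reusing `rho_scalar` and `kf_from_rho` from the previous sketch). The root bracket for \(Y\) is our assumption and works near saturation for the parameter sets tabulated below; \(b\) and \(c\) here are the dimensionless Lagrangian constants, related to Table 1 through \(b=Bm^{2}\) and \(c=Cm^{4}\):

```python
from scipy.integrate import quad
from scipy.optimize import brentq

M = 939.0 / HBARC  # nucleon mass in fm^-1

def sigma_eq(Y, rho_b, cs, cw, b, c, gamma=4):
    """Left-hand side of the scalar field equation (4); vanishes at self-consistency."""
    rs = rho_scalar(kf_from_rho(rho_b, gamma), Y * M, gamma)
    u = 1.0 - Y**2
    return (u - b * u**2 / (M**2 * cw) + c * u**3 / (M**4 * cw**2)
            + 2.0 * cs * cw * rho_b**2 / (M**2 * Y**4) - 2.0 * cs * rs / (M * Y))

def eos_point(rho_b, cs, cw, b, c, gamma=4):
    """Solve eqn. (4) for Y, then return (epsilon, P) in fm^-4 from eqns. (8)-(9)."""
    kf = kf_from_rho(rho_b, gamma)
    Y = brentq(sigma_eq, 0.2, 0.999, args=(rho_b, cs, cw, b, c, gamma))
    ms = Y * M
    ek = gamma / (2 * np.pi**2) * quad(lambda k: k**2 * np.sqrt(k**2 + ms**2), 0, kf)[0]
    pk = gamma / (6 * np.pi**2) * quad(lambda k: k**4 / np.sqrt(k**2 + ms**2), 0, kf)[0]
    u = 1.0 - Y**2
    v_sigma = (M**2 * u**2 / (8 * cs) - b * u**3 / (12 * cw * cs)
               + c * u**4 / (16 * M**2 * cw**2 * cs))
    v_omega = cw * rho_b**2 / (2 * Y**2)
    return ek + v_sigma + v_omega, pk - v_sigma + v_omega
```

As a check, with parameter set 11 of Table 1 (\(c_{\sigma}=6.772\ fm^{2}\), \(c_{\omega}=1.995\ fm^{2}\), \(B=-4.274\ fm^{2}\), \(C=0.292\ fm^{4}\)) this loop returns \(Y\approx 0.85\) and \(P\approx 0\) at \(\rho_{0}=0.153\ fm^{-3}\), up to the rounding of the tabulated couplings.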
## III Evaluation of model parameters
Having calculated the thermodynamic quantities, such as the energy density and the pressure, our primary aim is to evaluate the set of parameters for the EOS that satisfies the nuclear matter properties defined at normal nuclear matter density (\(\rho_{0}\)) at zero temperature. As discussed earlier, a valid and desirable EOS must satisfy the saturation properties of symmetric nuclear matter, and the parameters of the model can be adjusted to fit them. A similar procedure has been adopted in Refs. [32; 33] to evaluate the parameters of mean-field models.
What we have at hand are five saturation properties of nuclear matter that the EOS has to satisfy: the binding energy per nucleon (\(\approx-16.3\)) MeV, the saturation density (\(\rho_{0}\approx 0.153\ fm^{-3}\)), the nuclear incompressibility (\(167-380\)) MeV, the nucleon effective mass (\(m^{\star}/m=0.75-0.90\)) and the asymmetry energy coefficient (\(J=32\pm 4\)) MeV, all defined at \(\rho_{0}\), the nuclear saturation density. The uncertainty in their values enables us to extract and study the parameters within the specified ranges, or with variations thereof, in order to analyze their effect on a particular EOS. The five parameters of the present model to be evaluated are the three meson-nucleon coupling constants (\(C_{\sigma},C_{\omega},C_{\rho}\)) and the two higher order scalar field constants (\(b\) & \(c\)).
The individual contributions to the energy density for symmetric nuclear matter (eqn. (8)) can be abbreviated as,
\\[\\varepsilon=\\varepsilon_{k}+\\varepsilon_{\\sigma}+\\varepsilon_{\\omega}, \\tag{10}\\]where,
\\[\\varepsilon_{k}=\\frac{\\gamma}{2\\pi^{2}}\\int_{o}^{k_{F}}k^{2}dk\\sqrt{k^{2}+{m^{ \\star}}^{2}}\\, \\tag{11}\\]
\\[\\varepsilon_{\\sigma} = \\frac{m^{2}(1-Y^{2})^{2}}{8c_{\\sigma}}-\\frac{b}{12c_{\\omega}c_{ \\sigma}}(1-Y^{2})^{3} \\tag{12}\\] \\[+ \\frac{c}{16m^{2}c_{\\omega}^{2}c_{\\sigma}}(1-Y^{2})^{4},\\]
and
\\[\\varepsilon_{\\omega}=\\frac{c_{\\omega}\\rho_{B}^{2}}{2Y^{2}}, \\tag{13}\\]
where \\(\\rho_{B}=\\rho_{n}+\\rho_{p}\\) is the total baryon density which is the sum of the neutron density '\\(\\rho_{n}\\)' and the proton density '\\(\\rho_{p}\\)'. The relative neutron excess is then given by \\(\\delta=(\\rho_{n}-\\rho_{p})/\\rho_{B}\\). At the standard state \\(\\rho_{B}=\\rho_{0}\\), the nuclear matter saturation density and \\(\\delta=0\\). Consequently, the standard state is then specified by the argument \\((\\rho_{0},0)\\), and the energy per particle is \\(e(\\rho_{0},0)=\\varepsilon/\\rho_{0}\\) - m = \\(a_{1}\\) = -16.3 MeV for symmetric nuclear matter. The nuclear matter EOS derived earlier can be expressed in terms of the nuclear energy density \\(\\varepsilon\\) as,
\\[\\varepsilon=\\varepsilon_{k}+\\varepsilon_{\\sigma}+\\varepsilon_{\\omega}=\\rho_{ 0}(m-a_{1}). \\tag{14}\\]
From the the equilibrium condition \\(P(\\rho_{0},0)=0\\), we have,
\\[P = -\\varepsilon\\ +\\ \\rho_{B}\\ \\frac{\\partial\\varepsilon}{\\partial \\rho_{B}} \\tag{15}\\] \\[= \\frac{1}{3}\\varepsilon_{k}\\ -\\ \\frac{1}{3}m^{\\star}\\rho_{S}\\ -\\ \\varepsilon_{\\sigma}\\ +\\ \\varepsilon_{\\omega}\\ =\\ 0.\\]
Consequently, the respective energy contributions can be expressed in terms of these specified values at the saturation density. Using eqn. (14) and eqn. (15), they are given as,
\\[\\varepsilon_{\\sigma}=\\frac{1}{2}\\ \\left[\\rho_{0}(m-a_{1})-\\frac{1}{3}(2 \\varepsilon_{k}+m^{\\star}\\rho_{s})\\right] \\tag{16}\\]
and
\\[\\varepsilon_{\\omega}=\\frac{1}{2}\\left[\\rho_{0}(m-a_{1})-\\frac{1}{3}(4 \\varepsilon_{k}-m^{\\star}\\rho_{s})\\right], \\tag{17}\\]
where \\(\\rho_{s}\\) is the scalar density defined in eqn. (7), analytically which is given by,
\\[\\rho_{s}=\\frac{1}{\\pi^{2}}m^{\\star}\\left[k_{F}E_{F}-ln\\Big{(}\\frac{k_{F}+E_{F} }{m^{\\star}}\\Big{)}m^{\\star 2}\\right]. \\tag{18}\\]
In the above equations, \(m^{\star}=Ym\) is the effective nucleon mass and \(E_{F}=\sqrt{k_{F}^{2}+m^{\star 2}}\) is the effective energy of a nucleon carrying momentum \(k_{F}\).
From eqn. (13), the vector coupling (\\(C_{\\omega}\\)) can be readily evaluated using the relation
\\[C_{\\omega}=\\frac{2Y^{2}}{\\rho_{0}^{2}}\\varepsilon_{\\omega}, \\tag{19}\\]
with \\(\\varepsilon_{\\omega}\\) given by eqn. (17), for a specified value of \\(Y=m^{\\star}/m\\) defined at \\(\\rho_{0}\\).
Similarly, using the equation of motion for the scalar field (eqn. (4)), the scalar coupling can be calculated using the relation
\\[C_{\\sigma} = \\frac{mY}{2\\rho_{S}}\\left[(1-Y^{2})-\\frac{b}{m^{2}c_{\\omega}}(1-Y^ {2})^{2}+\\frac{c}{m^{4}c_{\\omega}^{2}}(1-Y^{2})^{3}+\\frac{2c_{\\sigma}c_{ \\omega}\\rho_{B}^{2}}{m^{2}Y^{4}}\\right]. \\tag{20}\\]
In the above expression the higher order scalar field coupling constants '\(b\)' and '\(c\)' are unknown, but the corresponding relations can be solved simultaneously to obtain the respective parameters. To compute the constants of the higher order scalar field, we use the equation of motion of the scalar field (eqn. (4)) together with eqn. (12). From eqn. (12), we get
\\[\\frac{c(1-Y^{2})}{m^{2}c_{\\omega}} = \\frac{16\\varepsilon_{\\sigma}c_{\\omega}c_{\\sigma}}{(1-Y^{2})^{3}}- \\frac{2m^{2}c_{\\omega}}{(1-Y^{2})}+\\frac{4b}{3}. \\tag{21}\\]
From the equation of motion of the scalar field, we get
\\[b = \\frac{2c_{\\sigma}c_{\\omega}^{2}\\rho_{B}^{2}}{Y^{4}(1-Y^{2})^{2}}+ \\frac{c(1-Y^{2})}{m^{2}c_{\\omega}}+\\frac{m^{2}c_{\\omega}}{(1-Y^{2})} \\tag{22}\\] \\[- \\frac{2c_{\\sigma}c_{\\omega}m\\rho_{S}}{Y(1-Y^{2})^{2}}.\\]
Substituting eqn. (21) in eqn. (22) leads us to the expression for the higher order scalar field constant '\(b\)':
\[b = \frac{6c_{\sigma}c_{\omega}m\rho_{S}}{Y(1-Y^{2})^{2}}-\frac{6c_{\sigma}c_{\omega}^{2}\rho_{B}^{2}}{Y^{4}(1-Y^{2})^{2}}-\frac{48\varepsilon_{\sigma}c_{\sigma}c_{\omega}}{(1-Y^{2})^{3}}+\frac{3m^{2}c_{\omega}}{(1-Y^{2})}. \tag{23}\]
Similarly, the higher order constant '\\(c\\)' in the scalar field can be computed from the relation,
\\[c = \\frac{8c_{\\sigma}c_{\\omega}^{2}m^{3}\\rho_{s}}{Y(1-Y^{2})^{3}}- \\frac{8c_{\\sigma}c_{\\omega}^{3}m^{2}\\rho_{B}^{2}}{Y^{4}(1-Y^{2})^{3}}-\\frac{48 \\varepsilon_{\\sigma}c_{\\sigma}c_{\\omega}^{2}}{(1-Y^{2})^{4}} \\tag{24}\\] \\[+ \\frac{2c_{\\omega}^{2}m^{4}}{(1-Y^{2})^{2}}.\\]
The calculation of \(C_{\omega}\) is straightforward, while eqns. (20), (23) and (24) can be solved simultaneously and numerically for given initial values of \(C_{\sigma}\), \(b\) and \(c\); the solution returns the set of parameter values for a desired value of \(Y\) at \(\rho_{0}\).
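One possible numerical scheme is a damped fixed-point iteration over eqns. (20), (23) and (24), sketched below. All names and the mixing parameter are ours; convergence is not guaranteed for every \(Y\), and a coupled root-finder may be preferable in practice:

```python
def fit_scalar_sector(Y, rho0=0.153, gamma=4, n_iter=500, mix=0.3):
    """Iterate eqns. (20), (23), (24) to self-consistency for (C_sigma, b, c)
    at fixed C_omega from eqn. (19); returns Table-1-style B = b/m^2, C = c/m^4."""
    e_sig, _, cw = saturation_split(Y, rho0, gamma=gamma)
    rs = rho_scalar(kf_from_rho(rho0, gamma), Y * M, gamma)
    u = 1.0 - Y**2
    cs, b, c = 6.0, 0.0, 0.0  # starting guesses
    for _ in range(n_iter):
        b = (6*cs*cw*M*rs/(Y*u**2) - 6*cs*cw**2*rho0**2/(Y**4*u**2)
             - 48*e_sig*cs*cw/u**3 + 3*M**2*cw/u)                       # eqn. (23)
        c = (8*cs*cw**2*M**3*rs/(Y*u**3) - 8*cs*cw**3*M**2*rho0**2/(Y**4*u**3)
             - 48*M**2*e_sig*cs*cw**2/u**4 + 2*cw**2*M**4/u**2)         # eqn. (24)
        cs_new = M*Y/(2*rs) * (u - b*u**2/(M**2*cw) + c*u**3/(M**4*cw**2)
                               + 2*cs*cw*rho0**2/(M**2*Y**4))           # eqn. (20)
        cs = (1.0 - mix) * cs + mix * cs_new  # damping to aid convergence
    return cs, cw, b / M**2, c / M**4
```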
Finally, for studying asymmetric matter, we need to incorporate the effect of iso-vector \\(\\rho-\\)meson and the coupling for the \\(\\rho-\\) meson has to be obtained by fixing the asymmetry energy coefficient \\(J\\approx 32\\pm 4\\ MeV\\)[34] at \\(\\rho_{0}\\). Accordingly, the \\(\\rho-\\) meson coupling constant (\\(C_{\\rho}\\)) can be fixed using the relation,
\\[J=\\frac{c_{\\rho}k_{F}^{3}}{12\\pi^{2}}+\\frac{k_{F}^{2}}{6\\sqrt{(k_{F}^{2}+m^{ \\star 2})}}\\, \\tag{25}\\]
where \\(c_{\\rho}\\equiv g_{\\rho}^{2}/m_{\\rho}^{2}\\) and \\(k_{F}=(6\\pi^{2}\\rho_{B}/\\gamma)^{1/3}\\).
Thus the model parameters are evaluated by solving equations (19), (20), (23), (24) and (25) self-consistently for the specified or desired values of the properties of symmetric nuclear matter at the saturation point. Further, it is also required that the EOS so obtained has a reasonable nuclear incompressibility, which is defined as the curvature of the energy curve at the saturation point and is given as,
\[K=9\,\rho_{0}^{2}\,\frac{\partial^{2}(\varepsilon/\rho_{B})}{\partial\rho_{B}^{2}}\Big{|}_{0}. \tag{26}\]
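Numerically, \(K\) follows from eqn. (26) by a central difference around \(\rho_{0}\), reusing `eos_point` from the earlier sketch (the step size is our choice, a compromise between truncation error and quadrature noise):

```python
def incompressibility(cs, cw, b, c, rho0=0.153, h=2.0e-3, gamma=4):
    """K (MeV) from eqn. (26) via a second-order central difference at saturation."""
    e_per_a = lambda r: eos_point(r, cs, cw, b, c, gamma)[0] / r
    d2 = (e_per_a(rho0 + h) - 2.0 * e_per_a(rho0) + e_per_a(rho0 - h)) / h**2
    return 9.0 * rho0**2 * d2 * HBARC
```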
Incompressibility is a poorly known quantity experimentally, owing to the fact that some degree of theoretical modeling enters its extraction. Apart from that, the other quantity with a large uncertainty is the nucleon effective mass. The wide range of experimentally determined values for these two quantities motivates us to analyze and study the EOS under such variations. Further, we also need to look into some indispensable elements, such as the \(\sigma-\)meson mass and the pion decay constant '\(f_{\pi}\)', if we want to achieve a proper framework for studying nuclear matter. We recall that in the present work our aim is to describe and correlate these physical quantities in a coherent and unified approach.
## IV Results and Discussions
The spontaneous breaking of chiral symmetry lends mass to the hadrons and relates them to the vacuum expectation value (VEV) of the scalar field (\(x_{0}\)), as shown in eqn. (2). It immediately follows from the third relation in eqn. (2) that the VEV of the scalar field, for which the potential has its minimum at \(f_{\pi}\), is related to the vector coupling constant \(C_{\omega}\) through \(x_{0}=f_{\pi}=m_{\omega}/g_{\omega}=1/\sqrt{C_{\omega}}\). Thus the vector coupling constant is explicitly constrained by the vacuum value of the pion decay constant. Similarly, the scalar meson mass is given by \(m_{\sigma}=m\sqrt{C_{\omega}}/\sqrt{C_{\sigma}}\). The model is thus sternly constrained and exhibits the relationship between various quantities and the VEV of the scalar field.
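These two relations provide a quick consistency check on Table 1 below; a sketch, with the helper constants defined earlier:

```python
def vacuum_observables(cs, cw, m_nucleon=939.0):
    """f_pi = 1/sqrt(C_omega) and m_sigma = m*sqrt(C_omega/C_sigma), from eqn. (2)."""
    f_pi = HBARC / np.sqrt(cw)              # MeV, since cw is in fm^2
    m_sigma = m_nucleon * np.sqrt(cw / cs)  # MeV
    return f_pi, m_sigma

# Parameter set 11 of Table 1: cs = 6.772 fm^2, cw = 1.995 fm^2
# vacuum_observables(6.772, 1.995) -> (139.7 MeV, 509.6 MeV), matching the table.
```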
In the present calculation, we take the value of the saturation density to be \(\rho_{0}=0.153fm^{-3}\) [5], which agrees with the observed charge and mass distributions of finite nuclei. Saturation implies that the pressure of the system is zero, so that the system will remain in this state if left undisturbed. The binding energy per nucleon is fixed at the empirical value \(B/A-m=-16.3\) MeV [5]. Reflecting the uncertainty in the nucleon effective mass at \(\rho_{0}\), we calculate the parameters of the present model in the range \(Y=m^{\star}/m=(0.75-0.90)\). In order to ensure the existence of a lower bound for the energy, we demand that the coefficient '\(c\)' of the quartic scalar field term remain positive. The corresponding related quantities, such as the pion decay constant, the scalar meson mass and the nuclear incompressibility, are also calculated. The asymmetry energy coefficient is fixed at \(J\approx 32\) MeV. The obtained parameters are listed in Table 1, where the relationship between the vector coupling constant and the pion decay constant can be easily visualized: the stronger the vector coupling (repulsion), the lower the value of the chiral condensate, and vice-versa. From the tabulated data, we find that the calculated sigma meson mass lies within \((340-700)\) MeV for \(m^{\star}/m=(0.75-0.90)\). Although the values obtained from the analysis of neutron scattering off lead nuclei [5; 35] are consistent with the range \(m^{\star}/m=(0.80-0.90)\), a lower nucleon effective mass is known to reproduce finite nuclei properties, such as the spin-orbit splitting, correctly [36]. Also, we find that as we move to the higher effective mass region, the incompressibility of the matter starts to fall, i.e., the EOS gets softer. However, within the incompressibility range of K = (200 - 300) MeV, the present model predicts a higher nucleon effective mass.
Nuclear matter saturation is a consequence of the interplay between the attractive (scalar) and the repulsive (vector) forces, and hence a variation in the coupling strengths affects other related properties as well. Fig. 1(A) reflects the same: there we plot the nuclear incompressibility for the evaluated parameter sets of the present model as a function of the nucleon effective mass. For a better correlation between them, the corresponding ratio of the scalar and vector couplings is also indicated.
On comparison with the incompressibility bound inferred from heavy-ion collision (HIC) experiments [37], we find that the EOS with lower nucleon effective mass is ruled out. The present model favors EOS for which the nucleon effective mass \(m^{\star}/m>0.82\), i.e., the mass of the nucleon in the nuclear medium drops by less than \(\approx 20\%\) of its vacuum value at \(\rho_{0}\). Equivalently, the agreement with the experimental flow data in the density range \(2<\rho/\rho_{0}<4.6\) seems to favor repulsion (higher effective mass) in matter at high density. From the plot, it can be seen that the EOS becomes much softer with increasing ratio \(C_{\sigma}/C_{\omega}\).
Figure 1(B) shows the variation of the incompressibility as a function of the scalar meson mass obtained for the various parameter sets. The recent experimental estimate for the scalar meson mass, \(m_{\sigma}=513\pm 32\) MeV [14], is compared with the present calculation. From the figure, we find that the EOS with \(Y=(0.84-0.86)\) (Sets 10, 11 & 12; Table 1) seem to agree with the combined constraint from the HIC flow data and the experimental meson mass range. Further, it is worth noticing that the lower the value of \(m_{\sigma}\), the lower the incompressibility of matter, and vice-versa. The heavy-ion collision estimate seems to agree with a \(\sigma-\)meson mass within \((350-550)\) MeV. We know that the nuclear incompressibility and the \(\sigma\)-meson mass are both poorly determined quantities, and therefore some correlation between the two will help to minimize the uncertainties around them.
Figure 2(A) shows the obtained sigma mass as a function of the vacuum value of the pion decay constant. It can be seen that the two quantities are inversely correlated: a lower value of \(f_{\pi}\) leads to a higher value of \(m_{\sigma}\). The experimental bound on the pion decay constant, however, favors a slightly higher value of \(m_{\sigma}\), toward the upper end of the experimental range for \(m_{\sigma}\) [14]; precisely, the constraint from \(f_{\pi}\) points to \(m_{\sigma}\approx(560\pm 22)\) MeV. Figure 2(B) shows the nucleon effective mass \(Y=m^{\star}/m\) as a function of the pion decay constant. The constraint of \(f_{\pi}\) agrees with the EOS with \(Y=(0.82-0.84)\) (Sets 8, 9 & 10; Table 1). However, the corresponding incompressibility falls in the range \(K\approx(344-440)\) MeV, which is on the higher side of the presently acceptable bounds ([8] - [12]). With combined constraints, such as those on the nuclear incompressibility inferred from HIC data [37], the limits on the sigma meson mass [14], the pion decay constant [13] and the nucleon effective mass [35], the best fit among the parameters can be extracted. Thus we choose parameter sets 9, 11 and 13 of Table 1 to study the corresponding EOS and compare it with the experimental data [37] as well as with other successful models: the relativistic mean-field NL3 parameterization [11] and the microscopic DBHF (Bonn-A) calculations [38].
## V Equation of state at \\(T=0\\)
The parameters selected for further study are highlighted in bold font in Table 1. The resulting energy per nucleon for symmetric nuclear matter is calculated for
\\begin{table}
\\begin{tabular}{c c c c c c c c c c} \\hline \\hline set & \\(c_{\\sigma}\\) & \\(c_{\\omega}\\) & \\(c_{\\rho}\\) & \\(B\\) & \\(C\\) & \\(m_{\\sigma}\\) & \\(Y\\) & \\(f_{\\pi}\\) & \\(K\\) \\\\ & \\((fm^{2})\\) & \\((fm^{2})\\) & \\((fm^{2})\\) & \\((fm^{2})\\) & \\((fm^{4})\\) & (MeV) & & (MeV) & (\\(MeV\\)) \\\\ \\hline
1 & 5.916 & 3.207 & 5.060 & 1.411 & 1.328 & 691.379 & 0.75 & 110.185 & 1098 \\\\
2 & 6.047 & 3.126 & 5.087 & 0.822 & 0.022 & 675.166 & 0.76 & 111.601 & 916 \\\\
3 & 6.086 & 3.031 & 5.107 & 0.485 & 0.174 & 662.642 & 0.77 & 113.346 & 809 \\\\
4 & 6.005 & 2.933 & 5.131 & 0.582 & 2.650 & 656.183 & 0.78 & 115.238 & 737 \\\\
5 & 6.172 & 2.825 & 5.155 & -0.261 & 0.606 & 635.287 & 0.79 & 117.403 & 638 \\\\
6 & 6.223 & 2.709 & 5.178 & -0.711 & 0.748 & 619.58 & 0.80 & 119.890 & 560 \\\\
7 & 6.325 & 2.585 & 5.200 & -1.381 & 0.089 & 600.270 & 0.81 & 122.740 & 491 \\\\
8 & 6.405 & 2.451 & 5.222 & -1.990 & 0.030 & 580.876 & 0.82 & 126.039 & 440 \\\\
**9** & **6.474** & **2.323** & **5.242** & **-2.533** & **0.300** & **562.500** & **0.83** & **129.465** & **391** \\\\
10 & 6.598 & 2.159 & 5.265 & -3.340 & 0.445 & 536.838 & 0.84 & 134.378 & 344 \\\\
**11** & **6.772** & **1.995** & **5.285** & **-4.274** & **0.292** & **509.644** & **0.85** & **139.710** & **303** \\\\
12 & 7.022 & 1.823 & 5.305 & -5.414 & 0.039 & 478.498 & 0.86 & 146.131 & 265 \\\\
**13** & **7.325** & **1.642** & **5.324** & **-6.586** & **0.571** & **444.614** & **0.87** & **153.984** & **231** \\\\
14 & 7.865 & 1.451 & 5.343 & -8.315 & 0.502 & 403.303 & 0.88 & 163.824 & 199 \\\\
15 & 8.792 & 1.249 & 5.362 & -10.766 & 0.354 & 353.960 & 0.89 & 176.552 & 168 \\\\
16 & 7.942 & 1.041 & 5.388 & -6.908 & 15.197 & 339.910 & 0.90 & 193.437 & 163 \\\\ \\hline \\hline \\end{tabular}
\\end{table}
Table 1: Parameter sets of the effective chiral model that satisfies the nuclear matter saturation properties such as binding energy per nucleon \\(B/A-m=-16.3~{}MeV\\), nucleon effective mass \\(Y=m^{\\star}/m=(0.75-0.90)\\) and the asymmetry energy coefficient is \\(J\\approx 32\\) MeV at saturation density \\(\\rho_{0}=0.153fm^{-3}\\). The nucleon, the vector meson and the isovector vector meson masses are taken to be 939 MeV, 783 MeV and 770 MeV respectively and \\(c_{\\sigma}=(g_{\\sigma}/m_{\\sigma})^{2}\\), \\(c_{\\omega}=(g_{\\omega}/m_{\\omega})^{2}\\) and \\(c_{\\rho}=(g_{\\rho}/m_{\\rho})^{2}\\) are the corresponding coupling constants. \\(B=b/m^{2}\\) and \\(C=c/m^{4}\\) are the higher order constants in the scalar field. Other derived quantities such as the scalar meson mass ‘\\(m_{\\sigma}\\)’, the pion decay constant ‘\\(f_{\\pi}\\)’ and the nuclear matter incompressibility (\\(K\\)) at \\(\\rho_{0}\\) are also given.
these parameters and is plotted in Fig. 3(A). For comparison, we plot the same for the NL3 parameterization from Relativistic Mean Field (RMF) calculations [11] and also for the microscopic DBHF (Bonn-A) parameterization [38]. In the inset, the region around saturation density is magnified, where we find nice agreement among the relativistic mean-field models near \(\rho_{0}\). Although in the case of the NL3 parameter set nuclear matter saturates at a slightly lower density (\(\rho_{0}=0.148~{}fm^{-3}\)) than the one taken in the present calculation, the binding energy per nucleon remains almost the same (\(\approx-16.3~{}MeV\)). In the case of DBHF, nuclear matter saturates at a still higher density. It is worth noticing that although the incompressibility of the parameters chosen in the present model spans (230 - 390) MeV, the resulting EOS is soft at higher densities in comparison to that predicted by the NL3 parameter set, which has \(K=271.6\) MeV. Incompressibility of nuclear matter is the measure of the degree of softness/stiffness of the EOS. Conventionally, EOS with K \(<\) 300 MeV are considered to be soft. But in the present case the EOS predicted by the effective model is much softer than the NL3 parameterization, although the incompressibility of the former is comparatively high. This can be understood if we look at the EOS in the vicinity of saturation density (inset plot): the NL3 curve compares well with the EOS with \(K=231\) MeV below saturation density, but the energy predicted is much larger at higher densities. In contrast, the EOS predicted by the effective model gets softer at higher densities. It should be interesting to study the consequences of such behavior in the astrophysical context, especially on the global properties and structure of neutron stars at physically interesting densities (2 - 5 \(\rho_{0}\)) [39; 40].
In Fig. 3(B), the nucleon effective mass in the nuclear medium is plotted as a function of baryon density up to \(6\rho_{0}\). This modification of the nucleon mass in the nuclear medium is a consequence of the scalar Dirac field and forms an essential element of the success of the relativistic phenomenology. From the plot, it is interesting to see that the nucleon experiences repulsive forces in nuclear matter at higher densities (\(\rho_{B}>2\rho_{0}\)), as a result of which the nucleon effective mass increases again for the three cases that we study presently. A careful look at Table 1 reveals the relationship between the couplings (both scalar and vector) and the resulting nucleon effective mass: the model predicts a higher nucleon effective mass if the ratio of scalar to vector coupling is larger, but the increase in \(m^{\star}\) is slower thereafter, which reflects the relative importance of the attractive force at high densities. At saturation density, the present model results in a much higher nucleon effective mass in comparison to NL3 (\(m^{\star}/m=0.60\)) and DBHF (\(m^{\star}/m=0.678\)).
Fig. 4(A) displays the pressure as a function of baryon density up to nearly \(6\rho_{0}\) for the selected parameters of the model for symmetric nuclear matter. The shaded region corresponds to the experimental HIC data [37] for symmetric nuclear matter (SNM). Among the three theoretical calculations shown, the EOS with Y = 0.85 & 0.87 agree very well with the collision data; in particular, the third set (K = 231 MeV) agrees with the flow data over the entire density span \(2<\rho_{B}/\rho_{0}<4.6\). We now proceed to calculate the EOS of pure neutron matter (PNM) by taking the spin degeneracy \(\gamma=2\) in eqns. (8) and (9). The inclusion of the \(\rho-\)meson does not seem to affect the EOS substantially, so we refrain from including it. In Fig. 4(B), the case of pure neutron matter (PNM) is compared with the experimental flow data. The experimental flow data are categorized as stiff or soft based on whether the density dependence of the symmetry energy term is strong or weak [41]. The EOS predicted by the present model seems to lie in the softer regime. However, the EOS with \(Y=0.87,K=231\) MeV, though it satisfies the combined constraint rather well, is not consistent with the vacuum value of the pion decay constant.
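The SNM/PNM comparison of Fig. 4 amounts to a single change of the degeneracy factor in the earlier `eos_point` sketch; for instance, with parameter set 11 at densities of \(2-4\ \rho_{0}\) (the root bracket in `eos_point` may need widening at the highest densities):

```python
# Symmetric nuclear matter (gamma = 4) vs pure neutron matter (gamma = 2)
b11, c11 = -4.274 * M**2, 0.292 * M**4  # dimensionless b, c from Table 1's B, C
for rho in (0.306, 0.459, 0.612):       # 2, 3, 4 rho_0 in fm^-3
    _, p_snm = eos_point(rho, 6.772, 1.995, b11, c11, gamma=4)
    _, p_pnm = eos_point(rho, 6.772, 1.995, b11, c11, gamma=2)
    print(rho, p_snm * HBARC, p_pnm * HBARC)  # pressures in MeV/fm^3
```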
## VI Summary and Conclusions
The effective chiral model provides a natural framework to interlink the standard state properties of nuclear matter with the inherent fundamental constants such as the pion decay constant and the \\(\\sigma-\\)meson mass within a unified approach. With this motivation, the parameters
Figure 3: (Color online) (A) - Binding energy per nucleon of symmetric nuclear matter plotted as a function of baryon density up to nearly \(5\rho_{0}\). For comparison, we also plot the same for the NL3 parameter set from Relativistic Mean Field theory [11] as well as the EOS from DBHF (Bonn-A) [38]. The inset plot displays the curves in the vicinity of nuclear saturation. (B) - Variation of the nucleon effective mass in medium as a function of the total baryon density for symmetric nuclear matter, for the selected parameters of the present model.
of the model are evaluated in the mean-field ansatz by fixing the nuclear matter saturation properties defined at \(\rho_{0}\) and varying the nucleon effective mass in the range \(Y=m^{\star}/m=(0.75-0.90)\). The resulting equation of state not only satisfies the saturation properties reasonably well but also relates the various aforesaid fundamental quantities to the vacuum value of the scalar field. One of the unique features of the model is that the mass of the vector meson (\(m_{\omega}\)) is generated dynamically, as a result of which the effective mass of the nucleon acquires a density dependence on both the scalar and vector fields. The interplay between the scalar and vector forces results in the increase of \(m^{\star}\) at \(\approx 3\rho_{0}\) (Fig. 3(B)), as a consequence of which the resulting EOS is much softer at higher densities.
We also discussed the implications of imposing fundamental constraints on the evaluated model parameters. Among the various derived quantities, we find that the pion decay constant is an experimentally well-known quantity in comparison to the nuclear incompressibility and the \(\sigma-\)meson mass, and it can therefore put a stringent constraint on the model parameters. Employing this constraint (\(f_{\pi}=130\pm 5\) MeV) leaves us with only a few options among the wide range of parameters listed in Table 1, while the observed range of the \(\sigma-\)meson mass does not rule out any. The effective mass determined experimentally from the scattering of neutrons off \(Pb\) nuclei [35] also goes well with the present model; both favor a higher value of the nucleon effective mass. In the present calculation we find that a higher nucleon effective mass is endowed with a reasonable incompressibility too. However, the parameter set that agrees well with the limits of \(f_{\pi}\) (\(m^{\star}=0.83m\); Set 9) has an incompressibility at the upper bound of the value inferred from the flow data. On a comparative analysis of the resulting EOS with the HIC data for symmetric nuclear matter as well as pure neutron matter, the parameter set with \(Y=0.85;K\approx 300\) MeV seems to be the ideal parameterization of the present model. The resulting scalar meson mass, \(m_{\sigma}\approx 510\) MeV, is also consistent with the experimentally observed masses [42; 43; 14]. Further, a higher value of the incompressibility, \(K\approx 300\) MeV, is known to correctly predict the isoscalar giant resonance energies in medium and heavy nuclei in the relativistic framework [44]. On account of the aforesaid arguments and constraints, the model seems to work very well within the present approach. However, the predictability of the model needs to be tested at finite temperature and high densities; work is in progress in this direction [45]. In this regard, it will also be interesting to study the medium effects on the underlying couplings [46] as well as on the meson masses [47] and the pion decay constant [48].
## VII Acknowledgment
One of the authors (HM) would like to thank the Institut für Theoretische Physik, University of Frankfurt, for warm hospitality and the Alexander von Humboldt Foundation, Germany, for support during this period.
Figure 4: (Color online) Comparison of the heavy-ion collision estimate [37] with the theoretical prediction of the effective model, (A) for the symmetric nuclear matter (SNM) case and (B) for the pure neutron matter (PNM) case.
## References
* (1) J. D. Walecka, Ann. Phys. **83**, 491 (1974); Phys. Lett. **79B** 10 (1978).
* (2) B. D. Serot and J. D. Walecka, Adv. Nucl. Phys. **16**, 1 (1986); Int. J. Mod. Phys. E **6**, 515 (1997).
* (3) F. Coester, S. Cohen, B. D. Day and C. M. Vincent, Phys. Rev. C **1** 769 (1970).
* (4) R. Machleidt, Adv. Nucl. Phys. **19** 189 (1989).
* (5) N. K. Glendenning, _Compact Stars: Nuclear Physics, Particle Physics, and General Relativity_, Springer-Verlag, New York (2000).
* (6) S. L. Shapiro and S. A. Teukolsky, _Black Holes, White Dwarfs, and Neutron Stars_, Wiley, New York (1983).
* (7) N. K. Glendenning, Phys. Rev. C **37** 2733 (1988).
* (8) G. Colo, P. F. Bortignon, N. Van Giai, A. Bracco and R. A. Broglia, Phys. Lett. B **276** 279 (1992).
* (9) I. Hamamoto, H. Sagawa, and X. Z. Zhang, Phys. Rev. C **56** 3121 (1997).
* (10) J. P. Blaizot, J. F. Berger, J. Decharge, and M. Girod, Nucl. Phys. A **591** 435 (1995).
* (11) G. A. Lalazissis, J. Konig, and P. Ring, Phys. Rev. C **55** 540 (1997).
* (12) D. Vretenar, A. Wandelt, and P. Ring, Phys. Lett. B **487** 334 (2000).
* (13) W. M. Yao et. al. (Particle Data Group), J. Phys. G **33** 1 (2006) and 2007 partial update for the 2008 edition.
* (14) H. Muramatsu et. al, Phys. Rev. Lett. **89** 251802 (2002).
* (15) J. Boguta and A. R. Bodmer, Nucl. Phys. A **292** 413 (1977).
* (16) M. Gell-Mann and M. Levy, Nuovo Cim. **16** 705 (1960).
* (17) T. D. Lee and G. C. Wick, Phys. Rev. D **9** 2291 (1974).
* (18) J. Boguta, Phys. Lett. B **120** 34 (1983).
* (19) J. Boguta, Phys. Lett. B **128** 19 (1983).
* (20) P. Papazoglou, J. Schaffner, S. Schramm, D. Zschiesche, Horst Stoecker, and W. Greiner, Phys. Rev. C **55**, 1499 (1997).
* (21) P. Papazoglou, S. Schramm, J. Schaffner-Bielich, Horst Stoecker, and W. Greiner, Phys. Rev. C **57**, 2576 (1998).
* (22) V. Dexheimer, S. Schramm, and D. Zschiesche, Phys. Rev. C **77**, 025803 (2008).
* (23) Volker Koch, _Aspects of Chiral Symmetry_; LBNL-39463/ UC-413.
* (24) B. W. Lee, _Chiral Dynamics_, Gordon and Breach, New York (1972).
* (25) T. K. Jha, P. K. Raina, P. K. Panda and S. K. Patra, Phys. Rev. **C 74** 055803 (2006); Erratum- Phys. Rev. **C 75** 029903 (2007).
* (26) N. K. Glendenning, Ann. Phys. **168** 246 (1986).
* (27) N. K. Glendenning, Nucl. Phys. A **480** 597 (1988).
* (28) P. K. Sahu, R. Basu and B. Datta, Astrophys. J. **416** 267 (1993).
* (29) P. K. Sahu and A. Ohnishi, Prog. of Theo. Phys. **104** 1163 (2000).
* (30) P. K. Sahu, T. K. Jha, K. C. Panda and S. K. Patra, Nucl Phys. A, **733** 314 (2004).
* (31) T. K. Jha, H. Mishra and V. Sreekanth, Phys. Rev. **C 77** 045801 (2008).
* (32) B. M. Waldhauser, J. A. Maruhn, H. Stocker and W. Greiner, Phys. Rev. C **38** 1003 (1988).
* (33) K. C. Chung, C. S. Wang, A. J. Santiago and J. W. Zhang, Eur. Phys. J. A **12** 161 (2001).
* (34) P. Moller, W.D. Myers, W.J. Swiatecki and J. Treiner, At. Data Nucl. Data Tables **39** 225 (1988).
* (35) C. H. Johnson, D. J. Horen and C. Mahaux, Phys. Rev. C **36** 2252 (1987).
* (36) R. J. Furnstahl, J. J. Rusnak and B. D. Serot, Nucl. Phys. A **632** 607 (1998).
* (37) P. Danielewicz, R. Lacey and W. G. Lynch, Science **298** 1592 (2002).
* (38) G. Q. Li, R. Machleidt and R. Brockmann, Phys. Rev. C **45** 2782 (1992).
* (39) N. K. Glendenning, Phys. Lett. **B114** 392 (1982); Astrophys. J. **293** 470 (1985); Z. Phys. **A 326** 57 (1987).
* (40) M. Prakash, I. Bombaci, M. Prakash, P.J. Ellis, J.M. Lattimer and R. Knorren, Phys. Rep. **280** 1 (1997).
* (41) M. Prakash, T. L. Ainsworth, J. M. Lattimer, Phys. Rev. Lett. **61** 2518 (1988).
* (42) E. M. Aitala et. al., Phys. Rev. Lett. **86** 770 (2001).
* (43) M. Ishida et. al., Phys. Lett. **B 518** 203 (2001).
* (44) Z. Ma, N. V. Giai, H. Toki and M. L.Huillier, Phys. Rev. C **55** 2384 (1997).
* (45) T. K. Jha and H. Mishra, (_in preparation_).
* (46) S. Typel and H. H. Wolter, Nucl. Phys. A **656** 331 (1999).
* (47) Bao-Xi Sun et. al., Int. J. Mod. Phys. E **12** 543 (2003).
* (48) A. Barducci et al., Phys. Rev. D **42** 1757 (1990).
PACS: 21.65.-f, 13.75.Cs, 97.60.Jd, 21.30.Fe